Q: Where has my ASP.NET State Service disappeared to? I set my ASP.NET State Service to automatic start the other day on a hosted VPS Win 2003 server. I came back today and the service has gone completely missing!? Any ideas why it has gone and how to get it back? Thanks! A: You should ask your hosting service provider, they may have removed it (for some reason). A: I have had the same problem, that the ASP.NET State Service disappeared from the Administrative Tools / Services list. And the command "net start aspnet_state" didn't work either. For me it worked fine after doing a repair on the currently latest .NET version, .NET 4.0 in my case.
{ "language": "en", "url": "https://stackoverflow.com/questions/147909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to check the overall health of database using Toad? Anyone have any idea? And any open source software that also seems to perform this kind of functionality? A: The DBA version of Toad has such a feature. In my version, it is under the DBA menu, and is called "Health Check". Screenshot: http://toadsoft.com/get2know96/Web/ A: From Quest Software: http://www.youtube.com/watch?v=02LmtxyRVJ8 In Toad 11: Main Menu -> Database -> Diagnose -> DB Health Check. You can learn more about it at ToadWorld, though this may be for a newer version.
{ "language": "en", "url": "https://stackoverflow.com/questions/147910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Monitor SQL currently in shared pool using Toad Anyone have any idea? And any open source software that also seems to perform this kind of functionality? A: Toad does it. It is under Tools | SGA Trace/Optimization. Here is your open source solution :-)

select distinct
    vs.sql_text, vs.sharable_mem, vs.persistent_mem, vs.runtime_mem,
    vs.sorts, vs.executions, vs.parse_calls, vs.module, vs.action,
    vs.buffer_gets, vs.disk_reads, vs.version_count, vs.loaded_versions,
    vs.open_versions, vs.users_opening, vs.loads, vs.users_executing,
    vs.invalidations, vs.serializable_aborts, vs.command_type,
    to_char(to_date(vs.first_load_time,'YYYY-MM-DD/HH24:MI:SS'),'MM/DD HH24:MI:SS') first_load_time,
    rawtohex(vs.address) address, vs.hash_value hash_value, vs.parsing_user_id,
    rows_processed, optimizer_mode, vs.is_obsolete, vs.elapsed_time, vs.cpu_time,
    vs.child_latch, vs.fetches, vs.program_id, vs.java_exec_time, vs.plsql_exec_time,
    vs.user_io_wait_time, vs.cluster_wait_time, vs.concurrency_wait_time,
    vs.application_wait_time, vs.direct_writes, vs.end_of_fetch_count
from v$sqlarea vs
{ "language": "en", "url": "https://stackoverflow.com/questions/147913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Monitoring a database instance Anyone have any idea? And any open source software that also seems to perform this kind of functionality? A: I'm not sure what you need, but would http://www.nagios.org/ be enough for your purposes? A: What database? What platform? If it's MySQL, there are many monitoring applications around - for example, the MySQL GUI Tools include a Health Monitor widget (on OS X). Also, phpMyAdmin shows statistics from the MySQL server. You could also write a simple script that connects to the database, executes some trivial command and checks that it returns a known value. If it doesn't, send an alert somewhere. A: This depends a lot on what kind of database you have and what you're monitoring for. Things you might be monitoring for:

* Is the database still up?
* How heavily loaded is the database?
* Deadlocks?
* Security events?
* Exceptions?

Perhaps you could edit your question to fill in a bit more info? A: Have you looked at OpenNMS? A: You might want to look at Cacti (http://www.cacti.net/what_is_cacti.php), which is a general-purpose tool for giving graphical representations of any type of data. We use it to see how healthy our webservers and MySQL servers are. But it does not have any alert system (in case something critical happens and you need to take immediate action) as far as I know, for which you might want to consider Nagios, as pointed out by someone already. See the screenshots at the link below to get an idea; they show various graphs of the state of a MySQL server over a period of time: http://www.xaprb.com/blog/2008/05/25/screenshots-of-improved-mysql-cacti-templates/ If your database is other than MySQL, then google for "your_database_name cacti" to find templates for your database. A: I'm not sure if I understand your question, but I use Nagios to monitor just about anything on my server... A: What about Nagios?
Here are some recommended scripts for MySQL, MS-SQL, Oracle: http://www.consol.de/opensource/nagios/ A: +1 to the suggestion that you give us some more details as to what you want to monitor and your platform. I use Hyperic and am largely happy. I also looked at OpenNMS, same with Nagios; I'd suggest downloading the three of them, or doing a little reading about them, and then picking one and going for it. Hyperic in my opinion was a lot easier to get implemented than Nagios; OpenNMS I didn't try for myself. Those three are, as far as I know, the big open source monitoring solutions.
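The "simple script that connects, runs a trivial command and checks the result" suggestion above can be sketched in a few lines of Python. sqlite3 is used purely as a self-contained stand-in driver; for a real monitor you would swap in your own database driver and send the alert somewhere useful (the function name and alert handling here are illustrative, not from any of the tools mentioned):

```python
import sqlite3

def check_database(connect):
    """Return True if the database answers a trivial query with the expected value."""
    try:
        conn = connect()
        try:
            # The canonical liveness probe: SELECT 1 should come back as 1.
            (value,) = conn.execute("SELECT 1").fetchone()
            return value == 1
        finally:
            conn.close()
    except Exception:
        # Connection refused, timeout, bad credentials... all count as unhealthy.
        return False

if __name__ == "__main__":
    healthy = check_database(lambda: sqlite3.connect(":memory:"))
    print("database OK" if healthy else "ALERT: database check failed")
```

Cron this every minute and route the failure branch to mail or Nagios's passive-check interface and you have the poor man's monitor the answer describes.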
{ "language": "en", "url": "https://stackoverflow.com/questions/147917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SBCL on Vista crashes. Do you know how to make it work? I've searched a lot for an answer to this question on the web: they say it's true, SBCL doesn't work under Vista. But I really need to work with Lisp on my home Vista laptop, and a VM doesn't really help... And CLISP is not so interesting because of speed... If you have any recommendation, please share! A: Have you seen these articles? http://robert.zubek.net/blog/2008/04/09/sbcl-emacs-windows-vista/ http://brainrack.wordpress.com/2008/05/29/running-sbcl-on-windows/ A: Make sure you have DEP off for SBCL.
{ "language": "en", "url": "https://stackoverflow.com/questions/147918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can you create collapsible #Region like scopes in C++ within VS 2008? I miss it so much (used it a lot in C#). Can you do it in C++? A: Yes, you can. See here:

#pragma region Region_Name
//Your content.
#pragma endregion Region_Name

A: The Visual Assist add-in for VC supports regions for C++. Don't know if 2008 has built-in regions for C++ though. A: For the VBers:

#Region "identifier_string"
' Your content
#End Region

https://msdn.microsoft.com/en-us/library/sd032a17.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/147920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to use SQLab Xpert Tuning to tune SQL for better performance? Anyone have any idea? And any open source software that also seems to perform this kind of functionality? A: I am not sure what you are asking; it is pretty straightforward: you type in your SQL and SQLab Xpert tries many combinations of rewriting your query and runs them all, selecting the fastest. I find the approach a little dubious; you will probably get something that runs faster than what you originally had, but probably not the fastest possible (unless it is very simple SQL). I prefer to hand tune. The Oracle performance manual, http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/toc.htm, Chapters 11-20, has all the information you need, in my opinion better than the shotgun approach SQLab Xpert takes.
{ "language": "en", "url": "https://stackoverflow.com/questions/147921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Anyone have a commit notification hook script that will send email upon commit? Can anyone share such a script? A: The default one is called commit-email.pl and is included when you install Subversion. But here is one in Ruby:

#!/usr/bin/ruby -w
# A Subversion post-commit hook. Edit the configurable stuff below, and
# copy into your repository's hooks/ directory as "post-commit". Don't
# forget to "chmod a+x post-commit".
# ------------------------------------------------------------------------
# You *will* need to change these.
address="FOO@SOME_DOMAIN.com"
sendmail="/usr/sbin/sendmail"
svnlook="/usr/bin/svnlook"
# ------------------------------------------------------------------------

require 'cgi'

# Subversion's commit-email.pl suggests that svnlook might create files.
Dir.chdir("/tmp")

# What revision in what repository?
repo = ARGV.shift()
rev = ARGV.shift()

# Get the overview information.
info = `#{svnlook} info #{repo} -r #{rev}`
info_lines = info.split("\n")
author = info_lines.shift
date = info_lines.shift
info_lines.shift
comment = info_lines

# Output the overview.
body = "<p><b>#{author}</b> #{date}</p>"
body << "<p>"
comment.each { |line| body << "#{CGI.escapeHTML(line)}<br/>\n" }
body << "</p>"
body << "<hr noshade>"

# Get and output the patch.
changes = `#{svnlook} diff #{repo} -r #{rev}`
body << "<pre>"
changes.each_line do |top_line|
  top_line.split("\n").each do |line|
    color = case
            when line =~ /^Modified: / || line =~ /^=+$/ || line =~ /^@@ / then "gray"
            when line =~ /^-/ then "red"
            when line =~ /^\+/ then "blue"
            else "black"
            end
    body << %Q{<font style="color:#{color}">#{CGI.escapeHTML(line)}</font><br/>\n}
  end
end
body << "</pre>"

# Write the header.
header = ""
header << "To: #{address}\n"
header << "From: #{address}\n"
header << "Subject: [SVN] #{repo} revision #{rev}\n"
header << "Reply-to: #{address}\n"
header << "MIME-Version: 1.0\n"
header << "Content-Type: text/html; charset=UTF-8\n"
header << "Content-Transfer-Encoding: 8bit\n"
header << "\n"

# Send the mail.
begin
  fd = open("|#{sendmail} #{address}", "w")
  fd.print(header)
  fd.print(body)
  fd.close
rescue
  exit(1)
end

# We're done.
exit(0)

A: For some reason, the ruby script and the default hook script didn't work for me. This might be due to some oddities with our mail server, but I'll include the important part here anyway:

#!/bin/sh
REPOS="$1"
REV="$2"

svnnotify --repos-path "$REPOS" --revision "$REV" --with-diff \
  --to mailinglist@server.domain --smtp mailserver.domain \
  --from svn@server.domain -VVVVVVVVV -P "[repository_name]"

The -VVVVVVV part displays very verbose messages if you want to test the command outside of the script. It should be removed in the actual script. Of course, for this to work you'll need to install svnnotify. You can install this by first installing cpan, which should come with perl. Then you need to launch cpan and install the SVN::Notify library.

$ cpan
cpan> install SVN::Notify

Note that the '$' and the 'cpan>' parts are just prompts; you don't need to type them. This solution was much more attractive for me, because it gave detailed error messages which were instrumental in sorting out those problems with the mail server I mentioned. We also have multiple repositories, so copying a whole program / script into each directory would have been redundant. Your mileage may vary. The text in the code block at the top should be placed in a text file named "post-commit". This file should be located at /path/to/svn/repos/repository_name/hooks and marked as executable. A: In the hooks directory in your svn repository, you'll find a post-commit.tmpl script. Copy it to be named "post-commit" and edit it to suit.
Usually it will run the commit-email.pl script that comes with subversion; that will also require editing to set things how you want. A:

#!/bin/ksh
#
# This is a custom post-commit for sending email
# when an svn repo is changed.
#
rcpts="foo@bar.edu, baz@bar.edu"
repodir=$1
revision=$2
author=`/usr/bin/svnlook author -r $revision $repodir`
date=`/usr/bin/svnlook date -r $revision $repodir`
log=`/usr/bin/svnlook log -r $revision $repodir`
info=`/usr/bin/svnlook changed -r $revision $repodir`
repo=${repodir##*/}
subject="$repo svn updated by $author"
url="https://myserver.bar.edu/svn/$repo"

/usr/bin/mail -s "$subject" "$rcpts"<<EOM
repository: $url
date: $date
username: $author
revision: $revision
comment: $log

$info
EOM

A: Try this

/usr/bin/svnnotify --revision "$REV" --repos-path "$REPOS" \
  --subject-cx --subject-prefix "[Project:commit] " --max-sub-length 128 \
  --with-diff --handler Alternative --alt HTML::ColorDiff \
  --to 'abc@xyz.com' --from 'svn@xyz.com' --set-sender
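For completeness, the same hook is easy to express in Python. This is a minimal sketch assuming the usual svnlook location and the standard post-commit arguments (repository path and revision); the helper names and the choice to print rather than mail are mine, so wire in sendmail or smtplib as the scripts above do:

```python
import subprocess
import sys

SVNLOOK = "/usr/bin/svnlook"  # assumed path, matching the scripts above

def svnlook(subcommand, repo, rev):
    """Run an svnlook subcommand for the given repo/revision and return its output."""
    return subprocess.check_output(
        [SVNLOOK, subcommand, repo, "-r", str(rev)], text=True
    ).strip()

def build_message(repo, rev, author, date, log, changed):
    """Assemble a plain-text notification body like the ksh script's heredoc."""
    return (
        f"repository: {repo}\n"
        f"revision: {rev}\n"
        f"username: {author}\n"
        f"date: {date}\n"
        f"comment: {log}\n\n"
        f"{changed}\n"
    )

if __name__ == "__main__" and len(sys.argv) == 3:
    repo, rev = sys.argv[1], sys.argv[2]
    body = build_message(repo, rev,
                         svnlook("author", repo, rev),
                         svnlook("date", repo, rev),
                         svnlook("log", repo, rev),
                         svnlook("changed", repo, rev))
    print(body)  # replace with your mail delivery of choice
```

Like the shell versions, it would live as hooks/post-commit (chmod a+x) inside each repository.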
{ "language": "en", "url": "https://stackoverflow.com/questions/147924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way to get the name of a folder that doesn't exist? What's the best way to get a string containing a folder name that I can be certain does not exist? That is, if I call DirectoryInfo.Exists for the given path, it should return false. EDIT: The reason behind it is I am writing a test for an error checker; the error checker tests whether the path exists, so I wondered aloud on the best way to get a path that doesn't exist. A: Name it after a GUID - just take out the illegal characters. A: There isn't really any way to do precisely what you say you want to do. If you think about it, you see that even after the call to DirectoryInfo.Exists has returned false, some other program could have gone ahead and created the directory - this is a race condition. The normal method to handle this is to create a new temporary directory - if the creation succeeds, then you know it hadn't existed before you created it. A: Well, without creating the directory, all you can be sure of is that it didn't exist a few microseconds ago. Here's one approach that might be close enough:

string path = Path.GetTempFileName();
File.Delete(path);
Directory.CreateDirectory(path);

Note that this creates the directory to block against thread races (i.e. another process/thread stealing the directory from under you). A: What I ended up using:

using System.IO;
string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());

(Also, it doesn't seem like you need to strip out chars from a GUID - they generate legal filenames.) A: Well, one good bet will be to concatenate strings like the user name, today's date, and time down to the millisecond. I'm curious though: Why would you want to do this? What should it be for? A: Is this to create a temporary folder or something? I often use Guid.NewGuid to get a string to use for the folder name that you want to be sure does not already exist.
A: I think you can be close enough:

string directoryName = Guid.NewGuid().ToString();

Since GUIDs are fairly random, you should not get two identical directory names. A: Using a freshly generated GUID within a namespace that is also somewhat unique (for example, the name of your application/product) should get you what you want. For example, the following code is extremely unlikely to fail:

string ParentPath = System.IO.Path.Combine(Environment.GetEnvironmentVariable("TEMP"), "MyAppName");
string UniquePath = System.IO.Path.Combine(ParentPath, Guid.NewGuid().ToString());
System.IO.Directory.CreateDirectory(UniquePath);
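The two strategies in the answers, a GUID-style name that almost certainly doesn't exist versus actually creating a temporary directory to close the race window, translate directly to Python for comparison (tempfile and uuid are the stdlib counterparts of the C# calls above):

```python
import os
import tempfile
import uuid

# GUID approach, as in the C# answers: a random name under the temp
# directory that is vanishingly unlikely to exist (but not guaranteed).
candidate = os.path.join(tempfile.gettempdir(), str(uuid.uuid4()))
print(os.path.exists(candidate))  # False

# Race-free approach: create the directory atomically, so no other
# process can grab the name between the existence check and the use.
created = tempfile.mkdtemp()
print(os.path.isdir(created))  # True - it exists because we created it
os.rmdir(created)  # clean up
```

For the questioner's test-an-error-checker use case the first variant is usually enough; the second is what you want when the directory will actually be used.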
{ "language": "en", "url": "https://stackoverflow.com/questions/147925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C# - Sending messages to Google Chrome from C# application I've been searching around, and I haven't found how I would do this from C#. I was wanting to make it so I could tell Google Chrome to go Forward, Back, Open New Tab, Close Tab, Open New Window, and Close Window from my C# application. I did something similar with WinAmp using

[DllImport("user32", EntryPoint = "SendMessageA")]
private static extern int SendMessage(int Hwnd, int wMsg, int wParam, int lParam);

and a few others. But I don't know what message to send or how to find what window to pass it to, or anything. So could someone show me how I would send those 6 commands to Chrome from C#? Thanks! EDIT: Ok, I'm getting voted down, so maybe I wasn't clear enough, or people are assuming I didn't try to figure this out on my own. First off, I'm not very good with the whole DllImport stuff. I'm still learning how it all works. I found how to do the same idea in WinAmp a few years ago, and I was looking at my code. I made it so I could skip a song, go back, play, pause, and stop WinAmp from my C# code. I started by importing:

[DllImport("user32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr FindWindow([MarshalAs(UnmanagedType.LPTStr)] string lpClassName, [MarshalAs(UnmanagedType.LPTStr)] string lpWindowName);

[DllImport("user32.dll", CharSet = CharSet.Auto)]
static extern int SendMessageA(IntPtr hwnd, int wMsg, int wParam, uint lParam);

[DllImport("user32.dll", CharSet = System.Runtime.InteropServices.CharSet.Auto)]
public static extern int GetWindowText(IntPtr hwnd, string lpString, int cch);

[DllImport("user32", EntryPoint = "FindWindowExA")]
private static extern int FindWindowEx(int hWnd1, int hWnd2, string lpsz1, string lpsz2);

[DllImport("user32", EntryPoint = "SendMessageA")]
private static extern int SendMessage(int Hwnd, int wMsg, int wParam, int lParam);

Then the code I found to use this used these constants for the messages I send.
const int WM_COMMAND = 0x111;
const int WA_NOTHING = 0;
const int WA_PREVTRACK = 40044;
const int WA_PLAY = 40045;
const int WA_PAUSE = 40046;
const int WA_STOP = 40047;
const int WA_NEXTTRACK = 40048;
const int WA_VOLUMEUP = 40058;
const int WA_VOLUMEDOWN = 40059;
const int WINAMP_FFWD5S = 40060;
const int WINAMP_REW5S = 40061;

I would get the hwnd (the program to send the message to) by:

IntPtr hwnd = FindWindow(m_windowName, null);

then I would send a message to that program:

SendMessageA(hwnd, WM_COMMAND, WA_STOP, WA_NOTHING);

I assume that I would do something very similar to this for Google Chrome, but I don't know what some of those values should be, and I googled around trying to find the answer, but I couldn't, which is why I asked here. So my question is how do I get the values for m_windowName and WM_COMMAND, and then the values for the different commands: forward, back, new tab, close tab, new window, close window? A: You can get the window name easily using Visual Studio's Spy++ and pressing CTRL+F, then finding chrome. I tried it and got "Chrome_VistaFrame" for the outer window. The actual window with the webpage in it is "Chrome_RenderWidgetHostHWND". As far as WM_COMMAND goes - you'll need to experiment. You'll obviously want to send button clicks (WM_LBUTTONDOWN, off the top of my head). As the back/forward buttons aren't their own windows, you'll need to figure out how to do this by simulating a mouse click at a certain x,y position so Chrome knows what you're doing. Or you could send the keyboard shortcut equivalent for back/forward and so on. An example I wrote a while ago does this with Trillian and WinAmp: sending messages to windows via c# and winapi There's also tools out there to macro out this kind of thing already, using a scripting language - AutoIt is one I've used: autoit.com A: Ok, here's what I've got so far... I kinda know what I need to do, but it's just a matter of doing it now...
Here's the window from Spy++. I locked onto the Chrome_RenderWidgetHostHWND and clicked the Back button on my keyboard. Here's what I got: So here are my assumptions, and I've been playing with this forever now, I just can't figure out the values:

IntPtr hWnd = FindWindow("Chrome_RenderWidgetHostHWND", null);
SendMessage(hWnd, WM_KEYDOWN, VK_BROWSER_BACK, 0);
SendMessage(hWnd, WM_KEYUP, VK_BROWSER_BACK, 0);

Now, I just don't know what I should make the WM_KEYDOWN/UP values or the VK_BROWSER_BACK/FORWARD values... I tried this:

const int WM_KEYDOWN = 0x100;
const int WM_KEYUP = 0x101;
const int VK_BROWSER_BACK = 0x6A;
const int VK_BROWSER_FORWARD = 0x69;

The latter two values I got from the image I just showed, the scan codes for those two keys. I don't know if I did it right though. The former two values I got after searching Google for the WM_KEYDOWN value, and someone used &H100 and &H101 for the two values. I've tried several other random ideas I've seen floating around. I just can't figure this out. Oh, and here's the SendMessage method:

[DllImport("user32.dll", CharSet = CharSet.Auto)]
static extern int SendMessage(IntPtr hwnd, int wMsg, int wParam, uint lParam);

A: This is a great site for interop constants: pinvoke Another way of finding the values is to search koders.com, using C# as the language, for WM_KEYDOWN or the constant you're after: Koders.com search &H values look like that's from VB(6).
pinvoke and koders both return results for VK_BROWSER_FORWARD:

private const UInt32 WM_KEYDOWN = 0x0100;
private const UInt32 WM_KEYUP = 0x0101;
public const ushort VK_BROWSER_BACK = 0xA6;
public const ushort VK_BROWSER_FORWARD = 0xA7;
public const ushort VK_BROWSER_REFRESH = 0xA8;
public const ushort VK_BROWSER_STOP = 0xA9;
public const ushort VK_BROWSER_SEARCH = 0xAA;
public const ushort VK_BROWSER_FAVORITES = 0xAB;
public const ushort VK_BROWSER_HOME = 0xAC;

(It's funny how many wrong definitions of VK constants are floating about, considering VK_* are 1-byte 0-255 values, and people have made them uints.) Looks slightly different from your consts. I think the function you're after is SendInput (but I haven't tried it) as it's a virtual key.

[DllImport("User32.dll")]
private static extern uint SendInput(uint numberOfInputs, [MarshalAs(UnmanagedType.LPArray, SizeConst = 1)] KEYBOARD_INPUT[] input, int structSize);

Explanation of the parameters:

* nInputs - Number of structures in the pInputs array.
* pInputs - Pointer to an array of INPUT structures. Each structure represents an event to be inserted into the keyboard or mouse input stream.
* cbSize - Specifies the size, in bytes, of an INPUT structure. If cbSize is not the size of an INPUT structure, the function fails.
This needs a KEYBOARD_INPUT type:

[StructLayout(LayoutKind.Sequential)]
public struct KEYBOARD_INPUT
{
    public uint type;
    public ushort vk;
    public ushort scanCode;
    public uint flags;
    public uint time;
    public uint extrainfo;
    public uint padding1;
    public uint padding2;
}

And finally a sample, which I haven't tested if it works:

/* typedef struct tagKEYBDINPUT {
    WORD wVk;
    WORD wScan;
    DWORD dwFlags;
    DWORD time;
    ULONG_PTR dwExtraInfo;
} KEYBDINPUT, *PKEYBDINPUT; */

public static void sendKey(int scanCode, bool press)
{
    KEYBOARD_INPUT[] input = new KEYBOARD_INPUT[1];
    input[0] = new KEYBOARD_INPUT();
    input[0].type = INPUT_KEYBOARD;
    input[0].vk = VK_BROWSER_BACK;
    uint result = SendInput(1, input, Marshal.SizeOf(input[0]));
}

Also you'll need to focus the Chrome window using SetForegroundWindow. A: Start your research at http://dev.chromium.org/developers EDIT: Sending a message to a window is only half of the work. The window has to respond to that message and act accordingly. If that window doesn't know about a message or doesn't care at all, you have no chance to control it by sending window messages. You're looking at an implementation detail of how you remote-controlled Winamp. Sending messages is just one way to do it, and it's the way the Winamp developers chose. Those messages you're using are user-defined messages that have a specific meaning only to Winamp. What you have to do in the first step is to find out if Chromium supports some kind of remote controlling and what those mechanisms are.
{ "language": "en", "url": "https://stackoverflow.com/questions/147929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Create a tag upon every build of the application? Does anyone do this in an Ant build? Meaning every time a certain target is called, e.g. build-sit, a tag will be created in the SVN repository to reference that particular SIT version. Any idea how to go about doing that? A: http://subclipse.tigris.org/svnant/svn.html it is easy to integrate.
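Wiring the svnant task from that answer into a target ends up looking roughly like this. This is a sketch only: the taskdef resource name varies between svnant releases (check the version you download), and the properties (${svn.repo}, ${svn.user}, ${build.number}, the svnant.classpath path reference) are placeholders you would define yourself. A tag in Subversion is just a cheap server-side copy into tags/:

```xml
<!-- Register svnant's tasks; the resource name depends on the svnant release. -->
<taskdef resource="svntask.properties" classpathref="svnant.classpath"/>

<target name="build-sit" depends="build">
    <svn username="${svn.user}" password="${svn.pass}">
        <!-- Tagging = copying trunk to a new URL under tags/ -->
        <copy srcUrl="${svn.repo}/trunk"
              destUrl="${svn.repo}/tags/sit-${build.number}"
              message="Tagging SIT build ${build.number}"/>
    </svn>
</target>
```

Making build-sit depend on the real build target means the tag is only created when the build itself succeeds.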
{ "language": "en", "url": "https://stackoverflow.com/questions/147933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I read an Http response stream twice in C#? I am trying to read an Http response stream twice via the following:

HttpWebResponse response = (HttpWebResponse)request.GetResponse();
stream = response.GetResponseStream();
RssReader reader = new RssReader(stream);
do
{
    element = reader.Read();
    if (element is RssChannel)
    {
        feed.Channels.Add((RssChannel)element);
    }
} while (element != null);
StreamReader sr = new StreamReader(stream);
feed._FeedRawData = sr.ReadToEnd();

However, when the StreamReader code executes, there is no data returned because the stream has now reached the end. I tried to reset the stream via stream.Position = 0 but this throws an exception (I think because the stream can't have its position changed manually). Basically, I would like to parse the stream for XML and have access to the raw data (in string format). Any ideas? A: Copy it into a new MemoryStream first. Then you can re-read the MemoryStream as many times as you like:

Stream responseStream = CopyAndClose(resp.GetResponseStream());
// Do something with the stream
responseStream.Position = 0;
// Do something with the stream again

private static Stream CopyAndClose(Stream inputStream)
{
    const int readSize = 256;
    byte[] buffer = new byte[readSize];
    MemoryStream ms = new MemoryStream();

    int count = inputStream.Read(buffer, 0, readSize);
    while (count > 0)
    {
        ms.Write(buffer, 0, count);
        count = inputStream.Read(buffer, 0, readSize);
    }
    ms.Position = 0;
    inputStream.Close();
    return ms;
}

A: Copying the stream to a MemoryStream as suggested by Iain is the right approach. But since .NET Framework 4 (released 2010) we have Stream.CopyTo. Example from the docs:

// Create the streams.
MemoryStream destination = new MemoryStream();

using (FileStream source = File.Open(@"c:\temp\data.dat", FileMode.Open))
{
    Console.WriteLine("Source length: {0}", source.Length.ToString());
    // Copy source to destination.
    source.CopyTo(destination);
}
Console.WriteLine("Destination length: {0}", destination.Length.ToString());

Afterwards you can read destination as many times as you like:

// re-set to beginning and convert stream to string
destination.Position = 0;
StreamReader streamReader = new StreamReader(destination);
string text = streamReader.ReadToEnd();

// re-set to beginning and read again
destination.Position = 0;
RssReader rssReader = new RssReader(destination);

(I have seen Endy's comment, but since it is an appropriate, current answer, it should have its own answer entry.) A: Have you tried resetting the stream position? If this does not work, you can copy the stream to a MemoryStream, and there you can reset the position (i.e. to 0) as often as you want.
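The buffer-then-rewind pattern in these answers is language-agnostic; here is the same idea sketched in Python, where io.BytesIO plays the role of MemoryStream (the response object below is a stand-in for a real, forward-only HTTP stream):

```python
import io

def copy_and_close(input_stream):
    """Drain a forward-only stream into a seekable in-memory buffer, then close it."""
    buffered = io.BytesIO(input_stream.read())
    input_stream.close()
    buffered.seek(0)  # rewind, like MemoryStream.Position = 0
    return buffered

# Stand-in for a one-shot HTTP response stream:
response_stream = io.BytesIO(b"<rss><channel/></rss>")

stream = copy_and_close(response_stream)
first_pass = stream.read()   # e.g. feed this to an XML parser
stream.seek(0)               # rewind again
second_pass = stream.read()  # the raw bytes, read a second time
print(first_pass == second_pass)  # True
```

As with the C# version, the trade-off is that the whole response is held in memory, which is fine for a feed but worth reconsidering for very large downloads.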
{ "language": "en", "url": "https://stackoverflow.com/questions/147941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: What is the best way to list a member and all of its descendants in MDX? In an OLAP database I work with there is a 'Location' hierarchy consisting of the levels Company, Region, Area, Site, Room, Till. For a particular company I need to write some MDX that lists all regions, areas and sites (but not any levels below Site). Currently I am achieving this with the following MDX HIERARCHIZE({ [Location].[Test Company], Descendants([Location].[Test Company], [Location].[Region]), Descendants([Location].[Test Company], [Location].[Area]), Descendants([Location].[Test Company], [Location].[Site]) }) Because my knowledge of MDX is limited, I was wondering if there was a simpler way to do this, with a single command rather that four? Is there a less verbose way of achieveing this, or is my example the only real way of achieving this? A: DESCENDANTS([Location].[Test Company],[Location].[Site], SELF_AND_BEFORE) A: The command you want is DESCENDANTS. Keep the 'family tree' analogy in mind, and you can see that this will list the descendants of a member, down as far as you want. You can specify the 'distance' (in levels) from the chosen member, 3 in your case. There are a few weird options you can specify with the third argument, you want SELF_AND_AFTER, see http://msdn.microsoft.com/en-us/library/ms146075.aspx EDIT - oops, as santiiiii noticed, it should be SELF_AND_BEFORE
{ "language": "en", "url": "https://stackoverflow.com/questions/147953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Dynamic search and display I have a big load of documents, text files, that I want to search for relevant content. I've seen a searching tool, can't remember where, that implemented a nice method as I describe in my requirement below. My requirement is as follows:

* I need an optimised search function: I supply this search function with a list (one or more) of partially-complete (or complete) words separated with spaces.
* The function then finds all the documents containing words starting with or equal to the first word, then searches these found documents in the same way using the second word, and so on, at the end of which it returns a list containing the actual words found linked with the documents (name & location) containing them, for the complete list of words.
* The documents must contain all the words in the list.
* I want to use this function to do an as-you-type search so that I can display and update the results in a tree-like structure in real-time.

A possible approach to a solution I came up with is as follows: I create a database (most likely using MySQL) with three tables: 'Documents', 'Words' and 'Word_Docs'.

* 'Documents' will have (idDoc, Name, Location) of all documents.
* 'Words' will have (idWord, Word), and be a list of unique words from all the documents (a specific word appears only once).
* 'Word_Docs' will have (idWord, idDoc), and be a list of unique id-combinations for each word and document it appears in.

The function is then called with the content of an editbox on each keystroke (except space):

* the string is tokenized
* (here my wheels spin a bit): I am sure a single SQL statement can be constructed to return the required dataset: (actual_words, doc_name, doc_location); (I'm not a hot number with SQL), alternatively a sequence of calls for each token and parse out the non-repeating idDocs?
* this dataset (/list/array) is then returned

The returned list content is then displayed, e.g. called with "seq sta cod" it displays:

sequence
  - start
    - code
      - Counting Sequences [file://docs/sample/con_seq.txt]
  - stop
    - code
      - Counting Sequences [file://docs/sample/con_seq.txt]
sequential
  - statement
    - code
      - SQL intro [file://somewhere/sql_intro.doc]

(and so on) Is this an optimal way of doing it? The function needs to be fast; or should it be called only when a space is hit? Should it offer word completion? (Got the words in the database.) At least this would prevent useless calls to the function for words that do not exist. If word completion: how would that be implemented? (Maybe SO could also use this type of search solution for browsing the tags? (In top-right of main page)) A: The fastest way is certainly not using a database at all, since if you do the search manually with optimized data, you can easily beat select search performance. The fastest way, assuming the documents don't change very often, is to build index files and use these for finding the keywords. The index file is created like this:

* Find all unique words in the text file. That is, split the text file by spaces into words and add every word to a list unless already found on that list.
* Take all words you have found and sort them alphabetically; the fastest way to do this is using Three-Way Radix QuickSort. This algorithm is hard to beat in performance when sorting strings.
* Write the sorted list to disk, one word a line.
* When you now want to search the document file, ignore it completely; instead load the index file to memory and use binary search to find out if a word is in the index file or not. Binary search is hard to beat when searching large, sorted lists.

Alternatively you can merge step (1) and step (2) within a single step.
If you use InsertionSort (which uses binary search to find the right position to insert a new element into an already sorted list), you not only have a fast algorithm to find out if the word is already on the list or not; if it is not, you immediately get the correct position to insert it, and if you always insert new words like that, you will automatically have a sorted list when you get to step (3). The problem is you need to update the index whenever the document changes... however, wouldn't this be true for the database solution as well? On the other hand, the database solution buys you some advantages: You can use it even if the documents contain so many words that the index files wouldn't fit into memory anymore (unlikely, as even a list of all English words will fit into memory of any average user PC); however, if you need to load index files of a huge number of documents, then memory may become a problem. Okay, you can work around that using clever tricks (e.g. searching directly within the files that you mapped to memory using mmap and so on), but these are the same tricks databases already use to perform speedy look-ups, so why re-invent the wheel? Further, you can also prevent locking problems between searching words and updating indexes when a document has changed (that is, if the database can perform the locking for you or can perform the update or updates as an atomic operation). For a web solution with AJAX calls for list updates, using a database is probably the better solution (my first solution is rather suitable if this is a locally running application written in a low level language like C). If you feel like doing it all in a single select call (which might not be optimal, but when you dynamically update web content with AJAX, it usually proves to be the solution causing the least headaches), you need to JOIN all three tables together. 
My SQL is a bit rusty, but I'll give it a try: SELECT COUNT(Documents.idDoc) AS NumOfHits, Documents.Name AS Name, Documents.Location AS Location FROM Documents INNER JOIN Word_Docs ON Word_Docs.idDoc=Documents.idDoc INNER JOIN Words ON Words.idWord=Word_Docs.idWord WHERE Words.Word IN ('Word1', 'Word2', 'Word3', ..., 'WordX') GROUP BY Documents.idDoc HAVING NumOfHits=X Okay, maybe this is not the fastest select... I guess it can be done faster. Anyway, it will find all matching documents that contain at least one word, then group all matching rows by document ID, count how many were grouped together, and finally only show results where NumOfHits (the number of words found from the IN statement) is equal to the number of words within the IN statement (if you search for 10 words, X is 10). A: What you're talking about is known as an inverted index or posting list, and operates similarly to what you propose and what Mecki proposes. There's a lot of literature about inverted indexes out there; the Wikipedia article is a good place to start. Better, rather than trying to build it yourself, use an existing inverted index implementation. Both MySQL and recent versions of PostgreSQL have full text indexing by default. You may also want to check out Lucene for an independent solution. There are a lot of things to consider in writing a good inverted index, including tokenisation, stemming, multi-word queries, etc, etc, and a prebuilt solution will do all this for you. A: Not sure about the syntax (this is sql server syntax), but: -- N is the number of elements in the list SELECT idDoc, COUNT(1) FROM Word_Docs wd INNER JOIN Words w on w.idWord = wd.idWord WHERE w.Word IN ('word1', ..., 'wordN') GROUP BY wd.idDoc HAVING COUNT(1) = N That is, without using like. With like things are MUCH more complex. A: Google Desktop Search or a similar tool might meet your requirements.
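As a rough, illustrative sketch of the inverted-index idea discussed in the answers above: build a word-to-documents map, keep the vocabulary sorted, and use binary search for the prefix part ("sta" matching "start" and "statement") of the as-you-type search. All names and the toy corpus below are my own invention, and Python is used only because the idea is language-agnostic:

```python
from bisect import bisect_left
from collections import defaultdict

# Hypothetical mini-corpus standing in for the text files.
docs = {
    "con_seq.txt": "counting sequence start stop code",
    "sql_intro.doc": "sequential statement code sql",
}

# Inverted index: word -> set of documents containing it.
index = defaultdict(set)
for name, text in docs.items():
    for word in text.split():
        index[word].add(name)
vocab = sorted(index)  # sorted vocabulary enables binary prefix lookups

def expand(prefix):
    """All indexed words starting with `prefix` (binary search on the sorted vocab)."""
    i = bisect_left(vocab, prefix)
    words = []
    while i < len(vocab) and vocab[i].startswith(prefix):
        words.append(vocab[i])
        i += 1
    return words

def search(query):
    """Documents containing, for every (partial) query word, at least one matching word."""
    result = set(docs)
    for prefix in query.split():
        matched = set()
        for word in expand(prefix):
            matched |= index[word]
        result &= matched
    return sorted(result)
```

With this toy data, `search("seq sta cod")` matches both documents (via sequence/sequential, start/statement, code), mirroring the example output in the question; a real implementation would of course persist the index as described above rather than rebuild it per keystroke.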
{ "language": "en", "url": "https://stackoverflow.com/questions/147962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is it idiomatic Ruby to add an assert( ) method to Ruby's Kernel class? I'm expanding my Ruby understanding by coding an equivalent of Kent Beck's xUnit in Ruby. Python (which Kent writes in) has an assert() method in the language which is used extensively. Ruby does not. I think it should be easy to add this, but is Kernel the right place to put it? BTW, I know of the existence of the various Unit frameworks in Ruby - this is an exercise to learn the Ruby idioms, rather than to "get something done". A: It's not especially idiomatic, but I think it's a good idea. Especially if done like this: def assert(msg=nil) if DEBUG raise msg || "Assertion failed!" unless yield end end That way there's no impact if you decide not to run with DEBUG (or some other convenient switch, I've used Kernel.do_assert in the past) set. A: My understanding is that you're writing your own testing suite as a way of becoming more familiar with Ruby. So while Test::Unit might be useful as a guide, it's probably not what you're looking for (because it's already done the job). That said, Python's assert is (to me, at least) more analogous to C's assert(3). It's not specifically designed for unit-tests, rather to catch cases where "this should never happen". How Ruby's built-in unit tests tend to view the problem, then, is that each individual test case class is a subclass of TestCase, and that includes an "assert" statement which checks the validity of what was passed to it and records it for reporting. A: I think it is totally valid to use asserts in Ruby. But you are mentioning two different things: * *xUnit frameworks use assert methods for checking your tests' expectations. They are intended to be used in your test code, not in your application code. *Some languages, like C, Java or Python, include an assert construction intended to be used inside the code of your programs, to check assumptions you make about their integrity. These checks are built inside the code itself. 
They are not a test-time utility, but a development-time one. I recently wrote solid_assert: a little Ruby library implementing a Ruby assertion utility and also a post in my blog explaining its motivation. It lets you write expressions in the form: assert some_string != "some value" assert clients.empty?, "Isn't the clients list empty?" invariant "Lists with different sizes?" do one_variable = calculate_some_value other_variable = calculate_some_other_value one_variable > other_variable end And they can be deactivated, so assert and invariant get evaluated as empty statements. This lets you avoid performance problems in production. But note that The Pragmatic Programmer: from journeyman to master recommends against deactivating them. You should only deactivate them if they really affect the performance. Regarding the answer saying that the idiomatic Ruby way is using a normal raise statement, I think it lacks expressivity. One of the golden rules of assertive programming is not using assertions for normal exception handling. They are two completely different things. If you use the same syntax for the two of them, I think your code will be more obscure. And of course you lose the capability of deactivating them. Some widely-regarded books that dedicate whole sections to assertions and recommend their use: * *The Pragmatic Programmer: from Journeyman to Master by Andrew Hunt and David Thomas *Code Complete: A Practical Handbook of Software Construction by Steve McConnell *Writing Solid Code by Steve Maguire Programming with assertions is an article that illustrates well what assertive programming is about and when to use it (it is based on Java, but the concepts apply to any language). A: What's your reason for adding the assert method to the Kernel module? Why not just use another module called Assertions or something? 
Like this: module Assertions def assert(param) # do something with param end # define more assertions here end If you really need your assertions to be available everywhere, do something like this: class Object include Assertions end Disclaimer: I didn't test the code, but in principle I would do it like this. A: No, it's not a best practice. The best analogy to assert() in Ruby is just raising raise "This is wrong" unless expr and you can implement your own exceptions if you want to provide for more specific exception handling.
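Since the question takes Python's built-in assert as its model, a small sketch of the behaviour the Kernel-level Ruby versions above are trying to mirror may help (names here are my own): Python's assert is a statement, not a method, it raises AssertionError with an optional message, and it is stripped out entirely when the interpreter runs with -O, which is the "deactivation" several answers discuss.

```python
def divide(a, b):
    # Precondition check, analogous to a Kernel#assert in Ruby.
    # Under `python -O` this line is compiled out entirely.
    assert b != 0, "b must be non-zero"
    return a / b

def try_divide(a, b):
    """Return the quotient, or the assertion message if the precondition fails."""
    try:
        return divide(a, b)
    except AssertionError as e:
        return str(e)
```

The DEBUG guard in the first answer plays the same role as Python's `__debug__` flag: assertions cost nothing when disabled, but (as the solid_assert answer notes) disabling them is only worthwhile when they measurably hurt performance.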
{ "language": "en", "url": "https://stackoverflow.com/questions/147969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88" }
Q: .NET Remoting and Server 2008 (64 Bit) I have a .NET application that is meant to be run on a local PC and started from a file share on the LAN. It works fine on 32 bit Windows XP and Vista workstations. But it fails with a System.InvalidOperationException on 64 bit Windows Server 2008. It runs fine locally on all three configurations. What could be the cause? .NET 2.0 is installed on all machines involved. Summary: 32 bit XP: runs locally and remotely 32 bit Vista: runs locally and remotely 64 bit 2008: runs locally, fails remotely "remotely" means running locally but launched from a file share rather than a local drive. Zone security is set to "full trust" for "Local Intranet" on all machines involved including the 64 bit 2008 machine. Any ideas? A: Are the projects set to run in x86 mode? Use the Configuration Manager to check. A: My first guess would be Internet Explorer security settings. Try adding your server as a Trusted Site. A: This sounds like a security issue; I don't think the instruction set of the CPU makes a difference. I used to have the same problem when running applications from network drives. I believe this should fix your problem. Caspol
{ "language": "en", "url": "https://stackoverflow.com/questions/147973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: .html() jQuery method bizarre bug - resolves to empty space locally, but not on production I'm making a simple jQuery command: element.html("&nbsp;&nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;"); using the attributes/html method: http://docs.jquery.com/Attributes/html It works on my local app engine server, but it doesn't work once I push to the Google server. The element empties but doesn't fill with spaces. So instead of " " (6 spaces) it's just "". Once again, this is running on App Engine, but I don't think that should matter... A: This might not be a direct answer to your problem, but why are you even wanting to put in a heap of spaces? You can probably achieve the same result by just changing the padding-left or text-indent of that element. element.css("textIndent", "3em"); Using a heap of &nbsp;s is a very dodgy way to do indentation. A: You could try generating the space during run-time, so it won't be trimmed or whatever happens during transport: element.html(String.fromCharCode(32)); A: Your jQuery should look like this: $('element').html('&nbsp;&nbsp;'); ... where '&nbsp;' equals one space. (with however many spaces you want, of course) A: Have you tried using &nbsp; instead of spaces? The html() method just pumps the string into the innerHTML of the element(s). A: Is there a possibility that the code is minified as part of the process of being deployed onto the App Engine? I would not expect any string of whitespace to be retained as written; perhaps you could actually escape the white space and force any minification to leave it: example: element.html('\ \ \ \ \ \ \ \ \ \ \ \ ');
{ "language": "en", "url": "https://stackoverflow.com/questions/147976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Tomahawk and scrolling tabs is there a tomahawk component, that enables "scrollable tabs"? What I mean is something for the following situation: If I have very many tabs, the tab-bar gets a little arrow on the side to scroll through all the open tabs (like in firefox). Is there a tomahawk component for creating something similar? A: Are you/have you tried using the panelTabbedPane tag?
{ "language": "en", "url": "https://stackoverflow.com/questions/147981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Standard algorithm to tokenize a string, keep delimiters (in PHP) I want to split an arithmetic expression into tokens, to convert it into RPN. Java has the StringTokenizer, which can optionally keep the delimiters. That way, I could use the operators as delimiters. Unfortunately, I need to do this in PHP, which has strtok, but that throws away the delimiters, so I need to brew something myself. This sounds like a classic textbook example for Compiler Design 101, but I'm afraid I'm lacking some formal education here. Is there a standard algorithm you can point me to? My other options are to read up on Lexical Analysis or to roll up something quick and dirty with the available string functions. A: This might help. Practical Uses of Tokenizer A: As often, I would just use a regular expression to do this: $expr = '(5*(7 + 2 * -9.3) - 8 )/ 11'; $tokens = preg_split('/([*\/^+-]+)\s*|([\d.]+)\s*/', $expr, -1, PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY); $tts = print_r($tokens, true); echo "<pre>x=$tts</pre>"; It needs a little more work to accept numbers with exponent (like -9.2e-8). A: OK, thanks to PhiLho, my final code is this, should anyone need it. It's not even really dirty. 
:-) static function rgTokenize($s) { $rg = array(); // remove whitespace $s = preg_replace("/\s+/", '', $s); // split at numbers, identifiers, function names and operators $rg = preg_split('/([*\/^+\(\)-])|(#\d+)|([\d.]+)|(\w+)/', $s, -1, PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY); // find unary '-' and put it as a sign onto the following number for ($ix = 0, $ixMax = count($rg); $ix < $ixMax; $ix++) { if ('-' == $rg[$ix]) { if (isset($rg[$ix - 1]) && self::fIsOperand($rg[$ix - 1])) { continue; } else if (isset($rg[$ix + 1]) && self::fIsOperand($rg[$ix + 1])) { $rg[$ix + 1] = $rg[$ix].$rg[$ix + 1]; unset($rg[$ix]); } else { throw new Exception("Syntax error: Found unary '-' without operand"); } } } $rg = array_values($rg); echo join(" ", $rg)."\n"; return $rg; }
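For comparison only, the same keep-the-delimiters split can be sketched in Python, whose re.split keeps capturing groups in the result much like PREG_SPLIT_DELIM_CAPTURE does in PHP (this is my own illustrative sketch, not part of the accepted code, and it omits the unary-minus pass that the PHP version performs afterwards):

```python
import re

def tokenize(expr):
    """Split an arithmetic expression into number and operator/paren tokens,
    keeping the delimiters: capturing groups survive in re.split's output."""
    expr = re.sub(r"\s+", "", expr)          # strip whitespace, as in the PHP version
    parts = re.split(r"([*/^+()-])", expr)   # operators/parens form one capture group
    return [p for p in parts if p != ""]     # drop empty strings between adjacent delimiters
```

Calling `tokenize("(5 - 8) / 11")` yields the tokens `(`, `5`, `-`, `8`, `)`, `/`, `11`; signs on numbers like `-9.3` would still need the separate fix-up pass shown above.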
{ "language": "en", "url": "https://stackoverflow.com/questions/147988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get WCF statistics from inside an application that uses it? Is it possible to gather performance statistics programmatically from inside a WCF application? For example, the number of connections open or requests received. A: Look into WCF Performance Counters. You can query performance counters using the respective .NET Framework Classes. Also, you could enable WMI for your WCF Services and query this information from inside your application. Probably depends on the kind of information you need to get.
{ "language": "en", "url": "https://stackoverflow.com/questions/147990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is having different users for different types of queries a good practice? I am using MySQL and PHP for a project I am working on. I have created separate users for carrying out different functions (one for running select queries, one for running update queries, etc.) to provide an extra layer of security. This way, I figure if someone does manage to carry out an injection attack (which is unlikely since I am using prepared statements), whatever they are able to do will be limited to the type of query that was originally meant to be run. Is this a good practice or is it not worth the trouble? A: Aside from the extended logic, you will also have different connections and additional overhead in that area. IMHO it's wise to not do all your queries in a webapp with the root user, and if the data is so hot, then make sure the designated user has no DROP, DELETE etc. privileges. You could implement soft-delete if it's necessary in your application. Last but not least, make sure to sanitize all GPC and make sure to properly quote/escape values in your queries. Using prepared statements can be one thing, but in the end it can be as simple as using mysql_real_escape_string() or whatever quoting-methods your DBAL/ORM offer. A: I personally don't think it's worth the bother, since it's trickier to code, test and deploy. Make sure your software is immune to SQL injection instead.
{ "language": "en", "url": "https://stackoverflow.com/questions/147992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why doesn't paginator remember my custom parameters when I go to page 2? When using the paginator helper in CakePHP views, it doesn't remember parts of the url that are custom for my usage. For example: http://example.org/users/index/moderators/page:2/sort:name/dir:asc here moderators is a parameter that helps me filter by that type. But pressing a paginator link will not include this parameter. A: To add to Alexander Morland's answer above, it's worth remembering that the syntax has changed in CakePHP 1.3 and is now: $this->Paginator->options(array('url' => $this->passedArgs)); This is described further in the pagination in views section of the CakePHP book. A: The secret is adding this line to your view: $paginator->options(array('url'=>$this->passedArgs)); (I created this question and answer because it is a much-asked question and I keep having to dig out the answer since I can't remember it.) A: $this->passedArgs is the preferred way to do this from the view. A: You saved me! This helped me a lot, thanks. I needed a way to pass the parameters I originally sent via post ($this->data) to the paging component, so my custom query would continue to use them. Here is what I did: on my view I put $paginator->options(array('url'=>$this->data['Transaction'])); before the $paginator->prev('<< Previous ' stuff. Doing this made the next link on the paginator like " .../page:1/start_date:2000-01-01%2000:00:00/end_date:3000-01-01%2023:59:59/payments_recieved:1" Then on my controller I just had to get the parameters and put them in the $this->data so my function would continue as usual: foreach($this->params['named'] as $k=>$v) { /* * set data as is normally expected */ $this->data['Transaction'][$k] = $v; } And that's it. Paging works with my custom query. :) A: The options here are a good lead ... 
You can also check for more info on CakePHP pagination at cakephp.org/view/166/Pagination-in-Views A: With that param 'url' you can only put your preferred string before the pagination string in the URL. If I use this technique: $urlpagin = '?my_get1=1&my_get2=2'; $paginator->options = array('url'=>$urlpagin); I only obtain: url/controller/action/?my_get1=1&my_get2=2/sort:.../... and Cake loses my GET params. Is there an alternative technique?
{ "language": "en", "url": "https://stackoverflow.com/questions/147995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do you write the end of a file opened with FILE_FLAG_NO_BUFFERING? I am using VB6 and the Win32 API to write data to a file. This functionality is for the export of data, therefore write performance to the disk is the key factor in my considerations. As such I am using the FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH options when opening the file with a call to CreateFile. FILE_FLAG_NO_BUFFERING requires that I use my own buffer and write data to the file in multiples of the disk's sector size. This is no problem generally, apart from the last part of the data: if it is not an exact multiple of the sector size, the file will be padded out with zero characters. How do I set the file size once the last block is written so that it does not include these zero characters? I can use SetEndOfFile; however, this requires me to close the file and re-open it without using FILE_FLAG_NO_BUFFERING. I have seen someone talk about NtSetInformationFile, but I cannot find how to use and declare it in VB6. SetFileInformationByHandle can do exactly what I want, however it is only available in Windows Vista; my application needs to be compatible with previous versions of Windows. A: I believe SetEndOfFile is the only way. And I agree with Mike G. that you should bench your code with and without FILE_FLAG_NO_BUFFERING. Windows file buffering on modern OS's is pretty darn effective. A: I'm not sure, but are YOU sure that setting FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH give you maximum performance? They'll certainly result in your data hitting the disk as soon as possible, but that sort of thing doesn't actually help performance - it just helps reliability for things like journal files that you want to be as complete as possible in the event of a crash. 
For a data export routine like you describe, allowing the operating system to buffer your data will probably result in BETTER performance, since the writes will be scheduled in line with other disk activity, rather than forcing the disk to jump back to your file every write. Why don't you benchmark your code without those options? Leave in the zero-byte padding logic to make it a fair test. If it turns out that skipping those options is faster, then you can remove the 0-padding logic, and your file size issue fixes itself. A: For a one-gigabyte file, Windows buffering will indeed probably be faster, especially if doing many small I/Os. If you're dealing with files which are much larger than available RAM, and doing large-block I/O, the flags you were setting WILL produce much better throughput (up to three times faster for heavily threaded and/or random large-block I/O).
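The pad-then-trim pattern itself is easy to demonstrate outside Win32. Here is a rough Python sketch of my own (ordinary buffered I/O, with an assumed 512-byte sector size): write the data in whole sectors with zero padding, then truncate the file back to the real length. On Windows the trim step corresponds to reopening the file without FILE_FLAG_NO_BUFFERING and calling SetEndOfFile, as the question and first answer describe.

```python
import os
import tempfile

SECTOR = 512  # assumed sector size for this example

def write_padded_then_trim(path, data):
    """Write data in whole 'sectors' (zero-padding the last block),
    then truncate the file back to the real data length."""
    padded_len = (len(data) + SECTOR - 1) // SECTOR * SECTOR
    with open(path, "wb") as f:
        f.write(data + b"\x00" * (padded_len - len(data)))
    # Second step: trim the zero padding off the end of the file.
    os.truncate(path, len(data))

path = os.path.join(tempfile.mkdtemp(), "export.bin")
payload = b"x" * 1300                 # deliberately not a multiple of 512
write_padded_then_trim(path, payload)
final_size = os.path.getsize(path)    # padding removed, logical length restored
```

This only illustrates the ordering of the two steps; it does not reproduce the alignment requirements that FILE_FLAG_NO_BUFFERING imposes on the buffer address and transfer size.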
{ "language": "en", "url": "https://stackoverflow.com/questions/147996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Algorithm for finding the maximum difference in an array of numbers I have an array of a few million numbers. double* const data = new double[3600000]; I need to iterate through the array and find the range (the largest value in the array minus the smallest value). However, there is a catch. I only want to find the range where the smallest and largest values are within 1,000 samples of each other. So I need to find the maximum of: range(data + 0, data + 1000), range(data + 1, data + 1001), range(data + 2, data + 1002), ...., range(data + 3599000, data + 3600000). I hope that makes sense. Basically I could do it like above, but I'm looking for a more efficient algorithm if one exists. I think the above algorithm is O(n), but I feel that it's possible to optimize. An idea I'm playing with is to keep track of the most recent maximum and minimum and how far back they are, then only backtrack when necessary. I'll be coding this in C++, but a nice algorithm in pseudo code would be just fine. Also, if this number I'm trying to find has a name, I'd love to know what it is. Thanks. A: The algorithm you describe is really O(N), but I think the constant is too high. Another solution which looks reasonable is to use an O(N*log(N)) algorithm the following way: * create sorted container (std::multiset) of first 1000 numbers * in loop (j=1, j<(3600000-1000); ++j) - calculate range - remove from the set the number which is now irrelevant (i.e. in index *j - 1* of the array) - add to the set the new relevant number (i.e. in index *j+1000-1* of the array) I believe it should be faster, because the constant is much lower. A: This is a good application of a min-queue - a queue (First-In, First-Out = FIFO) which can simultaneously keep track of the minimum element it contains, with amortized constant-time updates. Of course, a max-queue is basically the same thing. 
Once you have this data structure in place, you can consider CurrentMax (of the past 1000 elements) minus CurrentMin, store that as the BestSoFar, and then push a new value and pop the old value, and check again. In this way, keep updating BestSoFar until the final value is the solution to your question. Each single step takes amortized constant time, so the whole thing is linear, and the implementation I know of has a good scalar constant (it's fast). I don't know of any documentation on min-queues - this is a data structure I came up with in collaboration with a coworker. You can implement it by internally tracking a binary tree of the least elements within each contiguous sub-sequence of your data. The problem is simplified by the fact that you'll only pop data from one end of the structure. If you're interested in more details, I can try to provide them. I was thinking of writing this data structure up as a paper for arxiv. Also note that Tarjan and others previously arrived at a more powerful min-deque structure that would work here, but the implementation is much more complex. You can google for "mindeque" to read about Tarjan et al.'s work. A: This type of question belongs to a branch of algorithms called streaming algorithms. It is the study of problems which require not only an O(n) solution but also need to work in a single pass over the data. The data is inputted as a stream to the algorithm; the algorithm can't save all of the data, and once it has passed, it is lost forever. The algorithm needs to get some answer about the data, such as the minimum or the median. Specifically, you are looking for a maximum (or more commonly in literature - minimum) in a window over a stream. Here's a presentation on an article that mentions this problem as a sub problem of what they are trying to get at. It might give you some ideas.
I think the outline of the solution is something like this: maintain the window over the stream, where in each step one element is inserted to the window and one is removed from the other side (a sliding window). The items you actually keep in memory aren't all of the 1000 items in the window but selected representatives which are going to be good candidates for being the minimum (or maximum). Read the article. It's a bit complex, but after 2-3 reads you can get the hang of it. A: Idea of algorithm: Take the first 1000 values of data and sort them. The last in the sorted order minus the first is range(data + 0, data + 999). Then remove from the sort pile the first element with the value data[0] and add the element data[1000]. Now, the last in the sorted order minus the first is range(data + 1, data + 1000). Repeat until done. // This should run in (DATA_LEN - RANGE_WIDTH)log(RANGE_WIDTH) #include <set> #include <algorithm> using namespace std; const int DATA_LEN = 3600000; double* const data = new double[DATA_LEN]; .... .... const int RANGE_WIDTH = 1000; double* range = new double[DATA_LEN - RANGE_WIDTH]; multiset<double> data_set; data_set.insert(data, data + RANGE_WIDTH); // load the first window for (int i = 0 ; i < DATA_LEN - RANGE_WIDTH ; i++) { range[i] = *data_set.rbegin() - *data_set.begin(); multiset<double>::iterator iter = data_set.find(data[i]); data_set.erase(iter); data_set.insert(data[i + RANGE_WIDTH]); } // range now holds the values you seek You should probably check this for off by 1 errors, but the idea is there. A: std::multiset<double> range; double currentmax = 0.0; for (int i = 0; i < 3600000; ++i) { if (i >= 1000) range.erase(range.find(data[i-1000])); range.insert(data[i]); if (i >= 999) currentmax = max(currentmax, *range.rbegin() - *range.begin()); } Note untested code. Edit: fixed off-by-one error. A: * *read in the first 1000 numbers. *create a 1000 element linked list which tracks the current 1000 numbers. 
*create a 1000 element array of pointers to linked list nodes, 1-1 mapping. *sort the pointer array based on the linked list nodes' values. This will rearrange the array but keep the linked list intact. *you can now calculate the range for the first 1000 numbers by examining the first and last element of the pointer array. *remove the first inserted element, which is either the head or the tail depending on how you made your linked list. Using the node's value, perform a binary search on the pointer array to find the to-be-removed node's pointer, and shift the array one over to remove it. *add the 1001st element to the linked list, and insert a pointer to it in the correct position in the array, by performing one step of an insertion sort. This will keep the array sorted. *now you have the min. and max. value of the numbers between 1 and 1001, and can calculate the range using the first and last element of the pointer array. *it should now be obvious what you need to do for the rest of the array. The algorithm should be O(n) since the delete and insertion is bounded by log(1e3) and everything else takes constant time. A: I decided to see what the most efficient algorithm I could think of to solve this problem was using actual code and actual timings. I first created a simple solution, one that tracks the min/max for the previous n entries using a circular buffer, and a test harness to measure the speed. In the simple solution, each data value is compared against the set of min/max values, so that's about window_size * count tests (where window size in the original question is 1000 and count is 3600000). I then thought about how to make it faster. First off, I created a solution that used a fifo queue to store window_size values and a linked list to store the values in ascending order where each node in the linked list was also a node in the queue. To process a data value, the item at the end of the fifo was removed from the linked list and the queue. 
The new value was added to the start of the queue and a linear search was used to find the position in the linked list. The min and max values could then be read from the start and end of the linked list. This was quick, but wouldn't scale well with increasing window_size (i.e. linearly). So I decided to add a binary tree to the system to try to speed up the search part of the algorithm. The final timings for window_size = 1000 and count = 3600000 were: Simple: 106875 Quite Complex: 1218 Complex: 1219 which was both expected and unexpected. Expected in that using a sorted linked list helped, unexpected in that the overhead of having a self-balancing tree didn't offset the advantage of a quicker search. I tried the latter two with an increased window size and found that they were always nearly identical up to a window_size of 100000. Which all goes to show that theorising about algorithms is one thing, implementing them is something else. Anyway, for those that are interested, here's the code I wrote (there's quite a bit!): Range.h: #include <algorithm> #include <iostream> #include <ctime> using namespace std; // Callback types. typedef void (*OutputCallback) (int min, int max); typedef int (*GeneratorCallback) (); // Declarations of the test functions. clock_t Simple (int, int, GeneratorCallback, OutputCallback); clock_t QuiteComplex (int, int, GeneratorCallback, OutputCallback); clock_t Complex (int, int, GeneratorCallback, OutputCallback); main.cpp: #include "Range.h" int checksum; // This callback is used to get data. int CreateData () { return rand (); } // This callback is used to output the results. void OutputResults (int min, int max) { //cout << min << " - " << max << endl; checksum += max - min; } // The program entry point. 
void main () { int count = 3600000, window = 1000; srand (0); checksum = 0; std::cout << "Simple: Ticks = " << Simple (count, window, CreateData, OutputResults) << ", checksum = " << checksum << std::endl; srand (0); checksum = 0; std::cout << "Quite Complex: Ticks = " << QuiteComplex (count, window, CreateData, OutputResults) << ", checksum = " << checksum << std::endl; srand (0); checksum = 0; std::cout << "Complex: Ticks = " << Complex (count, window, CreateData, OutputResults) << ", checksum = " << checksum << std::endl; } Simple.cpp: #include "Range.h" // Function to actually process the data. // A circular buffer of min/max values for the current window is filled // and once full, the oldest min/max pair is sent to the output callback // and replaced with the newest input value. Each value inputted is // compared against all min/max pairs. void ProcessData ( int count, int window, GeneratorCallback input, OutputCallback output, int *min_buffer, int *max_buffer ) { int i; for (i = 0 ; i < window ; ++i) { int value = input (); min_buffer [i] = max_buffer [i] = value; for (int j = 0 ; j < i ; ++j) { min_buffer [j] = min (min_buffer [j], value); max_buffer [j] = max (max_buffer [j], value); } } for ( ; i < count ; ++i) { int index = i % window; output (min_buffer [index], max_buffer [index]); int value = input (); min_buffer [index] = max_buffer [index] = value; for (int k = (i + 1) % window ; k != index ; k = (k + 1) % window) { min_buffer [k] = min (min_buffer [k], value); max_buffer [k] = max (max_buffer [k], value); } } output (min_buffer [count % window], max_buffer [count % window]); } // A simple method of calculating the results. // Memory management is done here outside of the timing portion. 
clock_t Simple ( int count, int window, GeneratorCallback input, OutputCallback output ) { int *min_buffer = new int [window], *max_buffer = new int [window]; clock_t start = clock (); ProcessData (count, window, input, output, min_buffer, max_buffer); clock_t end = clock (); delete [] max_buffer; delete [] min_buffer; return end - start; } QuiteComplex.cpp: #include "Range.h" template <class T> class Range { private: // Class Types // Node Data // Stores a value and its position in various lists. struct Node { Node *m_queue_next, *m_list_greater, *m_list_lower; T m_value; }; public: // Constructor // Allocates memory for the node data and adds all the allocated // nodes to the unused/free list of nodes. Range ( int window_size ) : m_nodes (new Node [window_size]), m_queue_tail (m_nodes), m_queue_head (0), m_list_min (0), m_list_max (0), m_free_list (m_nodes) { for (int i = 0 ; i < window_size - 1 ; ++i) { m_nodes [i].m_list_lower = &m_nodes [i + 1]; } m_nodes [window_size - 1].m_list_lower = 0; } // Destructor // Tidy up allocated data. ~Range () { delete [] m_nodes; } // Function to add a new value into the data structure. void AddValue ( T value ) { Node *node = GetNode (); // clear links node->m_queue_next = 0; // set value of node node->m_value = value; // find place to add node into linked list Node *search; for (search = m_list_max ; search ; search = search->m_list_lower) { if (search->m_value < value) { if (search->m_list_greater) { node->m_list_greater = search->m_list_greater; search->m_list_greater->m_list_lower = node; } else { m_list_max = node; } node->m_list_lower = search; search->m_list_greater = node; } } if (!search) { m_list_min->m_list_lower = node; node->m_list_greater = m_list_min; m_list_min = node; } } // Accessor to determine if the first output value is ready for use. bool RangeAvailable () { return !m_free_list; } // Accessor to get the minimum value of all values in the current window. 
T Min () { return m_list_min->m_value; } // Accessor to get the maximum value of all values in the current window. T Max () { return m_list_max->m_value; } private: // Function to get a node to store a value into. // This function gets nodes from one of two places: // 1. From the unused/free list // 2. From the end of the fifo queue, this requires removing the node from the list and tree Node *GetNode () { Node *node; if (m_free_list) { // get new node from unused/free list and place at head node = m_free_list; m_free_list = node->m_list_lower; if (m_queue_head) { m_queue_head->m_queue_next = node; } m_queue_head = node; } else { // get node from tail of queue and place at head node = m_queue_tail; m_queue_tail = node->m_queue_next; m_queue_head->m_queue_next = node; m_queue_head = node; // remove node from linked list if (node->m_list_lower) { node->m_list_lower->m_list_greater = node->m_list_greater; } else { m_list_min = node->m_list_greater; } if (node->m_list_greater) { node->m_list_greater->m_list_lower = node->m_list_lower; } else { m_list_max = node->m_list_lower; } } return node; } // Member Data. Node *m_nodes, *m_queue_tail, *m_queue_head, *m_list_min, *m_list_max, *m_free_list; }; // A reasonable complex but more efficent method of calculating the results. // Memory management is done here outside of the timing portion. clock_t QuiteComplex ( int size, int window, GeneratorCallback input, OutputCallback output ) { Range <int> range (window); clock_t start = clock (); for (int i = 0 ; i < size ; ++i) { range.AddValue (input ()); if (range.RangeAvailable ()) { output (range.Min (), range.Max ()); } } clock_t end = clock (); return end - start; } Complex.cpp: #include "Range.h" template <class T> class Range { private: // Class Types // Red/Black tree node colours. enum NodeColour { Red, Black }; // Node Data // Stores a value and its position in various lists and trees. struct Node { // Function to get the sibling of a node. 
// Because leaves are stored as null pointers, it must be possible // to get the sibling of a null pointer. If the object is a null pointer // then the parent pointer is used to determine the sibling. Node *Sibling ( Node *parent ) { Node *sibling; if (this) { sibling = m_tree_parent->m_tree_less == this ? m_tree_parent->m_tree_more : m_tree_parent->m_tree_less; } else { sibling = parent->m_tree_less ? parent->m_tree_less : parent->m_tree_more; } return sibling; } // Node Members Node *m_queue_next, *m_tree_less, *m_tree_more, *m_tree_parent, *m_list_greater, *m_list_lower; NodeColour m_colour; T m_value; }; public: // Constructor // Allocates memory for the node data and adds all the allocated // nodes to the unused/free list of nodes. Range ( int window_size ) : m_nodes (new Node [window_size]), m_queue_tail (m_nodes), m_queue_head (0), m_tree_root (0), m_list_min (0), m_list_max (0), m_free_list (m_nodes) { for (int i = 0 ; i < window_size - 1 ; ++i) { m_nodes [i].m_list_lower = &m_nodes [i + 1]; } m_nodes [window_size - 1].m_list_lower = 0; } // Destructor // Tidy up allocated data. ~Range () { delete [] m_nodes; } // Function to add a new value into the data structure. void AddValue ( T value ) { Node *node = GetNode (); // clear links node->m_queue_next = node->m_tree_more = node->m_tree_less = node->m_tree_parent = 0; // set value of node node->m_value = value; // insert node into tree if (m_tree_root) { InsertNodeIntoTree (node); BalanceTreeAfterInsertion (node); } else { m_tree_root = m_list_max = m_list_min = node; node->m_tree_parent = node->m_list_greater = node->m_list_lower = 0; } m_tree_root->m_colour = Black; } // Accessor to determine if the first output value is ready for use. bool RangeAvailable () { return !m_free_list; } // Accessor to get the minimum value of all values in the current window. T Min () { return m_list_min->m_value; } // Accessor to get the maximum value of all values in the current window. 
T Max () { return m_list_max->m_value; } private: // Function to get a node to store a value into. // This function gets nodes from one of two places: // 1. From the unused/free list // 2. From the end of the fifo queue, this requires removing the node from the list and tree Node *GetNode () { Node *node; if (m_free_list) { // get new node from unused/free list and place at head node = m_free_list; m_free_list = node->m_list_lower; if (m_queue_head) { m_queue_head->m_queue_next = node; } m_queue_head = node; } else { // get node from tail of queue and place at head node = m_queue_tail; m_queue_tail = node->m_queue_next; m_queue_head->m_queue_next = node; m_queue_head = node; // remove node from tree node = RemoveNodeFromTree (node); RebalanceTreeAfterDeletion (node); // remove node from linked list if (node->m_list_lower) { node->m_list_lower->m_list_greater = node->m_list_greater; } else { m_list_min = node->m_list_greater; } if (node->m_list_greater) { node->m_list_greater->m_list_lower = node->m_list_lower; } else { m_list_max = node->m_list_lower; } } return node; } // Rebalances the tree after insertion void BalanceTreeAfterInsertion ( Node *node ) { node->m_colour = Red; while (node != m_tree_root && node->m_tree_parent->m_colour == Red) { if (node->m_tree_parent == node->m_tree_parent->m_tree_parent->m_tree_more) { Node *uncle = node->m_tree_parent->m_tree_parent->m_tree_less; if (uncle && uncle->m_colour == Red) { node->m_tree_parent->m_colour = Black; uncle->m_colour = Black; node->m_tree_parent->m_tree_parent->m_colour = Red; node = node->m_tree_parent->m_tree_parent; } else { if (node == node->m_tree_parent->m_tree_less) { node = node->m_tree_parent; LeftRotate (node); } node->m_tree_parent->m_colour = Black; node->m_tree_parent->m_tree_parent->m_colour = Red; RightRotate (node->m_tree_parent->m_tree_parent); } } else { Node *uncle = node->m_tree_parent->m_tree_parent->m_tree_more; if (uncle && uncle->m_colour == Red) { node->m_tree_parent->m_colour = 
Black; uncle->m_colour = Black; node->m_tree_parent->m_tree_parent->m_colour = Red; node = node->m_tree_parent->m_tree_parent; } else { if (node == node->m_tree_parent->m_tree_more) { node = node->m_tree_parent; RightRotate (node); } node->m_tree_parent->m_colour = Black; node->m_tree_parent->m_tree_parent->m_colour = Red; LeftRotate (node->m_tree_parent->m_tree_parent); } } } } // Adds a node into the tree and sorted linked list void InsertNodeIntoTree ( Node *node ) { Node *parent = 0, *child = m_tree_root; bool greater; while (child) { parent = child; child = (greater = node->m_value > child->m_value) ? child->m_tree_more : child->m_tree_less; } node->m_tree_parent = parent; if (greater) { parent->m_tree_more = node; // insert node into linked list if (parent->m_list_greater) { parent->m_list_greater->m_list_lower = node; } else { m_list_max = node; } node->m_list_greater = parent->m_list_greater; node->m_list_lower = parent; parent->m_list_greater = node; } else { parent->m_tree_less = node; // insert node into linked list if (parent->m_list_lower) { parent->m_list_lower->m_list_greater = node; } else { m_list_min = node; } node->m_list_lower = parent->m_list_lower; node->m_list_greater = parent; parent->m_list_lower = node; } } // Red/Black tree manipulation routine, used for removing a node Node *RemoveNodeFromTree ( Node *node ) { if (node->m_tree_less && node->m_tree_more) { // the complex case, swap node with a child node Node *child; if (node->m_tree_less) { // find largest value in lesser half (node with no greater pointer) for (child = node->m_tree_less ; child->m_tree_more ; child = child->m_tree_more) { } } else { // find smallest value in greater half (node with no lesser pointer) for (child = node->m_tree_more ; child->m_tree_less ; child = child->m_tree_less) { } } swap (child->m_colour, node->m_colour); if (child->m_tree_parent != node) { swap (child->m_tree_less, node->m_tree_less); swap (child->m_tree_more, node->m_tree_more); swap 
(child->m_tree_parent, node->m_tree_parent); if (!child->m_tree_parent) { m_tree_root = child; } else { if (child->m_tree_parent->m_tree_less == node) { child->m_tree_parent->m_tree_less = child; } else { child->m_tree_parent->m_tree_more = child; } } if (node->m_tree_parent->m_tree_less == child) { node->m_tree_parent->m_tree_less = node; } else { node->m_tree_parent->m_tree_more = node; } } else { child->m_tree_parent = node->m_tree_parent; node->m_tree_parent = child; Node *child_less = child->m_tree_less, *child_more = child->m_tree_more; if (node->m_tree_less == child) { child->m_tree_less = node; child->m_tree_more = node->m_tree_more; node->m_tree_less = child_less; node->m_tree_more = child_more; } else { child->m_tree_less = node->m_tree_less; child->m_tree_more = node; node->m_tree_less = child_less; node->m_tree_more = child_more; } if (!child->m_tree_parent) { m_tree_root = child; } else { if (child->m_tree_parent->m_tree_less == node) { child->m_tree_parent->m_tree_less = child; } else { child->m_tree_parent->m_tree_more = child; } } } if (child->m_tree_less) { child->m_tree_less->m_tree_parent = child; } if (child->m_tree_more) { child->m_tree_more->m_tree_parent = child; } if (node->m_tree_less) { node->m_tree_less->m_tree_parent = node; } if (node->m_tree_more) { node->m_tree_more->m_tree_parent = node; } } Node *child = node->m_tree_less ? node->m_tree_less : node->m_tree_more; if (node->m_tree_parent->m_tree_less == node) { node->m_tree_parent->m_tree_less = child; } else { node->m_tree_parent->m_tree_more = child; } if (child) { child->m_tree_parent = node->m_tree_parent; } return node; } // Red/Black tree manipulation routine, used for rebalancing a tree after a deletion void RebalanceTreeAfterDeletion ( Node *node ) { Node *child = node->m_tree_less ? 
node->m_tree_less : node->m_tree_more; if (node->m_colour == Black) { if (child && child->m_colour == Red) { child->m_colour = Black; } else { Node *parent = node->m_tree_parent, *n = child; while (parent) { Node *sibling = n->Sibling (parent); if (sibling && sibling->m_colour == Red) { parent->m_colour = Red; sibling->m_colour = Black; if (n == parent->m_tree_more) { LeftRotate (parent); } else { RightRotate (parent); } } sibling = n->Sibling (parent); if (parent->m_colour == Black && sibling->m_colour == Black && (!sibling->m_tree_more || sibling->m_tree_more->m_colour == Black) && (!sibling->m_tree_less || sibling->m_tree_less->m_colour == Black)) { sibling->m_colour = Red; n = parent; parent = n->m_tree_parent; continue; } else { if (parent->m_colour == Red && sibling->m_colour == Black && (!sibling->m_tree_more || sibling->m_tree_more->m_colour == Black) && (!sibling->m_tree_less || sibling->m_tree_less->m_colour == Black)) { sibling->m_colour = Red; parent->m_colour = Black; break; } else { if (n == parent->m_tree_more && sibling->m_colour == Black && (sibling->m_tree_more && sibling->m_tree_more->m_colour == Red) && (!sibling->m_tree_less || sibling->m_tree_less->m_colour == Black)) { sibling->m_colour = Red; sibling->m_tree_more->m_colour = Black; RightRotate (sibling); } else { if (n == parent->m_tree_less && sibling->m_colour == Black && (!sibling->m_tree_more || sibling->m_tree_more->m_colour == Black) && (sibling->m_tree_less && sibling->m_tree_less->m_colour == Red)) { sibling->m_colour = Red; sibling->m_tree_less->m_colour = Black; LeftRotate (sibling); } } sibling = n->Sibling (parent); sibling->m_colour = parent->m_colour; parent->m_colour = Black; if (n == parent->m_tree_more) { sibling->m_tree_less->m_colour = Black; LeftRotate (parent); } else { sibling->m_tree_more->m_colour = Black; RightRotate (parent); } break; } } } } } } // Red/Black tree manipulation routine, used for balancing the tree void LeftRotate ( Node *node ) { Node *less = 
node->m_tree_less; node->m_tree_less = less->m_tree_more; if (less->m_tree_more) { less->m_tree_more->m_tree_parent = node; } less->m_tree_parent = node->m_tree_parent; if (!node->m_tree_parent) { m_tree_root = less; } else { if (node == node->m_tree_parent->m_tree_more) { node->m_tree_parent->m_tree_more = less; } else { node->m_tree_parent->m_tree_less = less; } } less->m_tree_more = node; node->m_tree_parent = less; } // Red/Black tree manipulation routine, used for balancing the tree void RightRotate ( Node *node ) { Node *more = node->m_tree_more; node->m_tree_more = more->m_tree_less; if (more->m_tree_less) { more->m_tree_less->m_tree_parent = node; } more->m_tree_parent = node->m_tree_parent; if (!node->m_tree_parent) { m_tree_root = more; } else { if (node == node->m_tree_parent->m_tree_less) { node->m_tree_parent->m_tree_less = more; } else { node->m_tree_parent->m_tree_more = more; } } more->m_tree_less = node; node->m_tree_parent = more; } // Member Data. Node *m_nodes, *m_queue_tail, *m_queue_head, *m_tree_root, *m_list_min, *m_list_max, *m_free_list; }; // A complex but more efficent method of calculating the results. // Memory management is done here outside of the timing portion. clock_t Complex ( int count, int window, GeneratorCallback input, OutputCallback output ) { Range <int> range (window); clock_t start = clock (); for (int i = 0 ; i < count ; ++i) { range.AddValue (input ()); if (range.RangeAvailable ()) { output (range.Min (), range.Max ()); } } clock_t end = clock (); return end - start; }
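A footnote on the algorithmics: both the sorted-list and tree versions above still pay per-sample ordering costs. The classic "ascending minima" technique tracks a sliding-window min/max in amortized O(1) per sample using two monotonic deques. Here is a sketch of that idea in Python — my own illustration, not part of the code above and not among the timed variants:

```python
from collections import deque

def window_min_max(values, window):
    """Yield (min, max) for every full window over values.

    Two monotonic deques hold candidate indices: one with increasing
    values (minima), one with decreasing values (maxima). Each index is
    appended and popped at most once, so the amortized cost per sample
    is O(1), independent of the window size.
    """
    min_dq, max_dq = deque(), deque()
    for i, v in enumerate(values):
        # Maintain the monotonic invariants.
        while min_dq and values[min_dq[-1]] >= v:
            min_dq.pop()
        min_dq.append(i)
        while max_dq and values[max_dq[-1]] <= v:
            max_dq.pop()
        max_dq.append(i)
        # Discard candidates that have slid out of the window.
        if min_dq[0] <= i - window:
            min_dq.popleft()
        if max_dq[0] <= i - window:
            max_dq.popleft()
        if i >= window - 1:
            yield values[min_dq[0]], values[max_dq[0]]

print(list(window_min_max([3, 1, 4, 1, 5], 3)))  # [(1, 4), (1, 4), (1, 5)]
```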
{ "language": "en", "url": "https://stackoverflow.com/questions/148003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Find out the calling stored procedure in SQL Server Is it possible to find out who called a stored procedure? For example, say I get an error in proc3. From within that proc I want to know if it was called by proc1 or proc2. A: There is no nice automatic way to do this (alas). So it really depends on how much you are prepared to (re)write your procs in order to be able to do this. If you have a logging mechanism, you might be able to read the log and work out who called you. For example, if you implement logging by inserting into a table: CREATE TABLE Log (timestamp datetime, spid int, procname varchar(255), message varchar(255) ) ... text of proc ... INSERT INTO Log SELECT GETDATE(), @@spid, @currentproc, 'doing something' -- you have to define @currentproc in each proc -- get name of caller SELECT @caller = procname FROM Log WHERE spid = @@spid AND timestamp = (SELECT max(timestamp) FROM Log WHERE timestamp < GETDATE() AND procname != @currentproc ) This wouldn't work for recursive calls, but perhaps someone can fix that? A: Do you need to know in proc3 at runtime which proc caused the error, or do you just need to know while debugging? You can use SQL Server Profiler if you only need to do it during debugging/monitoring. Otherwise in 2005 I don't believe you have the ability to stack trace. To work around it you could add an extra parameter to proc3, @CallingProc or something like that. Or you could add try/catch blocks to proc1 and proc2. BEGIN TRY EXEC Proc3 END TRY BEGIN CATCH SELECT 'Error Caught' SELECT ERROR_PROCEDURE() END CATCH Good reference here: http://searchsqlserver.techtarget.com/tip/1,289483,sid87_gci1189087,00.html and of course there's always SQL Server Books Online. SQL Server 2008 does have the ability to debug through procedures, however. A: You could have proc1 and proc2 pass their names into proc3 as a parameter. For example: CREATE PROCEDURE proc3 @Caller nvarchar(128) -- Name of calling proc.
AS BEGIN -- Produce error message that includes caller's name. RAISERROR ('Caller was %s.', 16,10, @Caller); END GO CREATE PROCEDURE proc1 AS BEGIN -- Get the name of this proc. DECLARE @ProcName nvarchar(128); SET @ProcName = OBJECT_NAME(@@PROCID); -- Pass it to proc3. EXEC proc3 @ProcName END GO CREATE PROCEDURE proc2 AS BEGIN -- Get the name of this proc. DECLARE @ProcName nvarchar(128); SET @ProcName = OBJECT_NAME(@@PROCID); -- Pass it to proc3. EXEC proc3 @ProcName END GO A: I would use an extra input parameter, to specify the source, if this is important for your logic. This will also make it easier to port your database to another platform, since you don't depend on some obscure platform dependent function.
{ "language": "en", "url": "https://stackoverflow.com/questions/148004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How to update unique values in SQL using a PostgreSQL sequence? In SQL, how do you update a table, setting a column to a different value for each row? I want to update some rows in a PostgreSQL database, setting one column to a number from a sequence, where that column has a unique constraint. I hoped that I could just use: update person set unique_number = (select nextval('number_sequence') ); but it seems that nextval is only called once, so the update uses the same number for every row, and I get a 'duplicate key violates unique constraint' error. What should I do instead? A: Don't use a subselect, rather use the nextval function directly, like this: update person set unique_number = nextval('number_sequence'); A: I consider pg's sequences a hack and a sign that incremental integers aren't the best way to key rows. Note that pgsql didn't get native support for UUIDs until 8.3: http://www.postgresql.org/docs/8.3/interactive/datatype-uuid.html The benefit of UUIDs is that the combinations are nearly infinite, unlike a random number, which will hit a collision one day.
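The difference is easier to see outside SQL: in the failing form the subselect is evaluated once and its single result is reused for every row, while calling nextval directly makes it run once per row. A rough Python analogy (illustrative only — the real sequence lives in the database; `count` stands in for it here):

```python
from itertools import count

sequence = count(start=1)  # stand-in for the database sequence
rows = [{"unique_number": None} for _ in range(3)]

# Like the subselect: the value is computed once, then reused per row.
same = next(sequence)
for row in rows:
    row["unique_number"] = same
print([r["unique_number"] for r in rows])   # every row got 1

# Like calling nextval directly: evaluated once per row.
for row in rows:
    row["unique_number"] = next(sequence)
print([r["unique_number"] for r in rows])   # 2, 3, 4 -- all distinct
```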
{ "language": "en", "url": "https://stackoverflow.com/questions/148005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How can I plot data with a non-numeric X-axis? I have a series of performance tests I would like to show as a graph. I have a set of tests (about 10) which I run on a set of components (currently 3), and get throughput results. The Y-axis would be the throughput result from the test, and the X-axis should have an abbreviated name of the test, with the results from the various components I'm testing. So, for each X label (eg. retrieve20Items, store20Items) there would be 3 different results above it, one for each of the three components I'm testing, each colour-coded and referenced in the legend. Is this non-numeric x-axis something that I can do with gnuplot? This is being done on a linux platform, so Windows-only tools won't work for me. A: See this very helpful page. Essentially you create a number-label mapping using set xtics ("lbl1" 1, "lbl2" 2, "lbl3" 3, "lbl4" 4) Then plot as normal.
{ "language": "en", "url": "https://stackoverflow.com/questions/148020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why is my parameter passed by reference not modified within the function? I have got a C function in a static library, let's call it A, with the following interface: int A(unsigned int a, unsigned long long b, unsigned int *y, unsigned char *z); This function will change the values of y and z (this is for sure). I use it from within a dynamic C++ library, using extern "C". Now, here is what stuns me: * *y is properly set, z is not changed. What I mean exactly is that if both are initialized with a (pointed) value of 666, the value pointed to by y will have changed after the call but not the value pointed to by z (still 666). *when called from a C binary, this function works seamlessly (the value pointed to by z is modified). *if I create a dummy C library with a function having the same prototype, and I use it from within my dynamic C++ library, it works very well. If I re-use the same variables to call A(..), I get the same result as before: z is not changed. I think that the above points show that it is not a stupid mistake with the declaration of my variables. I am clearly stuck, and I can't change the C library. Do you have any clue on what the problem can be? I was thinking about a problem on the C/C++ interface, for instance the way a char* is interpreted. Edit: I finally found out what the problem was. See my answer below. A: It looks like a difference between the way your C library and C++ compiler are dealing with long longs. My guess is that the C library is probably pre-C89 standard and actually treating the 64-bit long long as a 32-bit long. Your C++ library is handling it correctly and placing 64 bits on the call stack, hence corrupting y and z. Maybe try calling the function through int A(unsigned int a, unsigned long b, unsigned int *y, unsigned char *z), and see what you get. Just a thought. A: This is one of those questions where there's nothing obviously wrong from what you've described, yet things aren't working the way you expect.
I think you should edit your post to give a lot more information in order to get some sensible answers. In particular, let's start with: * *What platform is this code for: Windows, Linux, something embedded or ...? *What compiler is the C static library built with? *What compiler is the C++ dynamic library built with? *What compiler is the C which can successfully call the library built with? *Do you have a source-level debugger? If so, can you step into the C code from the C++? Unless you're wrong about A always modifying the data pointed to by z, the only likely cause of your problem is an incompatibility between the parameter passing conventions. The "long long" issue may be a hint that things are not as they seem. As a last resort, you could compare the disassembled C++ calling code (which you say fails) and the C calling code (which you say succeeds), or step through the CPU instructions with the debugger (yes, really - you'll learn a good skill as well as solving the problem). A: As far as I know, long long is not part of standard C++, maybe that is the source of your problem. A: dunno. Try to debug-step into A and see what happens (assembly code alert!) A: Maybe you can wrap the original function in a C library that you call from your C++ library? Based on your points 2 and 3, it seems like this could work. If it doesn't, it gives you another debug point to find more clues - see which of your libraries the failure first pops up in, and check why 2 and 3 work, but this doesn't - what is the minimal difference? You could also try to examine the stack that is set up by your function call in each case to check if the difference is here -- considering different calling conventions. A: Step 1: Compare the pointers y and z passed from the C++ side with those received by the C function. P.S. I don't want to sound obvious, but just double-checking here.
I suppose when you say that z is modified just fine when called from a C binary, you mean that the data where z is pointing is modified just fine. The pointers y and z themselves are passed by value, so you can't change the pointers. A: Another wild guess: are you sure you're linking against the right instance of the function in your C library? Could it be that there are several such functions available in your libraries? In C the linker doesn't care about the return type or the parameter list when deciding how to resolve a function -- only the name is important. So, if you have multiple functions with the same name... You could programmatically verify the identity of the function. Create a C library that calls your function A with some test parameters and that works fine and that prints the pointer to function A. Link the library into your C++ app. Then print the pointer to the original A function as seen from the C++ code and compare the pointer with that seen by your C library when invoked in the same process. A: Again, an obvious one, but who knows... Are you sure the C function you're invoking is stateless, meaning its output depends only on its inputs? If the function isn't stateless, then it might be that the "hidden" state is responsible for the different behavior (not changing the data pointed to by z) of the function when invoked from your C++ app. A: First of all, I am very grateful to everyone for your help. Thanks to the numerous ideas and clues you gave me, I have been able to finally sort out this problem. Your advice helped me to question what I took for granted. Short answer to my problem: The problem was that my C++ library used an old version of the C library. This old version missed the 4th argument. As a consequence, the 4th argument was obviously never changed. I am a bit ashamed now that I realised this was the problem. However, I was misled by the fact that my code was compiling fine.
This was due to the fact that the C++ library compiled against the correct version of the C lib, but at runtime it used the old version statically linked with another library that I was using.
C++ Lib (M) ---> dyn C++ lib (N) ---> C lib (P) v.1.0
            |
            ------> C lib (P) v.1.1
(N) is a dynamic library which is statically linked with (P) version 1.0. The compiler accepted the call from (M) to the function with 4 arguments because I linked against (P) version 1.1, but at runtime it used the old version of (P). Feel free to edit this answer or the question or to ask me to do so. A: In your C++ program, is the prototype declared with extern "C"?
{ "language": "en", "url": "https://stackoverflow.com/questions/148024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best approach to migrate a CGI to a Framework? I have a big web application running in Perl CGI. It's running OK and it's well written, but as it was done in the past, all the HTML is hardcoded in the CGI calls, so as you can imagine, it's hard to maintain and improve. So now I would like to start to add some templating and integrate with a framework (Catalyst or CGI::Application). My question is: has anybody here had an experience like that? Are there any things I must pay attention to? I'm aware that with both frameworks I can run native CGI scripts, which is good because I can run both (native CGI and "frameworked" code) together without any trauma. Any tips? A: Extricate the HTML from the processing logic in the CGI script. Identify all code that affects the HTML output, as these are candidates for becoming template variables. Separate that into an HTML file, with the identified parts marked with template variables. Eventually you will be able to refactor the page such that all processing is done at the start of the code and the HTML template just called up at the end of all processing. A: In this kind of situation (rewriting from scratch, basically), the old code is useful for A) testing, and B) design details. Ideally you'd make a set of tests for all the basic functionality that you want to replicate, or at least tests that parse the final result pages so you can see the new code is returning the same information for the same inputs. Design details within the code might be useless, depending on how much the framework handles automatically. If you have a good set of tests, and a straightforward conversion works well, you're done. If the behavior of the new code doesn't match the old, you probably need to dig deeper into the "why?", and that'll probably be something odd-looking that doesn't make sense at first glance. One thing to remember to do first: find out if anyone has made something similar in the framework you're using.
You could save yourself a LOT of time and money. A: Here is how I did it using Python instead of Perl, but that should not matter: * *Separated out HTML and code into distinct files. I used a template engine for that. *Created functions from the code which rendered a template with a set of parameters. *Organized the functions (which I termed views, inspired by Django) in a sensible way. (Admin views, User views, etc.) The views all follow the same calling convention! *Refactored out the database and request stuff so that the views would only contain view specific code (read: Handling GET, POST requests, etc. but nothing low-level!). Relied heavily on existing libraries for that. I am here at the moment. :-) The next obvious step is of course: * *Write a dispatcher which maps URLs to your views. This will also lead to nicer URLs and nicer 404- and error handling of course. A: One of the assumptions that frameworks make is that the urls map to the code. For example in a framework you'll often see the following: http://app.com/docs/list http://app.com/docs/view/123 Usually though the old CGI scripts don't work like that, you're more likely to have something like: http://app.com/docs.cgi?action=view&id=123 To take advantage of the framework you may well need to change all the urls. Whether you can do this, and how you keep old links working, may well form a large part of your decision. Also frameworks provide support for some sort of ORM (object relational mapper) which abstracts the database calls and lets you only deal with objects. For Catalyst this is usually DBIx::Class. You should evaluate what the cost of switching to this will be. You'll probably find that you want to do a complete rewrite, with the old code as a reference platform. This may be much less work than you expect. However start with a few toy sites to get a feel for whichever framework/orm/template you decide to go with. A: Write tests first (for example with Test::WWW::Mechanize). 
Then when you change things you always know if something breaks, and what it is that breaks. Then extract HTML into templates, and commonly used subs into modules. After that it's a piece of cake to switch to a framework. In general, go step by step so that you always have a working application.
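The "write tests first" advice is language-agnostic: capture the old application's output for a set of known inputs (a characterization, or "golden master", test), then assert the rewrite reproduces it. A sketch of the idea in Python with stand-in render functions — in the Perl case these would fetch real pages, e.g. via Test::WWW::Mechanize:

```python
def legacy_render(params):
    # Stand-in for fetching a page from the old CGI application.
    return "<h1>Doc %s</h1>" % params["id"]

def new_render(params):
    # Stand-in for fetching the same page from the rewritten application.
    return "<h1>Doc %s</h1>" % params["id"]

def characterize(render, cases):
    """Record the output produced for each known input case."""
    return {repr(case): render(case) for case in cases}

cases = [{"id": "1"}, {"id": "42"}]
golden = characterize(legacy_render, cases)   # captured before the rewrite
assert characterize(new_render, cases) == golden, "rewrite changed behaviour"
```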
{ "language": "en", "url": "https://stackoverflow.com/questions/148039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Using OR comparisons with IF statements When using IF statements in Python, you have to do the following to make the "cascade" work correctly. if job == "mechanic" or job == "tech": print "awesome" elif job == "tool" or job == "rock": print "dolt" Is there a way to make Python accept multiple values when checking for "equals to"? For example, if job == "mechanic" or "tech": print "awesome" elif job == "tool" or "rock": print "dolt" A: You can use in: if job in ["mechanic", "tech"]: print "awesome" When checking against very large numbers of items, it may also be worth storing off a set of the items to check, as this will be faster. E.g. AwesomeJobs = set(["mechanic", "tech", ... lots of others ]) ... def func(): if job in AwesomeJobs: print "awesome" A: if job in ("mechanic", "tech"): print "awesome" elif job in ("tool", "rock"): print "dolt" The values in parentheses are a tuple. The in operator checks to see whether the left-hand side item occurs somewhere inside the right-hand tuple. Note that when Python searches a tuple or list using the in operator, it does a linear search. If you have a large number of items on the right-hand side, this could be a performance bottleneck. A larger-scale way of doing this would be to use a frozenset: AwesomeJobs = frozenset(["mechanic", "tech", ... lots of others ]) def func(): if job in AwesomeJobs: print "awesome" The use of frozenset over set is preferred if the list of awesome jobs does not need to be changed during the operation of your program.
Both tuples and lists use linear search for the in operator. Sets use a hash-based lookup, so they will be faster for a larger number of options. A: In other languages I'd use a switch/select statement to get the job done. You can do that in Python too.
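The switch/select idea in the last answer is commonly written as a dict dispatch in Python. A minimal sketch, reusing the job names from the question (the "unknown" default branch is an addition for illustration):

```python
# Dict-based dispatch, emulating a switch/select statement.
RESPONSES = {
    "mechanic": "awesome",
    "tech": "awesome",
    "tool": "dolt",
    "rock": "dolt",
}

def respond(job):
    # dict.get supplies a default branch, like switch's "default:" label
    return RESPONSES.get(job, "unknown")

print(respond("tech"))  # awesome
print(respond("rock"))  # dolt
print(respond("chef"))  # unknown
```

Lookup in the dict is hash-based, so like a set it stays fast as the number of cases grows.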
{ "language": "en", "url": "https://stackoverflow.com/questions/148042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Is it OK to have object instantiation 'hooks' in base classes? I have created my own Tree implementation for various reasons and have come up with two classes, a 'base' class that is a generic tree node that is chock full of logic and another class that extends that one which is more specialised. In my base class certain methods involve instantiating new tree nodes (e.g. adding children). These instantiations are inside logic (in a nested loop, say) which makes the logic hard to separate from the instantiation. So, if I don't override these instantiations in the specific class the wrong type of node will be created. However, I don't want to override those methods because they also contain shared logic that shouldn't be duplicated! The problem can be boiled down to this: public class Foo { public String value() { return "foo"; } public Foo doStuff() { // Logic logic logic.. return new Foo(); } } class Bar extends Foo { public String value() { return "bar"; } } new Bar().doStuff().value(); // returns 'foo', we want 'bar' The first thing that popped into my head was to have a 'create hook' that extending classes could override: public Foo createFooHook(/* required parameters */) { return new Foo(); } Now, while it was a fine first thought, there is a stench coming off that code something awful. There is something very... wrong about it. It's like cooking while naked-- it feels dangerous and unnecessary. So, how would you deal with this situation? A: So, after getting my copy of Design Patterns and opening it for what I'm fairly sure is the first time ever I discovered what I want. It's called the Factory Method and it's mostly a perfect fit. It's still a bit ugly because my super class (Foo in the above example) is not abstract which means subclasses are not forced to implement the hook. 
That can be fixed with some refactoring though, and I'll end up with something to the effect of: abstract class AbstractFoo { public String value() { return "Foo"; } public AbstractFoo doStuff() { // Logic logic logic return hook(); } protected abstract AbstractFoo hook(); } class Foo extends AbstractFoo { protected AbstractFoo hook() { return new Foo(); } } class Bar extends AbstractFoo { public String value() { return "Bar"; } protected AbstractFoo hook() { return new Bar(); } } new Bar().doStuff().value(); // Returns 'Bar'! A: In addition to the Factory pattern, I'd take a look at the Composite pattern - it tends to lend itself well to working with a Factory in tree-based situations. Composite Design Pattern A: I don't think there's a better approach. Just be careful not to call these hooks from the constructor.
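The Factory Method hook translates directly to other languages; here is a brief Python sketch of the same shape (hypothetical class names mirroring the Java example, offered only as an illustration of the pattern):

```python
# Factory-method hook, mirroring the Java example above: the shared
# logic lives in do_stuff, while node creation is delegated to hook().
class AbstractFoo:
    def value(self):
        return "Foo"

    def do_stuff(self):
        # ...shared logic lives here, then delegate creation...
        return self.hook()

    def hook(self):
        # Subclasses must supply the concrete node type.
        raise NotImplementedError


class Foo(AbstractFoo):
    def hook(self):
        return Foo()


class Bar(AbstractFoo):
    def value(self):
        return "Bar"

    def hook(self):
        return Bar()


print(Bar().do_stuff().value())  # Bar
```

In Python specifically, `type(self)()` can often replace the explicit hook, since the concrete class is available at runtime.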
{ "language": "en", "url": "https://stackoverflow.com/questions/148056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Call a Mathematica program from the command line, with command-line args, stdin, stdout, and stderr If you have Mathematica code in foo.m, Mathematica can be invoked with -noprompt and with -initfile foo.m (or -run "<<foo.m") and the command line arguments are available in $CommandLine (with extra junk in there) but is there a way to just have some mathematica code like #!/usr/bin/env MathKernel x = 2+2; Print[x]; Print["There were ", Length[ARGV], " args passed in on the command line."]; linesFromStdin = readList[]; etc. and chmod it executable and run it? In other words, how does one use Mathematica like any other scripting language (Perl, Python, Ruby, etc)? A: Here is a solution that does not require an additional helper script. You can use the following shebang to directly invoke the Mathematica kernel: #!/bin/sh exec <"$0" || exit; read; read; exec /usr/local/bin/math -noprompt "$@" | sed '/^$/d'; exit (* Mathematica code starts here *) x = 2+2; Print[x]; The shebang code skips the first two lines of the script and feeds the rest to the Mathematica kernel as standard input. The sed command drops empty lines produced by the kernel. This hack is not as versatile as MASH. Because the Mathematica code is read from stdin you cannot use stdin for user input, i.e., the functions Input and InputString do not work. A: Assuming you add the Mathematica binaries to the PATH environment variable in ~/.profile, export PATH=$PATH:/Applications/Mathematica.app/Contents/MacOS Then you just write this shebang line in your Mathematica scripts. #!/usr/bin/env MathKernel -script Now you can dot-slash your scripts. $ cat hello.ma #!/usr/bin/env MathKernel -script Print["Hello World!"] $ chmod a+x hello.ma $ ./hello.ma "Hello World!" Tested with Mathematica 8.0. Minor bug: Mathematica surrounds Print[s] with quotes in Windows and Mac OS X, but not Linux. WTF? A: Try -initfile filename And put the exit command into your program A: I found another solution that worked for me. 
Save the code in a .m file, then run it like this: MathKernel -noprompt -run "<<yourfile.m" This is the link: http://bergmanlab.smith.man.ac.uk/?p=38 A: MASH -- Mathematica Scripting Hack -- will do this. Since Mathematica version 6, the following Perl script suffices: http://ai.eecs.umich.edu/people/dreeves/mash/mash.pl For previous Mathematica versions, a C program is needed: http://ai.eecs.umich.edu/people/dreeves/mash/pre6 UPDATE: At long last, Mathematica 8 supports this natively with the "-script" command-line option: http://www.wolfram.com/mathematica/new-in-8/mathematica-shell-scripts/ A: For Mathematica 7 $ cat test.m #!/bin/bash MathKernel -noprompt -run < <( cat $0| sed -e '1,4d' ) | sed '1d' exit 0 ### code start Here ... ### Print["Hello World!"] X=7 X*5 Usage: $ chmod +x test.m $ ./test.m "Hello World!" 7 35
{ "language": "en", "url": "https://stackoverflow.com/questions/148057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How would you test an SSL connection? I'm experimenting with OpenSSL on my network application and I want to test if the data sent is encrypted and can't be seen by an eavesdropper. What tools can you use to check? Could this be done programmatically so it could be placed in a unit test? A: Franci Penov made an answer to one of my questions "Log Post Parameters sent to a website", suggesting I take a look at Fiddler: http://www.fiddler2.com/fiddler2/ I tried it and it works beautifully, if you're interested in viewing HTTP requests. :) A: openssl has an s_client, which is a quick and dirty generic client that you can use to test the server connection. It'll show the server certificate and negotiated encryption scheme. A: I found this guide very helpful. These are some of the tools that he used: $ openssl s_client -connect mail.prefetch.net:443 -state -nbio 2>&1 | grep "^SSL" $ ssldump -a -A -H -i en0 $ ssldump -a -A -H -k rsa.key -i en0 $ ssldump -a -A -H -k rsa.key -i en0 host fred and port 443 A: Check out Wireshark http://www.wireshark.org/ and tcpdump http://en.wikipedia.org/wiki/Tcpdump Not sure about integrating these into unit tests. They will let you look at a very low level what's going on at the network level. Perhaps for the unit test, determine what the stream looks like unencrypted and make sure the encrypted stream is not similar. A: Yeah - Wireshark (http://www.wireshark.org/) is pretty cool (filters, reports, stats). As to testing, you could do it as a part of integration tests (there are some command line options in Wireshark). A: For a quick check you can use Wireshark (formerly known as Ethereal) to see if your data is transmitted in plain-text or not. A: As mentioned before http://www.wireshark.org/, you can also use Cain & Abel to redirect the traffic to a third machine and analyze the protocol from there.
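On the "programmatically, in a unit test" part of the question: none of the answers above give code, but as a sketch, Python's standard ssl module can report the negotiated protocol and cipher for a connection, which a test can then assert on. The helper below is hypothetical (a real test would point it at the server under test); the live call is left to the test, while the bottom lines show what can be checked without any network at all:

```python
import socket
import ssl

def negotiated_cipher(host, port=443, timeout=10):
    """Open a TLS connection and return (protocol_version, cipher_name).

    A test can assert on the result -- e.g. that a modern protocol and a
    non-trivial cipher were negotiated -- showing the channel is encrypted.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            name, version, bits = tls.cipher()
            return tls.version(), name

# Even without touching the network, a test can verify the client side
# is configured to refuse unverified peers:
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```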
{ "language": "en", "url": "https://stackoverflow.com/questions/148059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Is encrypting AJAX calls for authentication possible with jQuery? I'm fairly new to the AJAX methodologies (I only recently discovered jQuery a short time ago). I am interested to know if there is any way to authenticate a user on a PHP setup, securely. Does jQuery have any special options to allow use of HTTPS (or any other way to encrypt my AJAX call)? Yes, I could very well just post data back to the server, but that ruins the fun. :) A: Well, in case you are interested, there is an AES JavaScript implementation. I had lots of fun playing with it :). Still, it might be a little tricky... A: Unless jQuery already does this (I use MooTools so I wouldn't know) I'd highly suggest that you link the AJAX login to the PHP session by using a $_GET variable in the query string. This way even though it's through HTTPS, you'll still know what session it's tied to for an added layer of protection. A: To use Ajax over HTTPS, you have to load the originating page over HTTPS. Same origin policy So, in a sense, yes -- but not on its own.
{ "language": "en", "url": "https://stackoverflow.com/questions/148068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Problem using large binary segment in OOXML System Description A plotting component that uses OOXML to generate a document. Plotting component consists of several parts. All parts are written in C++ as exe + dll's, with the exception of the interface to the OOXML document. The latter component is a COM component that was created in C#/.NET. The main reason for this is that the .NET framework contains System.IO.Packaging. This is a very handy built-in facility for dealing with OOXML documents. We create a document out of a template OOXML document where certain bits and pieces are replaced by their actual content. One of these bits is an OLE Server component. Basically this is a binary segment within the OOXML file. For writing this binary segment, the Packaging component apparently uses isolated storage. Problem Writing a segment > 8MB results in an exception being thrown "Unable to determine the identity of the domain". On the C++ side this exception contains the error ISS_E_ISOSTORE ( 0x80131450 ). We have analyzed this and as far as we can tell, this is a security feature that prevents a semi-untrusted third-party component from completely ruining your HD by writing immense files. We have then tried a lot of things in the .NET/COM component ( creating custom AppDomains, setting Attributes for maximum permissiveness, creating our own Streams and passing those to the Packaging component ) but every time it resulted in the same exception being thrown. What could we do to make this work? Could it be that when the .NET component is instantiated as a COM component, its AppDomain is always untrusted? A: You might try to unzip the package yourself (instead of using the .NET package API), write directly to the file which represents the binary segment and zip it again. A: You should change the title of that question since your problem is not OOXML related. Other than that: what system are you working on that 8MB chunks of data result in the risk of totalling your hard drive?
{ "language": "en", "url": "https://stackoverflow.com/questions/148071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is the sorting algorithm used by .NET's `Array.Sort()` method a stable algorithm? Is the sorting algorithm used by .NET's Array.Sort() method a stable algorithm? A: From MSDN: This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal. The sort uses introspective sort. (Quicksort in version 4.0 and earlier of the .NET framework). If you need a stable sort, you can use Enumerable.OrderBy. A: Adding to Rasmus Faber's answer… Sorting in LINQ, via Enumerable.OrderBy and Enumerable.ThenBy, is a stable sort implementation, which can be used as an alternative to Array.Sort. From Enumerable.OrderBy documentation over at MSDN: This method performs a stable sort; that is, if the keys of two elements are equal, the order of the elements is preserved. In contrast, an unstable sort does not preserve the order of elements that have the same key. Also, any unstable sort implementation, like that of Array.Sort, can be stabilized by using the position of the elements in the source sequence or array as an additional key to serve as a tie-breaker. 
Below is one such implementation, as a generic extension method on any single-dimensional array and which turns Array.Sort into a stable sort: using System; using System.Collections.Generic; public static class ArrayExtensions { public static void StableSort<T>(this T[] values, Comparison<T> comparison) { var keys = new KeyValuePair<int, T>[values.Length]; for (var i = 0; i < values.Length; i++) keys[i] = new KeyValuePair<int, T>(i, values[i]); Array.Sort(keys, values, new StabilizingComparer<T>(comparison)); } private sealed class StabilizingComparer<T> : IComparer<KeyValuePair<int, T>> { private readonly Comparison<T> _comparison; public StabilizingComparer(Comparison<T> comparison) { _comparison = comparison; } public int Compare(KeyValuePair<int, T> x, KeyValuePair<int, T> y) { var result = _comparison(x.Value, y.Value); return result != 0 ? result : x.Key.CompareTo(y.Key); } } } Below is a sample program using StableSort from above: static class Program { static void Main() { var unsorted = new[] { new Person { BirthYear = 1948, Name = "Cat Stevens" }, new Person { BirthYear = 1955, Name = "Kevin Costner" }, new Person { BirthYear = 1952, Name = "Vladimir Putin" }, new Person { BirthYear = 1955, Name = "Bill Gates" }, new Person { BirthYear = 1948, Name = "Kathy Bates" }, new Person { BirthYear = 1956, Name = "David Copperfield" }, new Person { BirthYear = 1948, Name = "Jean Reno" }, }; Array.ForEach(unsorted, Console.WriteLine); Console.WriteLine(); var unstable = (Person[]) unsorted.Clone(); Array.Sort(unstable, (x, y) => x.BirthYear.CompareTo(y.BirthYear)); Array.ForEach(unstable, Console.WriteLine); Console.WriteLine(); var stable = (Person[]) unsorted.Clone(); stable.StableSort((x, y) => x.BirthYear.CompareTo(y.BirthYear)); Array.ForEach(stable, Console.WriteLine); } } sealed class Person { public int BirthYear { get; set; } public string Name { get; set; } public override string ToString() { return string.Format( "{{ BirthYear = {0}, Name = {1} }}", 
BirthYear, Name); } } Below is the output from the sample program above (running on a machine with Windows Vista SP1 and .NET Framework 3.5 SP1 installed): { BirthYear = 1948, Name = Cat Stevens } { BirthYear = 1955, Name = Kevin Costner } { BirthYear = 1952, Name = Vladimir Putin } { BirthYear = 1955, Name = Bill Gates } { BirthYear = 1948, Name = Kathy Bates } { BirthYear = 1956, Name = David Copperfield } { BirthYear = 1948, Name = Jean Reno } { BirthYear = 1948, Name = Jean Reno } { BirthYear = 1948, Name = Kathy Bates } { BirthYear = 1948, Name = Cat Stevens } { BirthYear = 1952, Name = Vladimir Putin } { BirthYear = 1955, Name = Bill Gates } { BirthYear = 1955, Name = Kevin Costner } { BirthYear = 1956, Name = David Copperfield } { BirthYear = 1948, Name = Cat Stevens } { BirthYear = 1948, Name = Kathy Bates } { BirthYear = 1948, Name = Jean Reno } { BirthYear = 1952, Name = Vladimir Putin } { BirthYear = 1955, Name = Kevin Costner } { BirthYear = 1955, Name = Bill Gates } { BirthYear = 1956, Name = David Copperfield } A: No, it isn't: This method uses the QuickSort algorithm. This implementation performs an unstable sort A: As other answers have stated, Array.Sort isn't stable. However, the LINQ OrderBy methods (and OrderByDescending etc) are stable, which can be very useful. A: UPDATE: This code does not stabilize Array.Sort (i.e., it does not ensure that equal elements are always sorted in the same order): public static class ComparisonExtensions { public static Comparison<T> WithGetHashCode<T>(this Comparison<T> current) { return (x, y) => { var result = current(x, y); if (result == 0) return x.GetHashCode() - y.GetHashCode(); return result; }; } } Use: Comparison<Person> comparison = (x, y) => x.BirthYear.CompareTo(y.BirthYear); Array.Sort(unstable, comparison.WithGetHashCode());
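The index-as-tie-breaker trick used by StableSort above is language-independent; here is a brief Python sketch of the same idea with a subset of the sample data from the answer (Python's built-in sort happens to be stable already, so the decoration only makes the tie-breaking explicit):

```python
people = [(1948, "Cat Stevens"), (1955, "Kevin Costner"),
          (1952, "Vladimir Putin"), (1955, "Bill Gates"),
          (1948, "Kathy Bates")]

# Decorate each element with its original position; the position acts
# as the tie-breaker whenever two birth years compare equal.
decorated = [(year, i, name) for i, (year, name) in enumerate(people)]
decorated.sort()  # even an unstable sort would now give a stable result
stable = [(year, name) for year, i, name in decorated]

for year, name in stable:
    print(year, name)  # ties keep their original relative order
```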
{ "language": "en", "url": "https://stackoverflow.com/questions/148074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: How to make a method exclusive in a multithreaded context? I have a method which should be executed in an exclusive fashion. Basically, it's a multi-threaded application where the method is invoked periodically by a timer, but which could also be manually triggered by a user action. Let's take an example: * *The timer elapses, so the method is called. The task could take a few seconds. *Right after, the user clicks on some button, which should trigger the same task: BAM. It does nothing since the method is already running. I used the following solution: public void DoRecurentJob() { if(!Monitor.TryEnter(this.lockObject)) { return; } try { // Do work } finally { Monitor.Exit(this.lockObject); } } Where lockObject is declared like this: private readonly object lockObject = new object(); Edit: There will be only one instance of the object which holds this method, so I updated the lock object to be non-static. Is there a better way to do that? Or maybe this one is just wrong for any reason? A: This looks reasonable if you are just interested in not having the method run in parallel. There's nothing to stop it from running immediately after each other, say if you pushed the button half a microsecond after the timer executed the Monitor.Exit(). 
Possibly make it a static method which takes an instance as the parameter? Does it actually use the instance data? If not, make it static. If it does, you should at least return a boolean to say whether or not you did the work with the instance - I find it hard to imagine a situation where I want some work done with a particular piece of data, but I don't care if that work isn't performed because some similar work was being performed with a different piece of data. I think it should work, but it does feel a little odd. I'm not generally a fan of using manual locking, just because it's so easy to get wrong - but this does look okay. (You need to consider asynchronous exceptions between the "if" and the "try" but I suspect they won't be a problem - I can't remember the exact guarantees made by the CLR.) A: I think Microsoft recommends using the lock statement, instead of using the Monitor class directly. It gives a cleaner layout and ensures the lock is released in all circumstances. public class MyClass { // Used as a lock context private readonly object myLock = new object(); public void DoSomeWork() { lock (myLock) { // Critical code section } } } If your application requires the lock to span all instances of MyClass you can define the lock context as a static field: private static readonly object myLock = new object(); A: The code is fine, but would agree with changing the method to be static as it conveys intention better. It feels odd that all instances of a class have a method between them that runs synchronously, yet that method isn't static. Remember you can always have the static syncronous method to be protected or private, leaving it visible only to the instances of the class. public class MyClass { public void AccessResource() { OneAtATime(this); } private static void OneAtATime(MyClass instance) { if( !Monitor.TryEnter(lockObject) ) // ... A: This is a good solution although I'm not really happy with the static lock. 
Right now you're not waiting for the lock so you won't get into trouble with deadlocks. But making locks too visible can easily get you into trouble the next time you have to edit this code. Also this isn't a very scalable solution. I usually try to make all the resources I try to protect from being accessed by multiple threads private instance variables of a class and then have a lock as a private instance variable too. That way you can instantiate multiple objects if you need to scale. A: A more declarative way of doing this is using the MethodImplOptions.Synchronized specifier on the method to which you wish to synchronize access: [MethodImpl(MethodImplOptions.Synchronized)] public void OneAtATime() { } However, this method is discouraged for several reasons, most of which can be found here and here. I'm posting this so you won't feel tempted to use it. In Java, synchronized is a keyword, so it may come up when reviewing threading patterns. A: We have a similar requirement, with the added requirement that if the long-running process is requested again, it should enqueue to perform another cycle after the current cycle is complete. It's similar to this: https://codereview.stackexchange.com/questions/16150/singleton-task-running-using-tasks-await-peer-review-challenge private bool queued = false; private bool running = false; private object thislock = new object(); void Enqueue() { queued = true; while (Dequeue()) { try { // do work } finally { running = false; } } } bool Dequeue() { lock (thislock) { if (running || !queued) { return false; } else { queued = false; running = true; return true; } } }
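For comparison outside .NET, Monitor.TryEnter maps onto a non-blocking lock acquire in most runtimes. A minimal Python sketch of the same "skip if already running" pattern (class and method names are made up):

```python
import threading

class RecurrentJob:
    def __init__(self):
        self._lock = threading.Lock()

    def do_recurrent_job(self):
        # acquire(blocking=False) mirrors Monitor.TryEnter: it returns
        # False immediately if another caller already holds the lock.
        if not self._lock.acquire(blocking=False):
            return False  # work is already in progress; skip this trigger
        try:
            # ... do the periodic work here ...
            return True
        finally:
            self._lock.release()

job = RecurrentJob()
print(job.do_recurrent_job())  # True: the lock was free
```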
{ "language": "en", "url": "https://stackoverflow.com/questions/148078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to convert a string into bignum in C code which extends Guile? In Guile 1.6.*, the function scm_istring2number(char *str,int strlen,int radix) does the work. However, this function does not exist in Guile 1.8.*. How can I accomplish the same task in Guile 1.8.*? This is not trivial because the function scm_string_to_number(SCM str,int radix) does not convert numbers larger than 2^31-1 (at least in Guile 1.6.*). A: According to the 1.8 ChangeLog, the function has been renamed scm_c_locale_stringn_to_number.
{ "language": "en", "url": "https://stackoverflow.com/questions/148079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to deploy an ASP.NET Application with zero downtime To deploy a new version of our website we do the following: * *Zip up the new code, and upload it to the server. *On the live server, delete all the live code from the IIS website directory. *Extract the new code zipfile into the now empty IIS directory This process is all scripted, and happens quite quickly, but there can still be a 10-20 second downtime when the old files are being deleted, and the new files being deployed. Any suggestions on a 0 second downtime method? A: I went through this recently and the solution I came up with was to have two sites set up in IIS and to switch between them. For my configuration, I had a web directory for each A and B site like this: c:\Intranet\Live A\Interface c:\Intranet\Live B\Interface In IIS, I have two identical sites (same ports, authentication etc) each with their own application pool. One of the sites is running (A) and the other is stopped (B). the live one also has the live host header. When it comes to deploy to live, I simply publish to the STOPPED site's location. Because I can access the B site using its port, I can pre-warm the site so the first user doesn't cause an application start. Then using a batch file I copy the live host header to B, stop A and start B. A: You need 2 servers and a load balancer. Here's in steps: * *Turn all traffic on Server 2 *Deploy on Server 1 *Test Server 1 *Turn all traffic on Server 1 *Deploy on Server 2 *Test Server 2 *Turn traffic on both servers Thing is, even in this case you will still have application restarts and loss of sessions if you are using "sticky sessions". If you have database sessions or a state server, then everything should be fine. A: OK so since everyone is downvoting the answer I wrote way back in 2008*... I will tell you how we do it now in 2014. We no longer use Web Sites because we are using ASP.NET MVC now. 
We certainly do not need a load balancer and two servers to do it, that's fine if you have 3 servers for every website you maintain but it's total overkill for most websites. Also, we don't rely on the latest wizard from Microsoft - too slow, and too much hidden magic, and too prone to changing its name. Here's how we do it: * *We have a post-build step that copies generated DLLs into a 'bin-pub' folder. *We use Beyond Compare (which is excellent**) to verify and sync changed files (over FTP because that is widely supported) up to the production server *We have a secure URL on the website containing a button which copies everything in 'bin-pub' to 'bin' (taking a backup first to enable quick rollback). At this point the app restarts itself. Then our ORM checks if there are any tables or columns that need to be added and creates them. That is only milliseconds downtime. The app restart can take a second or two but during the restart requests are buffered so there is effectively zero downtime. The whole deployment process takes anywhere from 5 seconds to 30 minutes, depending on how many files are changed and how many changes to review. This way you do not have to copy an entire website to a different directory but just the bin folder. You also have complete control over the process and know exactly what is changing. **We always do a quick eyeball of the changes we are deploying - as a last-minute double check, so we know what to test and if anything breaks we're ready. We use Beyond Compare because it lets you easily diff files over FTP. I would never do this without BC, you have no idea what you are overwriting. *Scroll to the bottom to see it :( BTW I would no longer recommend Web Sites because they are slower to build and can crash badly with half-compiled temp files. We used them in the past because they allowed more agile file-by-file deployment. 
Very quick to fix a minor issue and you can see exactly what you are deploying (if using Beyond Compare of course - otherwise forget it). A: Using Microsoft.Web.Administration's ServerManager class you can develop your own deployment agent. The trick is to change the PhysicalPath of the VirtualDirectory, which results in an online atomic switch between old and new web apps. Be aware that this can result in old and new AppDomains executing in parallel! The problem is how to synchronize changes to databases etc. By polling for the existence of AppDomains with old or new PhysicalPaths it is possible to detect when the old AppDomain(s) have terminated, and if the new AppDomain(s) have started up. To force an AppDomain to start you must make an HTTP request (IIS 7.5 supports Autostart feature) Now you need a way to block requests for the new AppDomain. I use a named mutex - which is created and owned by the deployment agent, waited on by the Application_Start of the new web app, and then released by the deployment agent once the database updates have been made. (I use a marker file in the web app to enable the mutex wait behaviour) Once the new web app is running I delete the marker file. A: The Microsoft Web Deployment Tool supports this to some degree: Enables Windows Transactional File System (TxF) support. When TxF support is enabled, file operations are atomic; that is, they either succeed or fail completely. This ensures data integrity and prevents data or files from existing in a "half-way" or corrupted state. In MS Deploy, TxF is disabled by default. It seems the transaction is for the entire sync. Also, TxF is a feature of Windows Server 2008, so this transaction feature will not work with earlier versions. 
I believe it's possible to modify your script for 0-downtime using folders as versions and the IIS metabase: * *for an existing path/url: * *path: \web\app\v2.0\ *url: http://app *Copy new (or modified) website to server under * *\web\app\v2.1\ *Modify IIS metabase to change the website path * *from \web\app\v2.0\ *to \web\app\v2.1\ This method offers the following benefits: * *In the event the new version has a problem, you can easily roll back to v2.0 *To deploy to multiple physical or virtual servers, you could use your script for file deployment. Once all servers have the new version, you can simultaneously change all servers' metabases using the Microsoft Web Deployment Tool. A: The only zero downtime methods I can think of involve hosting on at least 2 servers. A: You can achieve zero downtime deployment on a single server by utilizing Application Request Routing in IIS as a software load balancer between two local IIS sites on different ports. This is known as a blue-green deployment strategy where only one of the two sites is available in the load balancer at any given time. Deploy to the site that is "down", warm it up, and bring it into the load balancer (usually by passing an Application Request Routing health check), then take the original site that was up, out of the "pool" (again by making its health check fail). A full tutorial can be found here. A: I would refine George's answer a bit, as follows, for a single server: * *Use a Web Deployment Project to pre-compile the site into a single DLL *Zip up the new site, and upload it to the server *Unzip it to a new folder located in a folder with the right permissions for the site, so the unzipped files inherit the permissions correctly (perhaps e:\web, with subfolders v20090901, v20090916, etc) *Use IIS Manager to change the name of the folder containing the site *Keep the old folder around for a while, so you can fall back to it in the event of problems Step 4 will cause the IIS worker process to recycle. 
This is only zero downtime if you're not using InProc sessions; use SQL mode instead if you can (even better, avoid session state entirely). Of course, it's a little more involved when there are multiple servers and/or database changes.... A: To expand on sklivvz's answer, which relied on having some kind of load balancer (or just a standby copy on the same server) * *Direct all traffic to Site/Server 2 *Optionally wait a bit, to ensure that as few users as possible have pending workflows on the deployed version *Deploy to Site/Server 1 and warm it up as much as possible *Execute database migrations transactionally (strive to make this possible) *Immediately direct all traffic to Site/Server 1 *Deploy to Site/Server 2 *Direct traffic to both sites/servers It is possible to introduce a bit of smoke testing, by creating a database snapshot/copy, but that's not always feasible. If possible and needed use "routing differences", such as different tenant URLs (customerX.myapp.net) or different users, to deploy to an unknowing group of guinea pigs first. If nothing fails, release to everyone. Since database migrations are involved, rolling back to a previous version is often impossible. There are ways to make applications play nicer in these scenarios, such as using event queues and playback mechanisms, but since we're talking about deploying changes to something that is in use, there's really no foolproof way. A: This is how I do it: Absolute minimum system requirements: 1 server with * *1 load balancer/reverse proxy (e.g. 
nginx) running on port 80
* 2 ASP.NET-Core/mono reverse-proxy/fastcgi chroot-jails or docker-containers listening on 2 different TCP ports (or even just two reverse-proxy applications on 2 different TCP ports without any sandbox)
Workflow:
start transaction myupdate
try
    Web-Service: Tell all applications on all web-servers to go into primary read-only mode
    Application switches to primary read-only mode, and responds
    Web sockets begin notifying all clients
    Wait for all applications to respond
    wait (custom short interval)
    Web-Service: Tell all applications on all web-servers to go into secondary read-only mode
    Application switches to secondary read-only mode (data-entry fuse)
    Updatedb - secondary read-only mode (switches database to read-only)
    Web-Service: Create backup of database
    Web-Service: Restore backup to new database
    Web-Service: Update new database with new schema
    Deploy new application to apt-repository (for Windows, you will have to write your own custom deployment web-service)
    ssh into every machine in array_of_new_webapps
    run apt-get update
    then either apt-get dist-upgrade OR apt-get install <packagename> OR apt-get install --only-upgrade <packagename>, depending on what you need
    -- This deploys the new application to all new chroots (or servers/VMs)
    Test: Test new application under test.domain.xxx -- everything that fails should throw an exception here
    commit myupdate;
    Web-Service: Tell all applications to send web-socket request to reload the pages to all clients at time x (+/- random number)
    @client: notify of reload and that this causes loss of unsaved data, with option to abort
    @ time x: Switch load balancer from array_of_old_webapps to array_of_new_webapps
    Decommission/Recycle array_of_old_webapps, etc.
catch
    rollback myupdate
    switch to read-write mode
    Web-Service: Tell all applications to send web-socket request to unblock read-only mode
end try
A: A workaround with no downtime that I use regularly is:
* Rename the running .NET Core application dll to filename.dll.backup
* Upload the new .dll (the web application is available and serving requests while the file is being uploaded)
* Once the upload is complete, recycle the Application Pool. This either requires RDP access to the server or a function to recycle the application pool in your hosting control panel.
IIS overlaps the app pool when recycling, so there usually isn't any downtime during a recycle. So requests still come in without ever knowing the app pool has been recycled, and the requests are served seamlessly with no downtime. I am still searching for a better method than this..!! :)
A: IIS/Windows
After trying every possible solution we use this very simple technique:
* IIS application points to a folder /app that is a symlink (!) to /app_green
* We deploy the app to /app_blue
* We change the symlink to point to /app_blue (the app keeps working)
* We recycle the application pool
Zero downtime, but the app does choke for 3-5 seconds (JIT compilation and other initialization tasks). Someone called it a "poor man's blue-green deployment" without a load balancer.
Nginx/linux
On nginx/linux we use "proper" blue-green deployment:
* nginx reverse proxy points to localhost:3000
* we deploy to localhost:3001
* warm up localhost:3001
* switch the reverse proxy
* shut down localhost:3000 (or use docker)
Both Windows and Linux solutions can be easily automated with powershell/bash scripts and invoked via GitHub Actions or a similar CD/CI engine.
A: I would suggest keeping the old files there and simply overwriting them. That way the downtime is limited to single-file overwrite times and there is only ever one file missing at a time.
Not sure this helps in a "web application" though (I think you are saying that's what you're using), which is why we always use "web sites". Also, with "web sites", deploying doesn't restart your site and drop all the user sessions.
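The symlink flip from the "poor man's blue-green" answer above can be sketched with POSIX tools (the directory names are illustrative; on Windows the equivalent is mklink /D plus the app-pool recycle). Note that ln -sfn by itself unlinks and recreates the link, so the usual trick for an atomic switch is to build the replacement link under a temporary name and rename it over the old one:

```shell
# two versions deployed side by side; "app" is the live pointer
mkdir -p site/app_green site/app_blue
ln -s app_green site/app        # live traffic currently follows app_green

# flip to the new version atomically:
ln -sfn app_blue site/app.tmp   # build the replacement link under a temp name
mv -T site/app.tmp site/app     # rename(2) swaps the pointer in one step

readlink site/app               # now resolves to app_blue
```

The mv -T flag is GNU coreutils specific; the same rename-over trick works anywhere rename(2) is atomic.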
{ "language": "en", "url": "https://stackoverflow.com/questions/148084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133" }
Q: "Colorizing" images in .NET Is there any simple way to programmatically colorize images in .NET? Basically we have a black and white image and need to put a layer of, say, pink above it and reduce the opacity of that layer to make the picture colorized in pink.
A: You should use the wonderful ImageMagick library. It has .NET bindings, so no problem there. Have fun! :)
A: Check out these links: http://www.codeproject.com/KB/GDI-plus/csharpgraphicfilters11.aspx http://www.codeproject.com/KB/GDI-plus/KVImageProcess.aspx
A: The way that springs to mind is using the Drawing packages to draw a rectangle over the picture in a given colour (you can set alpha). It's not very efficient but, with caching, it wouldn't do any harm, even on a busy server.
A: This is a little bit too custom for a .NET Framework method. If you can't find a single-method-call solution, here is something to look at. In WPF, you could load the image in a control, and have another control (a Rectangle with a pink fill and transparency) on top of it. (Use something like a Grid for layout so that both of them overlap perfectly.) Next you could:
RenderTargetBitmap bmp = new RenderTargetBitmap(imageWidth, imageHeight, DPIHoriz, DPIVert, PixelFormats.Pbgra32);
// if you don't want to make the controls 'visible' on screen, you need to trigger size calculations explicitly.
grid.Measure(new Size(imageWidth, imageHeight));
grid.Arrange(new Rect(0, 0, imageWidth, imageHeight));
bmp.Render(grid);
So you get whatever you see on screen written into the bitmap in memory. You could then save it off. If that doesn't work, you can go for pixel-level control with the WriteableBitmap class and do byte-labor.
A: I think it will be a bit more complicated if you want to colorize an image rather than just putting a semi-transparent layer on top.
If you want to have the same effect as the "screen" layer mode in Photoshop, then you have to replace all the shades of black in the image with shades of the new color to keep the white parts white. It can most definitely be done in .NET, but I suppose it wouldn't hurt to look into a library of some sort.
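The "screen" blend mentioned above has a simple per-channel formula: result = 255 - (255 - a) * (255 - b) / 255, which maps black pixels to the tint color and leaves white pixels white. A small pure-Python sketch of the math (the pink value is just an example):

```python
def screen_blend(gray, tint):
    """Screen-blend a grayscale value (0-255) with an RGB tint."""
    return tuple(255 - (255 - gray) * (255 - t) // 255 for t in tint)

PINK = (255, 192, 203)

print(screen_blend(0, PINK))    # black -> the tint itself: (255, 192, 203)
print(screen_blend(255, PINK))  # white stays white: (255, 255, 255)
print(screen_blend(128, PINK))  # mid-gray -> a lighter pink
```

Applied per pixel (for example over the bytes exposed by LockBits on a GDI+ Bitmap), this tints the image without flattening the highlights the way a plain semi-transparent overlay does.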
{ "language": "en", "url": "https://stackoverflow.com/questions/148086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: enabling/disabling asp.net web service extension via script In IIS 6, I can use the Web Service Extensions folder in Inetmgr to allow/prohibit isapi filters, such as ASP.net. I want to be able to do this programmatically (in particular, from an installer script/exe). Any ideas?
A: Adding Web Service Extension Files Using Iisext.vbs should be pretty much what you're looking for (the linked article describes how to add a new filter: if you just need to enable it, scroll down and see the list of linked articles for exact instructions on how to achieve that)
A:
Set iisinfo = GetObject("IIS://localhost/W3SVC/Info")
If CInt(iisinfo.MajorIIsVersionNumber) >= 6 Then
    Set iisinfo = Nothing
    Set iis = GetObject("IIS://localhost/W3SVC")
    iis.EnableWebServiceExtension "ASP"
End If
{ "language": "en", "url": "https://stackoverflow.com/questions/148097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can addChild choose the wrong insertion index? So, in a Flex app I add a new GUI component by creating it and calling parent.addChild(). However, in some cases this causes an error in the bowels of Flex. Turns out, addChild actually does:
return addChildAt(child, numChildren);
In the cases where it breaks, somehow the numChildren is off by one, leading to this error:
RangeError: Error #2006: The supplied index is out of bounds.
    at flash.display::DisplayObjectContainer/addChildAt()
    at mx.core::Container/addChildAt()
    at mx.core::Container/addChild()
    .
    .
    at flash.events::EventDispatcher/dispatchEventFunction()
    at flash.events::EventDispatcher/dispatchEvent()
    at mx.core::UIComponent/dispatchEvent()
    at mx.controls::SWFLoader::contentLoaderInfo_completeEventHandler()
Is this a bug in Flex or in how I am using it? It kind of looks like it could be a threading bug, but since Flex doesn't support threads that is a bit confusing.
A: I have noticed that it most often occurs when re-parenting a UIComponent that is already on the display list. Are you re-parenting in this situation?
A: Could it be possible that you are adding a child before the component has been fully initialized? Maybe try adding a child after Event.COMPLETE has been broadcast? It may not support threads, but it's still asynchronous...
A: numChildren doesn't validly reference an existing index in the children array. Arrays in AS3 are indexed starting at 0. This means that the last item in your array has index numChildren - 1, not numChildren. Try addChildAt(child, numChildren - 1);
A: OK, like a dope, I was trying to add a child to a container even though it was already there, hence the confusing "wrong insertion index" message.
A: cf. http://forums.devshed.com/flash-help-38/scroll-pane-scroll-bars-not-working-818174.html - what you need to do is add children to a display object, and then set the source of the scrollpane to be the display object. Kinda like this...
Code:
var myDisplay:Sprite = new Sprite(); // DisplayObject can't be instantiated directly; use a concrete subclass such as Sprite
myDisplay.addChild(myChild1);
myDisplay.addChild(myChild2);
myDisplay.addChild(myChild3);
myDisplay.addChild(myChild4);
ScrollPane.source = myDisplay;
ScrollPane.update();
{ "language": "en", "url": "https://stackoverflow.com/questions/148116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What framework would you recommend for making desktop-like apps for the web? Several frameworks for writing web-based desktop-like applications have recently appeared, e.g. SproutCore and Cappuccino. Do you have any experience using them? What's your impression? Did I miss some other framework? I've seen related questions on StackOverflow, but they generate mostly standard answers like "use jQuery or MochiKit or MooTools or Dojo or YUI". While some people give non-standard answers, they seem to have little experience using these frameworks. Can anyone share real experience developing desktop-like apps for the browser?
A: From my point of view, Cappuccino is an example of what NOT to do. They implemented another language on top of JavaScript, which adds slowness that browser developers are already fighting hard against, and, what is worse, they don't rely at all on the browser's native widgets, breaking the user's navigation experience. For example, they implemented their own scrollbar, with the main drawback that the mouse wheel won't work anymore! I really prefer ExtJS's approach, which gives you rich widgets while keeping the UI as close as possible to the browser.
A: Due to the speed issues these high-level frameworks cause for many larger (as in: non-trivial) applications, we only use plain jQuery. In our tests, all high-level frameworks broke down in situations where there are many draggable objects or many drop targets, and in situations where long lists (with >1000 entries) were shown on screen. Part of this is due to issues with IE6 and IE7 (where performance suddenly starts to deteriorate dramatically after DOM trees reach a certain complexity), but part is due to the overhead these frameworks generate. So I would not recommend any of the high-level frameworks. My recommendation would be to use jQuery and work with the DOM directly. Some tips to improve performance:
* Where possible, render HTML on the server.
* Keep the HTML as simple as possible.
* Avoid having many elements in the DOM tree.
* Avoid recursive table structures (IE suddenly stops showing them after relatively few levels of nesting).
* Remove invisible elements from the DOM tree.
* Remove things from the DOM tree before changing them and then re-insert them, rather than changing them while they're in the tree.
A: I also, as gizmo, recommend EXT JS. Their license has changed and it may not work for all, but it's still a good choice if you want to do stuff like a desktop. Here's their example page for a desktop environment: http://extjs.com/deploy/dev/examples/desktop/desktop.html
A: Apple is demonstrating that SproutCore does work, although it's hard to estimate how well it works. Currently I build web apps with a home-grown set of libraries, duplicating a set of functionality from our Windows software suite (but adapted to a web interface). Up to now I've avoided frameworks, particularly because I didn't want the bloat. The problem with this approach is that I waste an inordinate amount of time duplicating functionality that's already in the frameworks, and I feel that over time I'm going to approximate something that resembles these frameworks anyway. Because of this I've been experimenting with implementing a web app in ExtJS, and it was a surprisingly nice experience. The performance is excellent, and ease of development is quite high because their component set is good for actually building apps, not just for fancy demos (a common problem in web toolkits). I would definitely recommend it if you are interested in building desktop-like web apps. The problem of scaling it up obviously still applies, but honestly, I feel that it's better to use a toolkit in situations where scale is not that important, and to fall back to basic JavaScript only where you need to (premature optimization being the root of all evil). ExtJS can layer on top of Prototype or jQuery, so this approach is definitely workable.
Avoiding too much content in the DOM usually means loading and unloading content on demand. For example, there's a third-party extension to the ExtJS grid class that allows scrolling through a million-row dataset by being clever about loading and unloading data.
A: You might consider GWT-Ext (which uses Ext underneath); it might be a very clean solution if you're going to use Java.
A: I like qooxdoo; although it takes the OOP approach to JS rather than the prototypal one, it is a solid framework and has a lot of features.
A: I don't have any experience with SproutCore or Cappuccino. But I have made attempts to use Dojo on top of Django for this kind of work. I can only tell you it's slow and buggy.
A: extjs might be of help. http://dev.extjs.com/deploy/dev/examples/
{ "language": "en", "url": "https://stackoverflow.com/questions/148118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Performance considerations for Triggers vs Constraints I'm trying to find out whether I should be using business-critical logic in a trigger or constraint inside of my database. So far I've added logic in triggers, as it gives me control over what happens next and means I can provide custom user messages instead of an error that will probably confuse the users. Is there any noticeable performance gain in using constraints over triggers, and what are the best practices for determining which to use?
A: Triggers can blossom into a performance problem. About the same time that happens, they've also become a maintenance nightmare. You can't figure out what's happening and (bonus!) the application behaves erratically with "spurious" data problems. [Really, they're trigger issues.] No end-user touches SQL directly. They use application programs. Application programs contain business logic in a much smarter and more maintainable way than triggers. Put the application logic in application programs. Put data in the database. Unless you and your "users" don't share a common language, you can explain the constraint violations to them. The alternative -- not explaining -- turns a simple database into a problem because it conflates the data and the application code into an unmaintainable quagmire. "How do I get absolute assurance that everyone's using the data model correctly?" Two (and a half) techniques.
* Make sure the model is right: it matches the real-world problem domain. No hacks or workarounds or shortcuts that can only be sorted out through complex hand-waving explanations, stored procedures and triggers.
* Help define the business model layer of the applications. The layer of application code that everyone shares and reuses.
  * Also, be sure that the model layer meets people's needs. If the model layer has the right methods and collections, there's less incentive to bypass it to get direct access to the underlying data.
Generally, if the model is right, this isn't a profound concern. Triggers are a train-wreck waiting to happen. Constraints aren't.
A: In addition to the other reasons to use constraints, the Oracle optimizer can use constraints to its advantage. For example, if you have a constraint saying (Amount >= 0) and then you query with WHERE (Amount = -5), Oracle knows immediately that there are no matching rows.
A: Constraints and triggers are for 2 different things. Constraints are used to constrain the domain (valid inputs) of your data. For instance, an SSN would be stored as char(9), but with a constraint of [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] (all numeric). Triggers are a way of enforcing business logic in your database. Taking SSN again, perhaps an audit trail needs to be maintained whenever an SSN is changed - that would be done with a trigger. In general, data integrity issues in a modern RDBMS can be handled with some variation of a constraint. However, you'll sometimes get into a situation where improper normalization (or changed requirements, resulting in now-improper normalization) prevents a constraint. In that case, a trigger may be able to enforce your constraint - but it is opaque to the RDBMS, meaning it can't be used for optimization. It's also "hidden" logic, and can be a maintenance issue. Deciding whether to refactor the schema or use a trigger is a judgment call at that point.
A: Generally speaking I would prefer constraints, and my code would catch SQL Server errors and present something more friendly to the user.
A: @onedaywhen You can have a query as a constraint in SQL Server, you just have to be able to fit it in a scalar function: http://www.eggheadcafe.com/software/aspnet/30056435/check-contraints-and-tsql.aspx
A: If at all possible use constraints. They tend to be slightly faster. Triggers should be used for complex logic that a constraint can't handle.
Trigger writing is tricky as well, and if you find you must write a trigger, make sure to use set-based statements because triggers operate against the whole insert, update or delete (yes, there will be times when more than one record is affected - plan on that!), not just one record at a time. Do not use a cursor in a trigger if it can be avoided. As for whether to put the logic in the application instead of a trigger or constraint: DO NOT DO THAT!!! Yes, the applications should have checks before they send the data, but data integrity and business logic must be at the database level, or your data will get messed up when multiple applications hook into it, when global inserts are done outside the application, etc. Data integrity is key to databases and must be enforced at the database level.
A: @Mark Brackett: "Constraints are used to constrain the domain... Triggers are a way of enforcing business logic": It's not that simple in SQL Server because its constraints' functionality is limited, e.g. not yet full SQL-92. Take the classic example of a sequenced 'primary key' in a temporal database table: ideally I'd use a CHECK constraint with a subquery to prevent overlapping periods for the same entity, but SQL Server can't do that, so I have to use a trigger. Also missing from SQL Server is the SQL-92 ability to defer the checking of constraints; instead they are (in effect) checked after every SQL statement, so again a trigger may be necessary to work around SQL Server's limitations.
A: Constraints hands down!
* With constraints you specify relational principles, i.e. facts about your data. You will never need to change your constraints, unless some fact changes (i.e. new requirements).
* With triggers you specify how to handle data (in inserts, updates etc.). This is a "non-relational" way of doing things.
To explain myself better with an analogy: the proper way to write a SQL query is to specify "what you want" instead of "how to get it" – let the RDBMS figure out the best way to do it for you. The same applies here: if you use triggers you have to keep in mind various things like the order of execution, cascading, etc... Let SQL do that for you with constraints if possible. That's not to say that triggers don't have uses. They do: sometimes you can't use a constraint to specify some fact about your data. It is extremely rare though. If it happens to you a lot, then there's probably some issue with the schema.
A: Best practice: if you can do it with a constraint, use a constraint. Triggers are not quite as bad as their reputation suggests (if used correctly), although I would always use a constraint wherever possible. In a modern RDBMS, the performance overhead of triggers is comparable to constraints (of course, that doesn't mean someone can't place horrendous code in a trigger!). Occasionally it's necessary to use a trigger to enforce a 'complex' constraint, such as wanting to enforce that one and only one of a table's two foreign key fields is populated (I've seen this situation in a few domain models). The debate over whether the business logic should reside in the application rather than the DB depends to some extent on the environment; if you have many applications accessing the DB, both constraints and triggers can serve as a final guard that the data is correct.
A: @Meff: there are potential problems with the approach of using a function because, simply put, SQL Server CHECK constraints were designed with a single row as the unit of work, and have flaws when working on a resultset. For some more details on this, see David Portas' blog post "Trouble with CHECK Constraints": http://blogs.conchango.com/davidportas/archive/2007/02/19/Trouble-with-CHECK-Constraints.aspx
A: Same as Skliwz.
Just to let you know, a canonical use of triggers is an audit table. If many procedures update/insert/delete a table you want to audit (who modified what and when), a trigger is the simplest way to do it. One way is to simply add a flag to your table (active/inactive, with some uniqueness constraint) and insert something into the audit table. Another way, if you don't want the table to hold the historical data, is to copy the old row into your audit table... Many people have many ways of doing it. But one thing is for sure: you'll have to perform an insert for each update/insert/delete on this table. To avoid writing that insert in dozens of different places, you can use a trigger here.
A: I agree with everyone here about constraints. There is a tendency to overuse triggers, especially with new developers. I have seen situations where a trigger fires another trigger which fires another trigger that repeats the first trigger, creating a cascading trigger that ties up your server. This is a non-optimal use of triggers ;o) That being said, triggers have their place and should be used when appropriate. They are especially good for tracking changes in data (as Mark Brackett mentioned). You need to answer the question "Where does it make the most sense to put my business logic"? Most of the time I think it belongs in the code, but you have to keep an open mind.
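To make the constraint-vs-trigger split concrete, here is a minimal sketch using SQLite (through Python's sqlite3 module) as a stand-in for the SQL Server/Oracle systems discussed above; the table and column names are invented for illustration. The CHECK constraint guards the domain declaratively, while the trigger handles the audit-trail case described in the answer above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- domain rule: a constraint, checked declaratively
    CREATE TABLE account (
        id     INTEGER PRIMARY KEY,
        amount INTEGER NOT NULL CHECK (amount >= 0)
    );
    -- audit trail: procedural logic, so a trigger
    CREATE TABLE account_audit (account_id INTEGER, old_amount INTEGER, new_amount INTEGER);
    CREATE TRIGGER trg_account_audit AFTER UPDATE ON account
    BEGIN
        INSERT INTO account_audit VALUES (OLD.id, OLD.amount, NEW.amount);
    END;
""")

conn.execute("INSERT INTO account (amount) VALUES (100)")
try:
    conn.execute("INSERT INTO account (amount) VALUES (-5)")
except sqlite3.IntegrityError as e:
    print("rejected by constraint:", e)

conn.execute("UPDATE account SET amount = 80 WHERE id = 1")
print(conn.execute("SELECT * FROM account_audit").fetchall())  # [(1, 100, 80)]
```

The constraint rejects the bad row before it ever lands in the table; the trigger only records changes and never needs to re-validate the domain.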
{ "language": "en", "url": "https://stackoverflow.com/questions/148129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I peek at the first two bytes in an InputStream? Should be pretty simple: I have an InputStream where I want to peek at (not read) the first two bytes, i.e. I want the "current position" of the InputStream to still be at 0 after my peeking. What is the best and safest way to do this? Answer - As I had suspected, the solution was to wrap it in a BufferedInputStream which offers markability. Thanks Rasmus.
A: For a general InputStream, I would wrap it in a BufferedInputStream and do something like this:
BufferedInputStream bis = new BufferedInputStream(inputStream);
bis.mark(2);
int byte1 = bis.read();
int byte2 = bis.read();
bis.reset();
// note: you must continue using the BufferedInputStream instead of the inputStream
A: When using a BufferedInputStream, make sure that the inputStream is not already buffered; double buffering will cause some seriously hard-to-find bugs. Also, you need to handle Readers differently: converting to an InputStreamReader and buffering will cause bytes to be lost if the Reader is buffered. And if you are using a Reader, remember that you are not reading bytes but characters in the default encoding (unless an explicit encoding was set). An example of a buffered input stream that you may not realize is buffered is URL.openStream(). I do not have any references for this information; it comes from debugging code. The main case where the issue occurred for me was in code that read from a file into a compressed stream. If I remember correctly, once you start debugging through the code, there are comments in the Java source that certain things do not always work correctly. I do not remember where the information about using BufferedReader and BufferedInputStream comes from, but I think it fails straight away on even the simplest test. Remember, to test this you need to mark more than the buffer size (which is different for BufferedReader versus BufferedInputStream); the problems occur when the bytes being read reach the end of the buffer.
Note there is a source-code buffer size which can be different from the buffer size you set in the constructor. It is a while since I did this, so my recollections of the details may be a little off. Testing was done using a FilterReader/FilterInputStream; add one to the direct stream and one to the buffered stream to see the difference.
A: I found an implementation of a PeekableInputStream here: http://www.heatonresearch.com/articles/147/page2.html The idea of the implementation shown in the article is that it keeps an array of "peeked" values internally. When you call read, the values are returned first from the peeked array, then from the input stream. When you call peek, the values are read and stored in the "peeked" array. As the license of the sample code is LGPL, it can be attached to this post:
package com.heatonresearch.httprecipes.html;

import java.io.*;

/**
 * The Heaton Research Spider Copyright 2007 by Heaton
 * Research, Inc.
 *
 * HTTP Programming Recipes for Java ISBN: 0-9773206-6-9
 * http://www.heatonresearch.com/articles/series/16/
 *
 * PeekableInputStream: This is a special input stream that
 * allows the program to peek one or more characters ahead
 * in the file.
 *
 * This class is released under the:
 * GNU Lesser General Public License (LGPL)
 * http://www.gnu.org/copyleft/lesser.html
 *
 * @author Jeff Heaton
 * @version 1.1
 */
public class PeekableInputStream extends InputStream {

  /**
   * The underlying stream.
   */
  private InputStream stream;

  /**
   * Bytes that have been peeked at.
   */
  private byte peekBytes[];

  /**
   * How many bytes have been peeked at.
   */
  private int peekLength;

  /**
   * The constructor accepts an InputStream to setup the
   * object.
   *
   * @param is
   *          The InputStream to parse.
   */
  public PeekableInputStream(InputStream is) {
    this.stream = is;
    this.peekBytes = new byte[10];
    this.peekLength = 0;
  }

  /**
   * Peek at the next character from the stream.
   *
   * @return The next character.
   * @throws IOException
   *           If an I/O exception occurs.
   */
  public int peek() throws IOException {
    return peek(0);
  }

  /**
   * Peek at a specified depth.
   *
   * @param depth
   *          The depth to check.
   * @return The character peeked at.
   * @throws IOException
   *           If an I/O exception occurs.
   */
  public int peek(int depth) throws IOException {
    // does the size of the peek buffer need to be extended?
    if (this.peekBytes.length <= depth) {
      byte temp[] = new byte[depth + 10];
      for (int i = 0; i < this.peekBytes.length; i++) {
        temp[i] = this.peekBytes[i];
      }
      this.peekBytes = temp;
    }

    // does more data need to be read?
    if (depth >= this.peekLength) {
      int offset = this.peekLength;
      int length = (depth - this.peekLength) + 1;
      int lengthRead = this.stream.read(this.peekBytes, offset, length);

      if (lengthRead == -1) {
        return -1;
      }

      this.peekLength = depth + 1;
    }

    return this.peekBytes[depth];
  }

  /*
   * Read a single byte from the stream. @throws IOException
   * If an I/O exception occurs. @return The character that
   * was read from the stream.
   */
  @Override
  public int read() throws IOException {
    if (this.peekLength == 0) {
      return this.stream.read();
    }

    int result = this.peekBytes[0];
    this.peekLength--;

    for (int i = 0; i < this.peekLength; i++) {
      this.peekBytes[i] = this.peekBytes[i + 1];
    }

    return result;
  }
}
A: You might find PushbackInputStream to be useful: http://docs.oracle.com/javase/6/docs/api/java/io/PushbackInputStream.html
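A runnable sketch of the accepted mark/reset approach (the three example bytes are arbitrary): wrap the stream in a BufferedInputStream, mark before the two reads, then reset so the next read starts at position 0 again.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PeekDemo {
    public static void main(String[] args) throws IOException {
        InputStream raw = new ByteArrayInputStream(new byte[] { 0x1f, (byte) 0x8b, 0x08 });
        BufferedInputStream in = new BufferedInputStream(raw);

        in.mark(2);              // remember position 0; valid for up to 2 bytes read
        int b1 = in.read();      // peek first byte
        int b2 = in.read();      // peek second byte
        in.reset();              // rewind: stream is back at position 0

        System.out.println(b1 + " " + b2);   // 31 139
        System.out.println(in.read());       // 31 -- the first byte again
    }
}
```

PushbackInputStream (mentioned below) works too: read the two bytes, then unread them; note it needs a capacity of 2 passed to its constructor, since the default pushback buffer is a single byte.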
{ "language": "en", "url": "https://stackoverflow.com/questions/148130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Oracle sql query, concatenate fields with CASE section I'm currently generating SQL insert statements from more than one table, and in the generated data I need to use a CASE statement, like this:
select 'INSERT INTO TABLE1 (f1, f2, f3, f4 ...) values ('
       ||t.f1||','
       ||CASE WHEN t.f2 > 0 THEN '1' ELSE '0' END CASE
from table2 t, table3 t3
But at this point if I want to continue my statement with ... END CASE||','|| .... I can't run the query anymore, as TOAD complains about not finding the FROM keyword. A quick solution was to separate the output into fields, then save it to text, and edit, but there must be a better way.
A: Use END instead of END CASE
select 'INSERT INTO TABLE1 (f1, f2, f3, f4 ...) values ('
       ||t.f1||','
       ||CASE WHEN t.f2 > 0 THEN '1' ELSE '0' END||','||t.f2
from table2 t, table3 t3
A: For some similar situations, the "decode" function works quite well. You might be able to feed the expression (t.f2 > 0) into a decode, and then translate 'T' into '1' and 'F' into '0'. I haven't tried this.
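The fix (END rather than END CASE when the CASE expression sits inside a larger expression) can be seen with any SQL engine that supports || concatenation. Here is a sketch using SQLite through Python's sqlite3 as a stand-in for Oracle; the table and values are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (f1 TEXT, f2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("a", 5), ("b", -1)])

# CASE ... END (not END CASE) can sit in the middle of a concatenation:
sql = """
    SELECT 'INSERT INTO table1 (f1, f2) VALUES ('''
           || f1 || ''','
           || CASE WHEN f2 > 0 THEN '1' ELSE '0' END
           || ');'
    FROM t
"""
for (stmt,) in conn.execute(sql):
    print(stmt)
# INSERT INTO table1 (f1, f2) VALUES ('a',1);
# INSERT INTO table1 (f1, f2) VALUES ('b',0);
```

The same pattern carries over to Oracle, where END CASE belongs only to PL/SQL's CASE statement, not to the SQL CASE expression.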
{ "language": "en", "url": "https://stackoverflow.com/questions/148136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there any way to tell Visual Studio not to open all the documents when I load a solution? When you open a solution in Visual Studio 2008 (or earlier versions for that matter), it opens all the documents that you did not close before you closed Visual Studio. Is there any way to turn this functionality off, or a plugin that fixes this behavior? It takes forever to load a solution with 50 files open.
A: From Visual Studio 2017 Update 8 there is an option in Projects and Solutions which you can use to enable this:
A: ALT-W-L That's the key combination to close all open tabs, which can be pressed before closing a project, unless you prefer clicking Window | Close All Documents before closing the project. --Gus
A: Have you tried deleting the .suo file? It's a hidden file that lives beside your solution (sln) file. suo is "solution user options", and contains your last configuration, such as what tabs you left open the last time you worked on the project, so they open again when you reload the project in Visual Studio. If you delete it, a new 'blank' suo file will be recreated silently.
A: You can automate the process of closing all the files prior to closing a solution by adding a handler for the BeforeClosing event of EnvDTE.SolutionEvents -- this will get invoked when VS is exiting. In VS2005, adding the following to the EnvironmentEvents macro module will close all open documents:
Private Sub SolutionEvents_BeforeClosing() Handles SolutionEvents.BeforeClosing
    DTE.ExecuteCommand("Window.CloseAllDocuments")
End Sub
Visual Studio 2008 appears to support the same events, so I'm sure this would work there too. I'm sure you could also delete the .suo file for your project in the handler if you wanted, but you'd probably want the AfterClosing event.
A: I don't think there is an option for this (or I couldn't find one), but you could probably write a macro to do this for you on project open.
This link has some code to close open files which you could adapt: http://blogs.msdn.com/djpark/ I couldnt find the answer to this particular question but a good link for ide tips and tricks is: http://blogs.msdn.com/saraford/default.aspx A: Alternative answer: Before you close your solution, press and hold Ctrl+F4, until all windows have been closed. A: VS attempts to save the last known view. Other than the scripts mentioned above you can manually close all documents before exiting VS
{ "language": "en", "url": "https://stackoverflow.com/questions/148143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Sharepoint Workflow Modification is not disabled I am working on a SharePoint Server 2007 state machine workflow. Until now I have a few states and a custom association/initiation form which I created with InfoPath 2007. In addition I have a few modification forms. I have a problem with removing the modification link on the state page of my workflow. I have a state, and in the initialize block of this state my EnableWorkflowModification activity appears. So at the beginning of the state the modification is active. In the same state I have an OnWorkflowModification activity, which catches the event raised by the EnableWorkflowModification activity. After this state my modification is over and the link should disappear from the state page. But this is not the case. Both activities have the same correlation token (modification) and the same owner (the owning state). Does anybody have an idea why the link is not removed, and how to remove the modification link? Thank you in advance, Stefan!

A: Have you checked that the OnWorkflowModification event handler is actually firing? Try debugging or adding some event log traces to make sure it is. I've run into similar issues with the OnWorkflowItemChanged event handler.

A: Make sure you have the enableWorkflowModification and onWorkflowModified inside an eventHandlingScopeActivity, and set that as the OwnerActivityName for each.
{ "language": "en", "url": "https://stackoverflow.com/questions/148157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Generate getters and setters (Zend Studio for Eclipse) I'm using Zend Studio for Eclipse (Linux), and I'm trying to generate getter and setter methods in a PHP class. I try to do this: http://files.zend.com/help/Zend-Studio-Eclipse-Help/creating_getters_and_setters.htm but I don't have the "Generate Getters and Setters" option in the Source menu; it's missing! Could you help me? Thanks!

A: Like Omnipotent says, you can use templates to do this. Here is what I use:

    /**
     * @var ${PropertyType}
     */
    private $$m_${PropertyName};

    ${cursor}

    /**
     * Getter for ${PropertyName}
     *
     * @author ${user}
     * @since ${date} ${time}
     * @return ${PropertyType} private variable $$m_${PropertyName}
     */
    public function get${PropertyName}()
    {
        return $$this->m_${PropertyName};
    }

    /**
     * Setter for ${PropertyName}
     *
     * @author ${user}
     * @since ${date} ${time}
     * @param ${PropertyType} $$Value
     */
    public function set${PropertyName}($$Value)
    {
        $$this->m_${PropertyName} = $$Value;
    }

To create the template just go to the preferences. Then in PHP/Templates you will have your list of templates.

A: It has to be there under the Source menu in Eclipse. Could you provide a screenshot of your Eclipse to verify?

EDITED: I guess it is not possible to generate getters and setters automatically in your version, though you would be able to create templates for the same and use them as per your requirements. Omnipotent

A: I haven't seen anyone mention the Zend Studio Ctrl+3 shortcut/search: Ctrl+3 and search... I type "setters", and the first option on the menu is the "Generate Getters and Setters" wizard.

A: If there is a 'Refactor' menu, check in there as well. A lot of those methods have been moved to the 'Refactor' menu in later versions of Eclipse, and if Zend has updated recently and not updated its documentation, the items may have encountered an undocumented move.

A: @Omnipotent It's Zend Studio v6.01; the "generate getters and setters" feature should be available. I can see doc about it in Help Contents. By the way, I'll try updating to v6.1. Thanks anyway! EDITED: Templates and Code Assist work fine, but are not as useful as "Generate Getters and Setters".
{ "language": "en", "url": "https://stackoverflow.com/questions/148161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: When deleting hasOne or hasMany associations, should the foreignKey be set to NULL? Given: Group hasMany Persons, but the relationship is independent (i.e. Persons can exist without belonging to a Group). Should the foreign key in the persons table (i.e. group_id) be set to 0 (or NULL) when deleting a group? If you do not, the person will try to belong to a group that doesn't exist. The reason I ask is that this is the default behavior in CakePHP. If you set dependent to true, it will delete the associated models, but if it's set to false it will leave the associated model untouched.

A: Yes, the foreign keys should be set to NULL (or 0, if this is your chosen 'no group' value) or you lose referential integrity. If your database supports it, you should be able to set an 'On delete' trigger or a cascade rule in your framework to enforce this. And the behaviour in CakePHP seems correct. If the value is dependent, then it should be removed on deletion. If it isn't dependent, then you need to give extra behaviour logic as to the correct action to take (in this case, you want to set all values to NULL; in other cases, you may want to set a 'default' group, etc.)

A: In a word, yes. Leaving the foreign key on the persons table would result in the loss of referential integrity within the database.

A: > If you do not, the person will try to belong to a group that doesn't exist.

There is also a worse scenario: in the future a new group B may appear that reuses the id of deleted group A. Then all of former group A's users will be "magically" enlisted into new group B.

A: An alternative, more stable way to implement a situation where both entities are independent would be to remove the foreign key entirely from Person and create a join table group_persons. This way you won't have to worry about your referential integrity when deleting a group. When you delete a group, the association would be deleted from group_persons. The table would look like this:

    id, group_id, person_id

The group_persons model will look like this:

    Person hasMany GroupPerson
    Group hasMany GroupPerson
    GroupPerson belongsTo Person, Group

If you want the Person to only be able to be in one group at a time, set a unique validation rule in GroupPerson:

    var $validate = array(
        'person_id' => array(
            array(
                'rule' => 'isUnique',
                'message' => 'This person is already in a group.'
            )
        )
    );
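The "'On delete' trigger or cascade rule" suggestion in the first answer can also be expressed directly in the schema, if the database supports foreign key actions. A sketch in MySQL/InnoDB-style SQL (table and column names are illustrative, not taken from the question):

```sql
CREATE TABLE groups (
    id   INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE persons (
    id       INT PRIMARY KEY AUTO_INCREMENT,
    name     VARCHAR(100) NOT NULL,
    group_id INT NULL,  -- must be nullable for SET NULL to work
    FOREIGN KEY (group_id) REFERENCES groups (id)
        ON DELETE SET NULL
);
```

Deleting a row from groups then automatically nulls out group_id on every person that referenced it, so referential integrity is preserved without any application-side code.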
{ "language": "en", "url": "https://stackoverflow.com/questions/148169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Strange MFC / VC++ Linker Error (std::list already defined) I've got a really odd error message that only occurs when I add the following line to my project:

    std::list<CRect> myVar;

It's worth noting that it doesn't have to be a std::list; it can be std::vector or any other STL container, I assume. Here is the error message:

    Error 1 error LNK2005: "public: __thiscall std::list<class CRect,class std::allocator<class CRect> >::list<class CRect,class std::allocator<class CRect> >(void)" (??0?$list@VCRect@@V?$allocator@VCRect@@@std@@@std@@QAE@XZ) already defined in SomeLowLevelLibrary.lib

The low level library that's referenced in the error message has no idea about the project I am building; it only has core low level functionality and doesn't deal with high level MFC GUIs. I can get the linker error to go away if I change the line of code to:

    std::list<CRect*> myVar;

But I don't want to hack it for the sake of it. Also, it doesn't matter if I create the variable on the stack or the heap, I still get the same error. Does anyone have any ideas whatsoever about this? I'm using Microsoft Visual Studio 2008 SP1 on Vista Enterprise.

Edit: The linker error above is for the std::list<> constructor; I also get an error for the destructor, _Nextnode and clear functions.

Edit: In other files in the project, std::vector won't link; in other files it might be std::list. I can't work out why some containers work and some don't. MFC linkage is static across both libraries. In the low level library we have one class that inherits from std::list.

Edit: The low level library doesn't have any classes that inherit from CRect, but it does make use of STL.

A: You should be looking at the linker settings, but I can't immediately say which. It's normal for STL instantiations to be done in multiple files. The linker should pick one. They're all identical (assuming you do have consistent compiler settings).

A: I recently stumbled across this error again in our project and decided to have a more thorough investigation, compared to just patching it up with a hack like last time (swapping std::list for CArray). It turns out that one of our low level libraries was inheriting from std::list, e.g.

    class LIB_EXPORT CRectList : public std::list<CRect> { };

This is not just bad practice, but it also was the cause of the linker errors in the main application. I changed CRectList to wrap std::list rather than inherit from it, and the error went away.

A: This doesn't sound like the exact symptom, but to be sure you should check that your main project and all your included libraries use the same "Runtime Library" setting under "C++: Code Generation". Mixing these settings can create runtime library link errors. (What confuses me in your case is that you can make it go away by changing the code, but it's worth checking if you haven't already.)

A: Does SomeLowLevelLibrary.lib contain or use any classes named CRect? Does it use STL?

A: Is the file included in a header which might be compiled into two separate code modules?

A: Another random possibility popped into my head today. Is it possible that your current DLL and low level library are referencing two different versions of MFC? Long shot.
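The "wrap rather than inherit" fix described above can be sketched like this. It is an illustrative sketch, not the actual library code: Rect is a stand-in for MFC's CRect (the real class comes from the MFC headers) so the example is self-contained, and only the forwarding methods a caller needs are shown.

```cpp
#include <cstddef>
#include <list>

// Stand-in for MFC's CRect, used only so this sketch compiles without MFC.
struct Rect {
    int left = 0, top = 0, right = 0, bottom = 0;
};

// Composition instead of "class CRectList : public std::list<CRect>".
// The std::list lives as a private member, and only the operations the
// callers actually need are forwarded, so std::list no longer appears
// in the class's inheritance hierarchy or exported interface.
class RectList {
public:
    void push_back(const Rect& r) { m_rects.push_back(r); }
    std::size_t size() const { return m_rects.size(); }
    bool empty() const { return m_rects.empty(); }
private:
    std::list<Rect> m_rects;
};
```

Callers that only iterate or append keep working, and the container type can even be swapped later without touching the public interface.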
{ "language": "en", "url": "https://stackoverflow.com/questions/148178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best way to create a script to use with the Rails request profiler? The Rails script script/performance/request requires a session script; what is the best way to generate this session script?

A: Add this code to your application.rb file:

    before_filter :benchmark_log

    def benchmark_log
      File.open("request_log.txt", "a") do |f|
        f.puts request.method.to_s + " '" + request.request_uri + "', " +
               params.except(:action).except(:controller).inspect.gsub(/(^\{|\}$)/, "")
      end
    end

Then you can visit several pages in your browser, and the session script will be written into the request_log.txt file in your application's root directory.
{ "language": "en", "url": "https://stackoverflow.com/questions/148181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Where can I download OpenGL for Windows Vista? I've been searching and can't find the download files on opengl.org. Can someone please point me in the right direction?

A: OpenGL is just a standard. The implementations come with your graphics card drivers and are exposed using WGL extensions in Windows. There is a 'standard' implementation in the Platform SDK that is accessed by including the OpenGL headers and the Windows header, but this is a basic version (1.1 in XP; I think it's 1.4 in Vista). If you want an easier method to gain access to all the features from your card's supported features and the basic set of features in the Windows standard OpenGL implementation, then I'd suggest looking at GLEW (the OpenGL Extension Wrangler), which handles all the WGL calls to set up extensions for you.

A: OpenGL 1.1 header files are included in the Platform SDK. If you need to work with a more recent version this may help: Moving Beyond OpenGL 1.1 for Windows

A: Windows (well, Visual Studio at least) comes with OpenGL, but only the older v1.1 - just

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glu.h>

and link with opengl32.lib and glu32.lib, and you should be OK (it's been a while, I may have missed a bit in there).
{ "language": "en", "url": "https://stackoverflow.com/questions/148182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How efficient is define in PHP? The C++ preprocessor #define is totally different. Is the PHP define() any different than just creating a var?

    define("SETTING", 0);
    $something = SETTING;

vs

    $setting = 0;
    $something = $setting;

A: In general, the idea of a constant is to be constant (sounds funny, right? ;)) inside your program. Which means that the compiler (interpreter) will replace "FOOBAR" with FOOBAR's value throughout your entire script. So much for the theory and the advantages - if you compile. Now PHP is pretty dynamic, and in most cases you will not notice a difference, because the PHP script is compiled with each run. As far as I can tell, you should not see a notable difference in speed between constants and variables unless you use a byte-code cache such as APC, Zend Optimizer or eAccelerator. Then it can make sense. All other advantages/disadvantages of constants have already been noted here and can be found in the PHP manual.

A:

    php > $cat='';$f=microtime(1);$s='cowcow45';$i=9000;while ($i--){$cat.='plip'.$s.'cow';}echo microtime(1)-$f."\n";
    0.00689506530762
    php > $cat='';$f=microtime(1);define('s','cowcow45');$i=9000;while ($i--){$cat.='plip'.s.'cow';}echo microtime(1)-$f."\n";
    0.00941896438599

This is repeatable with similar results. It looks to me like constants are a bit slower to define and/or use than variables.

A: Here are the differences, from the manual:

* Constants do not have a dollar sign ($) before them;
* Constants may only be defined using the define() function, not by simple assignment;
* Constants may be defined and accessed anywhere without regard to variable scoping rules;
* Constants may not be redefined or undefined once they have been set; and
* Constants may only evaluate to scalar values.

For me, the main benefit is the global scope. I certainly don't worry about their efficiency - use them whenever you need a global scalar value which should not be alterable.

A: NOT efficient, it appears.
(And I'm basing all the assumptions here on one comment from php.net; I still haven't done the benchmarks myself.) Recalling a constant will take 2x the time of recalling a variable. Checking the existence of a constant will take 2ms, and 12ms for a false positive! Here's a benchmark from the comments of the define page in PHP's online doc. Before using defined(), have a look at the following benchmarks:

    true                           0.65ms
    $true                          0.69ms  (1)
    $config['true']                0.87ms
    TRUE_CONST                     1.28ms  (2)
    true                           0.65ms
    defined('TRUE_CONST')          2.06ms  (3)
    defined('UNDEF_CONST')        12.34ms  (4)
    isset($config['def_key'])      0.91ms  (5)
    isset($config['undef_key'])    0.79ms
    isset($empty_hash[$good_key])  0.78ms
    isset($small_hash[$good_key])  0.86ms
    isset($big_hash[$good_key])    0.89ms
    isset($small_hash[$bad_key])   0.78ms
    isset($big_hash[$bad_key])     0.80ms

PHP Version 5.2.6, Apache 2.0, Windows XP. Each statement was executed 1000 times, and while a 12ms overhead on 1000 calls isn't going to have the end users tearing their hair out, it does throw up some interesting results when comparing to if(true):

1) if($true) was virtually identical
2) if(TRUE_CONST) was almost twice as slow - I guess that the substitution isn't done at compile time (I had to double check this one!)
3) defined() is 3 times slower if the constant exists
4) defined() is 19 TIMES SLOWER if the constant doesn't exist!
5) isset() is remarkably efficient regardless of what you throw at it (great news for anyone implementing array driven event systems - me!)

You may want to avoid if(defined('DEBUG'))... from tris+php at tfconsulting dot com dot au, 26-Mar-2009 06:40, http://us.php.net/manual/en/function.defined.php#89886

A: The 'define' operation itself is rather slow - confirmed by the xdebug profiler.
Here are benchmarks from http://t3.dotgnu.info/blog/php/my-first-php-extension.html:

* pure 'define': 380.785 fetches/sec, 14.2647 mean msecs/first-response
* constants defined with the 'hidef' extension: 930.783 fetches/sec, 6.30279 mean msecs/first-response

Broken link update: The blog post referenced above has left the internet. It can still be viewed here via the Wayback Machine. Here is another similar article. The libraries the author references can be found here (apc_define_constants) and here (hidef extension).

A: A define is static, meaning its value can't be changed during runtime, while a variable is dynamic because you can freely manipulate its value along the way.

A: When I run speed tests, constants being set and dumped out run a little faster than setting variables and dumping them out.

A: 2020 update (PHP 7.2, AMD Ryzen9, Zend OpCache enabled) summary: redefining the same constant is slow; checking and defining constants vs $_GLOBALS is about 8x slower; checking undefined constants is slightly slower. Don't use globals.

* note: autoloaders and require_once with long paths are likely to be much larger problems than defines. (require_once requires PHP to stat(2) every directory in the path to check for symlinks; this can be reduced by using full paths to your file, so the PHP loader only has to stat the file path once and can use the stat cache)

CODE:

    $loops = 90000;
    $m0 = microtime(true);
    for ($i=0; $i<$loops; $i++) {
        define("FOO$i", true);
    }
    $m1 = microtime(true);
    echo "Define new const {$loops}s: (" . ($m1-$m0) . ")\n";
    // etc...
OUTPUT:

    Define new const 90000s: (0.012847185134888)
    Define same const 90000s: (0.89289903640747)
    Define same super global 90000s: (0.0010528564453125)
    Define new super global 90000s: (0.0080759525299072)
    check same undefined 90000s: (0.0021710395812988)
    check same defined 90000s: (0.00087404251098633)
    check different defined 90000s: (0.0076708793640137)

A: Not sure about efficiency, but it is more than creating a var:

* It is a constant: you can't redefine or reassign this SETTING.
* If the define isn't found, $something is set to "SETTING", which is useful, for example, in i18n: if a translation is missing (i.e. the corresponding define is missing from the localization file), we see a big word in uppercase, quite visible...

A: Main differences:

* define is constant, variable is variable
* they have different scope/visibility
{ "language": "en", "url": "https://stackoverflow.com/questions/148185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: SQL2005 Express slow from remote VB6 application I have a legacy VB6 application that was built using MSDE. As many clients' databases grow towards the MSDE 2 GB limit, they are upgraded to SQL 2005 Express. This has proven very successful until today. I have spent the entire day troubleshooting a client's network on which our application runs unacceptably slowly when connecting to a SQL 2005 Express named instance across the "network". I say "network" because it is only two XP SP2 machines - there is no dedicated server here. No AD.

In trying to isolate this problem I have installed SQL 2005 Express on both machines and placed copies of our database on both machines. I have even completely reinstalled our application using the SQL 2005 Express install routine we now have. It makes no difference whether I restore an old MSDE database or use a newly created SQL 2005 Express one. When running our application and connecting to either machine's local server, performance is fine. Once you connect our application on either PC to the server on the other PC, it is unworkably slow (regardless of the combination).

Now, I have rebuilt statistics (exec sp_updatestats), rebuilt ALL indexes, disabled (temporarily) firewalls and virus software, and clutched at countless other straws. I have resorted to running FileMon and ProcessMon on both machines and have even written a little test application to simply connect and query a table in the database. It too runs slowly (takes about 5 - 6 seconds to connect). The monitors (File and Process) show delays when SQL Server is writing to a log file (c:\program files\microsoft sql server\mssql.1\log files\log_12.trc). Other tools though, like SQL Management Studio Express and even SSEUtil (a SQL Server Express diagnostic utility I found), run perfectly when connecting from the client to the server. Queries (even large ones) run as you would expect.
I feel sure this problem is environmental, as we have so many sites running what would appear to be the same setup with no such problems. Can someone tell me what I should be doing to isolate this problem, or even offer any clues or suggestions that could help solve this?

A: This might be due to a cached query plan which is not representative of the data, even though you have rebuilt indexes and refreshed statistics. The symptom you describe (namely that a query runs fine from SSMS but not from an application) is often caused by a wrongly cached query plan. SSMS emits a "WITH RECOMPILE" under the covers. If you are calling a stored procedure, temporarily add 'WITH RECOMPILE' to its definition and check the results.

A: Have you tried connecting to the "server" PC from another machine? What happens? Have you tried the "client" to another "server" machine? What happens? The problem could just be something as mundane as a flaky network card or cable. Probably worth checking before you beat your brains out any further...

A: Make a checklist and systematically work through it. Add all suggestions of all posts here and some I add below:

* Network cables
* Network speed
* Defrag hard disk
* No network errors - do a ping and look for missing packets
* RAM per machine
* Processors
* Viruses

etc. etc.

A: What networking protocols have you got enabled in the 'surface configuration' tool? Can you alter your connection strings to use (temporarily) hardcoded IP addresses?
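The 'WITH RECOMPILE' suggestion from the first answer looks like this in T-SQL. The procedure name, parameter and body below are placeholders, not taken from the application in question:

```sql
-- Force a fresh plan on every execution (diagnostic only; remove once tested)
ALTER PROCEDURE dbo.GetCustomerOrders
    @CustomerId INT
WITH RECOMPILE
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END
```

A one-off alternative is EXEC dbo.GetCustomerOrders @CustomerId = 1 WITH RECOMPILE, which forces a fresh plan for that single call without altering the procedure. If performance improves, the cached plan was the likely culprit.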
{ "language": "en", "url": "https://stackoverflow.com/questions/148190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Integrating jQuery into an existing ASP.NET Web Application? Microsoft recently announced that the JavaScript/HTML DOM library jQuery will be integrated into the ASP.NET MVC framework and into ASP.NET / Visual Studio. What is the best practice or strategy for adopting jQuery using ASP.NET 2.0? I'd like to prepare a large, existing ASP.NET Web Application (not MVC) for jQuery. How would I deal with versioning and related issues? Are there any caveats integrating jQuery and ASP.NET Ajax? Or 3rd party components like Telerik or Intersoft controls?

A: For me, problems arise when using UpdatePanels and jQuery (no problem with MVC, which doesn't have a page life-cycle and is truly stateless). For instance, the useful jQuery idiom

    $(function() {
        // some actions
    });

used to enhance your DOM or attach events to the DOM elements may not interact very well with the ASP.NET postback model if there are UpdatePanels in the page. For now, I circumvent it with the following code snippet:

    if (Sys.WebForms.PageRequestManager) {
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function() {
            $('#updateListView1').trigger("gridLoaded");
        });
    }

where gridLoaded is the replacement for $(document).ready. I think you have to take extra care and know the ASP.NET page/controls life-cycle very well in order to mix both technologies.

A: There's a small issue which is mentioned by David Ward here: http://encosia.com/2008/09/28/avoid-this-tricky-conflict-between-aspnet-ajax-and-jquery/ But there should not be any major concerns about integrating jQuery into an existing application; you wouldn't notice major advantages unless you're planning a lot of updating/reworking of existing code to take advantage of jQuery's power.
{ "language": "en", "url": "https://stackoverflow.com/questions/148202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: logging soap requests in flex 3 I'm trying to consume a SOAP web service from an Adobe Flex 3 application, but the server tells me "Invalid SOAP Envelope. SOAP Body does not contain a message nor a fault". I already wrote other test clients (with both Delphi and C#) and I'm sure it's all OK on the server side, so I need to examine the SOAP envelope Flex is sending out to the server. How can I do that? I think there should be some event to listen to (in the BaseSys class?) to get the envelope before it is sent.

A: Thanks for your replies, but the problem was the status code 500 (Flex can handle code 200 only).

A: The easiest way is to run a proxy. Paros is an easy one, written in Java, and therefore multi-platform by nature: http://www.parosproxy.org/index.shtml Also, if you do not use it already, you should install Firebug: https://addons.mozilla.org/fr/firefox/addon/1843 The network monitoring tab should fit your needs.

A: I have two suggestions for you:

* If you are using Flex Builder you can try to generate a client for your web service using the Import Web Service feature from the Data menu and either use it directly or just investigate the generated code for clues.
* Check out the documentation for web services from the Flex SDK, as it may be a problem with the supported SOAP versions. Check to see that both the Flex SDK and your server are using compatible versions.
{ "language": "en", "url": "https://stackoverflow.com/questions/148203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Equivalent of PHP's print_r in Java? Last time I asked how to populate a data structure here. Now I would like to know if there's something in Java, like the print_r I use in PHP, to represent what I have populated in the maps and lists without having to write my own algorithm. Any ideas?

A: Depending on exactly what you want to do, the solution could be fairly simple. The following won't produce the formatted output that print_r provides, but it will allow you to output the structure of lists, arrays, and maps:

    // Output a list
    final List<String> list = new ArrayList<String>();
    list.add("one");
    list.add("two");
    list.add("three");
    list.add("four");
    System.out.println(list);

    // Output an array
    final String[] array = {"four", "three", "two", "one"};
    System.out.println(Arrays.asList(array));

    // Output a map
    final Map<String, String> map = new HashMap<String, String>();
    map.put("one", "value");
    map.put("two", "value");
    map.put("three", "value");
    System.out.println(map.entrySet());

For other types of objects you could use reflection to create a utility for this purpose.

A: You might try the good old Apache Commons Lang ToStringBuilder.

A: Others have mentioned the toString() method. This is really only useful when the object implements toString(). If it has not been implemented properly you will end up with something like this: java.lang.Object@1b9240e. Even if it is a little tedious, it is still easy to implement toString() in your own classes, but third party classes will not always implement it. You are much better off using a debugger. The only reason things like print_r even exist in PHP is because of the lack of a real debugger. I think you will find that being able to set breakpoints instead of using a bunch of diagnostic print statements will result in a much faster workflow and cleaner code.

A: My take on print_r-like functionality for Java is the class RecursiveDump that I wrote. It's not perfect, but it is working well for me.
The usage is:

    String output = RecursiveDump.dump(...);

It could be improved using generics; I wrote it many years ago, before I knew about them. Also, while it tries to deal with some Collection types, with Objects it will just call the toString() method. I thought about using reflection to extract field names for classes without a proper toString(), but since Spring Roo's @RooToString can write the toString() method for you... Of course the alternative is to use the inspector tool of a debugger, but sometimes it is quicker to use prints.

A: Calling toString on the collection should return a string containing all the elements' string representations. This won't work with built-in arrays though, as they don't have a toString override and will just give you a memory address.

A: There is really a difference between toString() in Java and print_r() in PHP. Note that there is also __toString() in PHP, which is equivalent to toString() in Java, so this is not really the answer. print_r is used when we have a structure of objects and we would like to quickly see the complete graph of objects with their values. Implementing toString in Java for each object does not have the power to compare with print_r. Instead use Gson. It does the same job as print_r:

    Gson gson = new GsonBuilder().setPrettyPrinting().create();
    System.out.println(gson.toJson(someObject));

In this way you do not need to implement toString for every object you need to test. Here is the documentation: http://sites.google.com/site/gson/gson-user-guide

Demo classes (in Java):

    public class A {
        int firstParameter = 0;
        B secondObject = new B();
    }

    public class B {
        String myName = "this is my name";
    }

Here is the output in PHP with print_r:

    Object
    (
        [firstParameter:private] => 0
        [secondObject:private] => B Object
            (
                [myName:private] => this is my name
            )
    )

Here is the output in Java with Gson:

    {
      "firstParameter": 0,
      "secondObject": {
        "myName": "this is my name"
      }
    }

A: I don't know of an equivalent of print_r in Java. But...
Every object has a default implementation of the toString() method, and the list and map implementations print their contents if you call toString() on them. If you need any debug information printed out, toString() may be the place you are looking for.

A: There is no equivalent for print_r in Java. But for maps or lists you can use the foreach loop like this:

    List<String> list = …; // fills your list

    // print each list element
    for (String s : list) {
        System.out.println(s);
    }

A: For arrays the easiest way is Arrays.asList(array).toString(). I generally implement toString() on objects I like to have in debug outputs. For generated objects (JAXB etc.) though, you might need to make a utility class to print them.
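The reflection idea mentioned in the first answer can be sketched in a few lines. This is an illustrative stand-in for a print_r-style dump (Dump and DemoPerson are made-up names for the example, not the RecursiveDump class referred to above), and it deliberately does not recurse into nested objects or handle collections specially:

```java
import java.lang.reflect.Field;

// Minimal print_r-style dump: prints each declared field name and value
// of an object, including private fields, via reflection.
class Dump {
    static String dump(Object obj) {
        StringBuilder sb = new StringBuilder();
        sb.append(obj.getClass().getSimpleName()).append(" (\n");
        for (Field f : obj.getClass().getDeclaredFields()) {
            f.setAccessible(true); // reach private fields too
            Object value;
            try {
                value = f.get(obj);
            } catch (IllegalAccessException e) {
                value = "<inaccessible>";
            }
            sb.append("    [").append(f.getName()).append("] => ")
              .append(value).append("\n");
        }
        return sb.append(")").toString();
    }
}

// A small class to dump, analogous to the demo classes shown earlier.
class DemoPerson {
    private final String name = "Alice";
    private final int age = 30;
}
```

Calling Dump.dump(new DemoPerson()) produces a print_r-style block containing [name] => Alice and [age] => 30 lines.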
{ "language": "en", "url": "https://stackoverflow.com/questions/148204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: ObjectDataSource Update method with dynamic parameters I have this DataTable that has a varying set of columns, except for a sequence number:

    | Sequence | Value | Tax | Duty | Total |

Any number of columns should be accepted, with unique column names. To display that table, I need to use an ObjectDataSource mapped to a presenter class with a Select method:

    class Presenter
    {
        [DataObjectMethod(DataObjectMethodType.Select)]
        public DataView GetDutyAndTax() { ... }
    }

The ObjectDataSource is then bound to a GridView with AutoGenerateColumns set to true. Sequence is the data key. So far, that works for selecting the table. The problem comes when I need to update the table. The ObjectDataSource keeps nagging me to have an update method with the exact same parameters as the columns in the table:

    public void EditDutyAndTax(string Value, string Tax, string Duty, string original_Sequence) { ... }

But I cannot create a method like that because I don't know the set of columns needed. I tried using a method with a variable parameter list, but it doesn't want to use it:

    public void EditDutyAndTax(params object[] values) { ... }

One idea I have now is to create a set of update methods like this in Presenter:

    public void EditDutyAndTax(string value1, string original_Sequence) { ... }
    public void EditDutyAndTax(string value1, string value2, string original_Sequence) { ... }
    public void EditDutyAndTax(string value1, string value2, string value3, string original_Sequence) { ... }
    //and so on...

But I don't think that's going to get through code review, and I don't like the idea either. The other idea I have is to create a dynamic method and attach that (if possible) to the Presenter class, or wherever, at runtime, but I'm not really sure if that would work. So if you guys have any solution, please help. Thanks so much!

Carlos

A: It sounds to me like you're going to have to scrap the ObjectDataSource declarative model and go to the "old-school" approach of setting the datasource and binding the grid manually in postback (or load, as the case may be), and then handling edit/update manually as well. The DataSource objects are very particular about how you use them - and don't work well, if at all, if you try to go outside the lines.
{ "language": "en", "url": "https://stackoverflow.com/questions/148205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: coordination between Design and Development Background: I am developing a site in ASP.NET 2.0. Up until now I was handling both the design and development of the site. I used CSS for the design part. Now the company wants to outsource the design work to a web designer. Question: How exactly are a designer and developer supposed to coordinate? What specifications should I give to the web designer? Do I need to provide him with the aspx pages I have developed? Does he also need to look at the code-behind? And how do I incorporate the designed pages into the existing aspx pages? I would appreciate it if someone who is experienced in this provides some insight. Thanks. A: Many people may not like my answer, but in my experience it works best if the designer gives you the stylesheet together with a template for the HTML/XML page. Then you incorporate your ASP.NET into that template. You can see this as one of the few cases where function follows form ;-) A: How will the designer deliver their designs? Will they just provide the necessary graphic elements or will they deliver a valid HTML page? I would suggest you both agree on the DOM for the page and work from there. If they just provide graphics you have full control of the DOM. The code-behind model for .NET was supposed to help separate design and development but IMHO nothing has yet managed a perfect separation because you can't always make a 100% distinction. A: Make sure the designer doesn't put in anything that he thinks is fancy and well animated. Many a time, it becomes a big pain to simulate such effects in ASP.NET unless you are a pro with CSS or JavaScript. Coordination between designer and developer is a must, in my opinion, so I would prefer to sit close to the designer till the base design of the page is finalized.
{ "language": "en", "url": "https://stackoverflow.com/questions/148208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Run macro in Visual Studio when solution is closed Is it possible to run a macro in Visual Studio 2008 the moment a solution is closed? I would like to delete the .suo files of that solution, so that the current user settings are not presented to me the next time I open the solution. A: (C#) Use the _applicationObject provided in the Connection class in a new Addin project. In the OnConnection event, type the code to add new event handlers, as below _applicationObject.Events.SolutionEvents.AfterClosing += _applicationObject.Events.SolutionEvents.BeforeClosing += Let the IDE complete the lines and create the methods for you. In the beforeClosing handler, catch the name of the solution file, and store it in a class member. (e.g. this._storeSolutionFile = _applicationObject.Solution.FileName;) Then in the afterClosing handler, simply pick up the solution file name again (from this._storeSolutionFile), change the extension to .suo, and delete it. Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/148218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to extract four unsigned short ints from one long long int? Suppose I have one long long int and want to take its bits and construct four unsigned short ints out of it. Particular order doesn't matter much here. I generally know that I need to shift bits and truncate to the size of unsigned short int. But I think I may make some weird mistake somewhere, so I ask. A: (unsigned short)((((unsigned long long int)value)>>(x))&(0xFFFF)) where value is your long long int, and x is 0, 16, 32 or 48 for the four shorts. A: union LongLongIntToFourUnsignedShorts { long long int long_long_int; unsigned short int short_ints[sizeof(long long int) / sizeof(short int)]; }; That should do what you are thinking about, without having to mess around with bit shifting. A: #include <stdint.h> #include <stdio.h> union ui64 { uint64_t one; uint16_t four[4]; }; int main() { union ui64 number = {0x123456789abcdef0}; printf("%x %x %x %x\n", number.four[0], number.four[1], number.four[2], number.four[3]); return 0; }
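The shift-and-mask approach from the first answer is language-agnostic; here is a quick sketch of the same logic in Python (the function name is made up for illustration), which is handy for sanity-checking the expected chunk values before writing the C version:

```python
def split_u64(value):
    """Split a 64-bit unsigned value into four 16-bit chunks, lowest bits first."""
    return [(value >> shift) & 0xFFFF for shift in (0, 16, 32, 48)]

packed = 0x123456789ABCDEF0
parts = split_u64(packed)
# lowest 16 bits come out first: 0xDEF0, 0x9ABC, 0x5678, 0x1234
print([hex(p) for p in parts])
```

One caveat about the union-based answers: they read the chunks in memory order, so the ordering of the four shorts depends on the machine's endianness, whereas explicit shifting always yields the same order.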
{ "language": "en", "url": "https://stackoverflow.com/questions/148225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: LOAD SQL Table from flat file I am trying to load a SQL table from a flat file. The flat file I am talking about is a comma-separated file. This has all the data required to populate a table, with each column separated by a comma ",". I need some way by which I can load this content into the table faster. A: If you are using SQL Server, use BULK INSERT If you are using Oracle, see my answer here A: Regardless of what database management system you are using, you could use a scripting language (such as Perl or PHP) to set up a connection to your database, parse the file, and then insert the data into your database. Of course, you would have to know a scripting language... A: use the mysql client? mysql -u username -p database_name < sql_file.sql A: take a look at these speed comparisons and decide what suits you best: http://weblogs.sqlteam.com/mladenp/archive/2006/07/22/10742.aspx A: For SQL Server 2005, another option would be Integration Services (SSIS); Using SSIS you would be able to do a lot more work on the data during the import process (for example, looking up values in other tables, filtering out rows, importing multiple tables, etc). A: This sounds a little bit old-fashioned, but I use an editor which has the capability to record and replay macros for such work. I use Textpad (www.textpad.com) for this (yes, I bought a license); you might also use UltraEdit (www.ultraedit.com) or something similar. It's as simple as starting the macro recorder, editing the first line so that it is SQL compatible, going to the next line and stopping the recorder. Then you let the editor repeat your macro to the end of the file. The main advantage is: after you have processed the file you can store it and get it into your version control. If done properly, it works for every database (or tool) that can execute files containing SQL commands.
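To illustrate the scripting-language route suggested in one of the answers, here is a minimal sketch using Python's standard csv and sqlite3 modules (the table and column names are invented for the example; for production loads on SQL Server or Oracle you would still prefer the native bulk facilities such as BULK INSERT or SQL*Loader):

```python
import csv
import io
import sqlite3

# Sample CSV content; in practice this would come from open("data.csv", newline="")
csv_text = "id,name,price\n1,apple,0.50\n2,banana,0.25\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT, price REAL)")

reader = csv.reader(io.StringIO(csv_text))
next(reader)  # skip the header row
# executemany consumes the remaining rows and batches the inserts
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", reader)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM products").fetchone()[0])
```

Wrapping the whole load in a single transaction (one commit at the end, as above) is usually the biggest speed win with this approach, since committing per row is what makes naive loaders slow.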
{ "language": "en", "url": "https://stackoverflow.com/questions/148239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there any way to check which kind of RAM my computer uses without opening it up? I would like to check which type of RAM my computer uses before I order an upgrade. I'm fairly sure it's DDR2 but I would like to double-check this. Is there any way to check this in Windows XP without opening the case up and looking? EDIT The content police seem to have gotten the wrong end of the stick; I was looking for a piece of software or a command that would allow me to check this. I feel that this makes this question perfectly valid for StackOverflow and of interest to other programmers. A: CPU-Z can tell you. On the SPD tab you can view the DIMM-specific information A: If it's a standard, vanilla box, then head over to Crucial and use their memory selector tool. A: Find out the motherboard/chipset from the device manager, google it, know what it takes. As Greg Hewgill says, you'd need to script that somehow (to make this a valid question on SO) - but you'd have to do that part yourself =) A: CPU-Z can tell you the type as well as the current clock speed and memory timings. http://www.cpuid.com/cpuz.php A: Let me know if it helps: Task Manager > Performance A: In case CPU-Z is not showing RAM details, use HWiNFO; it will provide deep details about your hardware https://www.hwinfo.com/download/ A: try this: wmic memorychip list full In the output you will get MemoryType; if it is 24 then you are using DDR3, and if it is 0 then there is a chance that you are using DDR4 Ansh Sharma
{ "language": "en", "url": "https://stackoverflow.com/questions/148249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: CSS centering tricks My favorite equation for centering an xhtml element using only CSS is as follows: display: block; position: absolute; width: _insert width here_; left: 50%; margin-left: _insert width divided by two & multiplied by negative one here_ There's also the simpler margin:auto method in browsers that support it. Does anyone else have tricky ways to force content to display centered in its container? (bonus points for vertical centering) edit - oops, forgot the 'negative' part of one in the margin-left. fixed. A: Stick with Margin: 0 auto; for horizontal alignment; If you need vertical alignment as well, use position: absolute; top: 50%; margin-top: -(height/2)px; Be aware though, if your container has more width than your screen a part of it will fall off screen on the left side using the Position: absolute method. A: Well that seems like massive overkill, I've got to say. I tend to set the container to text-align:center; for old browsers, margin:auto; for modern browsers, and leave it like that. Then reset text-align in the element (if it contains text). Of course, some things need setting as block, and widths need setting... But what on earth are you trying to style that needs that much hacking around? <div style="text-align:center"> <div style="width:30px; margin:auto; text-align:left"> <!-- this div is sitting in the middle of the other --> </div> </div> A: Margin:auto works in all browsers as long as you make sure IE is in standards mode. It's more picky than others and requires your doctype to be the very first in your document, which means no whitespace (space, tabs or linefeeds) before it. If you do that, margin:auto is the way to go! :) A: div #centered{ margin: 0 auto; } seems to be the most reliable from my experience. A: just a note that the margin:auto; method only works if the browser can calculate the width of the item to be centered and the width of the parent container. 
In many cases setting width:auto; works, but in some it does not. A: The absolute positioning with 50% approach has the severe side effect that if the browser window is narrower than the element then some of the content will appear off the left side of the browser - with no way to scroll to it. Stick to auto margins - they are far more reliable. If you are working in Standards mode (which you should be) then they are supported in all the browsers you are likely to care about. You can use the text-align hack if you really need to support Internet Explorer 5.5 and earlier. A: This is a handy bookmark for CSS tricks http://css-discuss.incutio.com/ Contains lots of centering tricks too. A: Try this; don't know if it works in IE, works fine in Fx though. It centers a DIV block on the page using CSS only (no JavaScript), no margin-auto, and the text within the DIV block is still left aligned. I'm just trying to find out if vertical centering could work that way, too, but so far without success. <html> <head> <title>Center Example</title> <style> .center { clear:both; width:100%; overflow:hidden; position:relative; } .center .helper { float:left; position:relative; left:50%; } .center .helper .content { float:left; position:relative; right:50%; border:thin solid red; } </style> </head> <body> <div class="center"> <div class="helper"> <div class="content">Centered on the page<br>and left aligned!</div> </div> </div> </body> </html> A: body { text-align: center; } #container { width: 770px; margin: 0 auto; text-align: left; } This works nicely in all the usual browsers. As already mentioned, margin: 0 auto; won't work in all semi-current versions of IE.
{ "language": "en", "url": "https://stackoverflow.com/questions/148251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Oracle database startup problems In Oracle, I have set the log_archive_dest_1='D:\app\administrator\orcl\archive' parameter and shut down the database. When I tried to start up the db, I got the following error: SQL> startup mount; ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated ORA-09291: sksachk: invalid device specified for archive destination OSD-04018: Unable to access the specified directory or device. O/S-Error: (OS 3) The system cannot find the path specified. Does anyone have any ideas of how I might fix this? A: You probably need a trailing \ on the dir name, i.e. D:\app\administrator\orcl\archive\ A: I've never used Oracle but some things you might try are * *Make sure the permissions on the file path you're using allow the database to read/write to it *Make sure all the folders in the path already exist *On Windows you might find the '\' characters confuse the database. Do you specify other paths in the same way for Oracle? An alternative may be to use '/' instead of '\'. Different programs that originated in the Unix world handle Windows paths in different ways
{ "language": "en", "url": "https://stackoverflow.com/questions/148262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I draw transparent DirectX content in a transparent window? I want to draw DirectX content so that it appears to be floating over top of the desktop and any other applications that are running. I also need to be able to make the DirectX content semi-transparent, so other things show through. Is there a way of doing this? I am using Managed DX with C#. A: I found a solution which works on Vista, starting from the link provided by OregonGhost. This is the basic process, in C# syntax. This code is in a class inheriting from Form. It doesn't seem to work if in a UserControl: //this will allow you to import the necessary functions from the .dll using System.Runtime.InteropServices; //this imports the function used to extend the transparent window border. [DllImport("dwmapi.dll")] static extern void DwmExtendFrameIntoClientArea(IntPtr hWnd, ref Margins pMargins); //this is used to specify the boundaries of the transparent area internal struct Margins { public int Left, Right, Top, Bottom; } private Margins marg; //Do this every time the form is resized. It causes the window to be made transparent. marg.Left = 0; marg.Top = 0; marg.Right = this.Width; marg.Bottom = this.Height; DwmExtendFrameIntoClientArea(this.Handle, ref marg); //This initializes the DirectX device. It needs to be done once. //The alpha channel in the backbuffer is critical. PresentParameters presentParameters = new PresentParameters(); presentParameters.Windowed = true; presentParameters.SwapEffect = SwapEffect.Discard; presentParameters.BackBufferFormat = Format.A8R8G8B8; Device device = new Device(0, DeviceType.Hardware, this.Handle, CreateFlags.HardwareVertexProcessing, presentParameters); //the OnPaint function makes the background transparent by drawing black on it. //For whatever reason this results in transparency. 
protected override void OnPaint(PaintEventArgs e) { Graphics g = e.Graphics; // black brush for Alpha transparency SolidBrush blackBrush = new SolidBrush(Color.Black); g.FillRectangle(blackBrush, 0, 0, Width, Height); blackBrush.Dispose(); //call your DirectX rendering function here } //this is the dx rendering function. The Argb clearing function is important, //as it makes the directx background transparent. protected void dxrendering() { device.Clear(ClearFlags.Target, Color.FromArgb(0, 0, 0, 0), 1.0f, 0); device.BeginScene(); //draw stuff here. device.EndScene(); device.Present(); } Lastly, a Form with default settings will have a glassy-looking partially transparent background. Set the FormBorderStyle to "none" and it will be 100% transparent with only your content floating above everything. A: You can either use DirectComposition, LayeredWindows, DesktopWindowManager or WPF. All methods come with their advantages and disadvantages: -DirectComposition is the most efficient one, but needs Windows 8 and is limited to 60Hz. -LayeredWindows are tricky to get working with D3D via Direct2D-interop using DXGI. -WPF is relatively easy to use via D3DImage, but is also limited to 60Hz and DX9 and no MSAA. Interops to higher DX-Versions via DXGI are possible, also MSAA can be used when the MSAA-Rendertarget is resolved to the native nonMSAA surface. -DesktopWindowManager is great for high performance available since Windows Vista, but DirectX-Versions seem to be limited by the Version the DWM uses (still DX9 on Vista). Workarounds for higher DX-Versions should be possible via DXGI where available. If you don't need per-pixel alpha, you can also use the opacity-value of a semi-transparent form. 
Or you use the native Win32 method for the Window global alpha (Remember an alpha of 0 will not catch the mouse input): SetWindowLong(hWnd, GWL_EXSTYLE, GetWindowLong(hWnd, GWL_EXSTYLE) | WS_EX_LAYERED); COLORREF color = 0; BYTE alpha = 128; SetLayeredWindowAttributes(hWnd, color, alpha, LWA_ALPHA); I have been able to use all of the described techniques with C# and SharpDX, but in the case of DirectComposition, LayeredWindows and native Win32 a little C++ wrapper code was needed. For starters I would suggest going via WPF. A: I guess that will be hard without using the Desktop Window Manager, i.e. if you want to support Windows XP. With the DWM, it seems to be rather easy though. If speed is not an issue, you may get away with rendering to a surface and then copying the rendered image to a layered window. Don't expect that to be fast though. A: WPF is also another option. Developed by Microsoft, the Windows Presentation Foundation (or WPF) is a computer-software graphical subsystem for rendering user interfaces in Windows-based applications.
{ "language": "en", "url": "https://stackoverflow.com/questions/148275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Word spell check runs slow on Word 2007/Vista I have written a DLL that uses MS Word to spell check the content of a RichTextBox. The project uses the Microsoft Word 11.0 Object Library. I have read that you can use that reference on machines using that version of Word or later, and that seems to be true. However ... When I run the DLL in a test app on a machine with Windows Vista and Word 2007, it runs very slowly. Does the Word Object Library for the 2007 version differ in any way that makes it really slow during automation? Or is it some kind of re-interpretation at runtime that makes it behave like this? Should I make different versions of the DLL, one for machines with Word 2003 and one for machines with Word 2007? That would really make the whole point of making a spell checking DLL for use in many different projects kind of pointless. A: You should approach this like any other engineering problem: 1. Profile the code to see if it's your fault or not 2a. If it's your fault, correct as needed 2b. If it's that particular .dll, define your spell checking object as an interface or an abstract class and at runtime, use a concrete instance of that interface that is most appropriate for the environment in which you're running.
{ "language": "en", "url": "https://stackoverflow.com/questions/148279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Eclipse C++ pretty printing? The output we get when printing C++ sources from Eclipse is rather ugly. Is there a way/a plugin to pretty print C++ source code like e.g. with a2ps (which is probably using yet another filter for C source code)? A: See this DDJ article which uses enscript as the pretty print engine. A: I also use enscript for this. Here's an alias I often use: alias cpp2ps='enscript --color --pretty-print=cpp --language=PostScript' and I use it like this: cpp2ps -p main.ps main.cpp There are several other great options in enscript including rotating, 2-column output, line numbers, headers/footers, etc. Check out the enscript man page. Also, on Macs, Xcode prints C++ code very nicely. A: I would like to expand on the Windows 7 response because some key steps are left out: This is for MinGW users with Eclipse CDT 0) If you don't have python GDB, open a shell/command and use MinGW-get.exe to 'install' Python-enabled GDB e.g. MinGW-get.exe install gdb-python 1a) Get Python 2.7.x from http://python.org/download/ and install 1b) Make sure PYTHONPATH and PYTHONHOME are set in your environment: PYTHONPATH should be C:\Python27\Lib (or similar) PYTHONHOME should be C:\Python27 1c) Add PYTHONHOME to your PATH %PYTHONHOME%;... 2a) Open a text editor, enter the following statements. Notice the 3rd line is pointing to where the python scripts are located. See notes below about this! python import sys sys.path.insert(0, 'C:/MinGW/share/gcc-4.6.1/python') from libstdcxx.v6.printers import register_libstdcxx_printers register_libstdcxx_printers (None) end 2b) Save as '.gdbinit' NOTE: Windows Explorer will not let you name a file that starts with a period. Most text editors (including Notepad) will let you. GDB init files are like 'scripts' of GDB commands that GDB will execute upon loading. 2c) The '.gdbinit' file needs to be in the working directory of GDB (most likely this is your project's root directory, but your IDE can tell you). 
3) Open your Eclipse (or other IDE) Preferences dialog. Go to the C++ Debugger sub-menu. 4) Configure Eclipse to use C:\MinGW\bin\gdb-python27.exe as the debugger and your .gdbinit as the config file. 5a) Re-create all your debug launch configurations (delete the old one and create a new one from scratch). --OR-- 5b) Edit each debug configuration and point it to the new gdb-python.exe AND point it to the new .gdbinit. If you run into issues: --Don't forget to change the location to the python directory in the above python code! This directory is created by MinGW, so don't go looking to download the pretty printers; MinGW did it for you in step zero. Just go to your MinGW install directory, the share folder, the GCC folder (it has a version number) and you will find the python folder. This location is what should be in the python script loaded by GDB. --Also, the .gdbinit is a PITA; make sure it's named correctly and in the working folder of GDB, which isn't necessarily where gdb-python.exe is located! Look at your GDB output when loading GDB to see whether 'python-enabled' appears during load and whether the statements in the .gdbinit are appearing. --Finally, I had a lot of issues with the system variables. If python gives you 'ImportError' then most likely you have not set PYTHONPATH or PYTHONHOME. --The directory with 'gdb-python27' (e.g. C:\MinGW\bin) should also be on your path, and if it is, it makes setting up Eclipse a bit nicer because you don't need to put in absolute paths. But still, sometimes the .gdbinit needs an absolute path. If it works you'll see output from gdb (console->gdb traces) like this on startup of the debugger: 835,059 4^done 835,059 (gdb) 835,059 5-enable-pretty-printing 835,069 5^done .... 835,129 12^done 835,129 (gdb) 835,129 13source C:\MinGW\bin\.gdbinit 835,139 &"source C:\\MinGW\\bin\\.gdbinit\n" 835,142 13^done 835,142 (gdb)
{ "language": "en", "url": "https://stackoverflow.com/questions/148281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you implement audit trail for your objects (Programming)? I need to implement an audit trail for Add/Edit/Delete on my objects. I'm using an ORM (XPO) for defining my objects etc. I implemented an audit trail object that is triggered on * *OnSaving *OnDeleting Of the base object, and I store the changes in an Audit-AuditTrail (Mast-Det) table for field changes, etc., using some service methods that get called. How do you implement audit trail in your OOP code? Please share your insights. Any patterns etc? Best practices etc? Another thing is how to disable auditing when running unit tests, since I don't need to audit them, but the base object contains the audit code. Changes to the object (edit/add/del) and what field changes need to be audited A: Database triggers are the preferred way to go here, if you can. However, recently I had to do this in client-side code and I ended up writing a class that created a deep (value) copy of the object when it was opened for editing, compared the two objects at save time (using ToString() only) and wrote any changes to an audit table. Edit: I had an [Audit] attribute on each property I wanted to consider auditable and used reflection to find them, making the method non-specific to the objects being audited. A: I don't know if it will fit seamlessly with your ORM, but I used Point-in-Time database design for an ERP application and really recommend it. You automatically get History and Audit from this architecture, as well as other benefits. A: I come more from the SW side than the DB side. If you create a set of DAOs (Data Access Objects) that you use for your interaction with the database, I would then insert the audit functionality into the respective functions in the DAOs that need to be trailed. The database trigger solution is also feasible; it depends where you like to put your functionality, in the DB or in the code. There are a lot of ORM (Object Relational Mapping) tools out there that create the DAO layer for you. 
A: We've implemented a similar solution, using AOP (aspectJ implementation). Using this, particular points can be captured and specific operations can be performed. This can be plugged in and out as we like. If you really want to do it in the app layer, I would suggest this. Hope it helps. A: I've done this in Hibernate (another ORM) using an Interceptor for the Session. That way the audit code is separate from your code. A: I know this doesn't answer your question, but for the record, I prefer to handle this type of auditing logic in the database. A: We have a table that all audit trail entries are stored in. A database trigger is on every table (it's put there by a stored procedure, but that's not relevant to this answer). When a value is changed, the old value is stored in the audit trail. Ours is a little complex in that we also have a look-up table that contains a list of every table we have, and another table that contains every field for each table. This allows us to look up an entry in the audit trail based on which table it's in, via that table's ID in the first column. Then we also know exactly what field we are looking for based on the 2nd table's ID. This keeps us from having to store strings for the table name and the field name. To display it, our grids have an "audit trail" button next to the delete button. This opens a popup grid with the history of that record. We use Kendo grids, but none of this implementation is necessary for that. The popup is a Bootstrap popup.
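The "deep copy on open, compare on save" approach described in the first answer can be sketched in a few lines. This is a hypothetical Python illustration (the class and field names are invented), not the original C# implementation with its [Audit] attribute and reflection:

```python
import copy

class AuditedRecord:
    """Toy record: snapshot its state on open, diff against it on save."""

    def __init__(self, **fields):
        self.__dict__.update(fields)
        self._snapshot = None

    def open_for_edit(self):
        # Deep copy of the current field values, taken when editing begins
        self._snapshot = copy.deepcopy(
            {k: v for k, v in self.__dict__.items() if k != "_snapshot"})

    def save(self):
        # Compare current values to the snapshot; emit one audit row per change.
        # Like the original answer, comparison is done on the string form.
        trail = []
        for field, old in self._snapshot.items():
            new = getattr(self, field)
            if str(old) != str(new):
                trail.append((field, old, new))
        self._snapshot = None
        return trail

rec = AuditedRecord(name="Widget", price=10)
rec.open_for_edit()
rec.price = 12
changes = rec.save()
print(changes)  # [('price', 10, 12)]
```

In a real system the returned tuples would be written to the audit table instead of returned, and a flag on the class could skip the snapshot/diff entirely when running unit tests, which addresses the question's second concern.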
{ "language": "en", "url": "https://stackoverflow.com/questions/148291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to print values at points with Google Chart API? I have gone through their developer guide but haven't been able to find a way to print the value at points (in the case of line charts) or after bars (bar charts). Is there a way to do it? A: You can achieve this with a label; it's in the documentation - it has an example with a bar chart. A: I think that's not possible at the moment. I looked at the Google Chart API myself a few days ago. Instead I found (and probably will use) the FusionCharts Free package. It has the features you need, plus those points can be interactive.
{ "language": "en", "url": "https://stackoverflow.com/questions/148295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to check for equals? (0 == i) or (i == 0) Okay, we know that the following two lines are equivalent - * *(0 == i) *(i == 0) Also, the first method was encouraged in the past because that would have allowed the compiler to give an error message if you accidentally used '=' instead of '=='. My question is - in today's generation of pretty slick IDEs and intelligent compilers, do you still recommend the first method? In particular, this question popped into my mind when I saw the following code - if(DialogResult.OK == MessageBox.Show("Message")) ... In my opinion, I would never recommend the above. Any second opinions? A: * *(0 == i) I will always pick this one. It is true that most compilers today do not allow the assignment of a variable in a conditional statement, but the truth is that some do. In programming for the web today, I have to use a myriad of languages on a system. By using 0 == i, I always know that the conditional statement will be correct, and I am not relying on the compiler/interpreter to catch my mistake for me. Now if I have to jump from C# to C++ or JavaScript, I know that I am not going to have to track down assignment errors in conditional statements in my code. For something this small and to have it save that amount of time, it's a no-brainer. A: I used to be convinced that the more readable option (i == 0) was the better way to go. Then we had a production bug slip through (not mine thankfully), where the problem was a ($var = SOME_CONSTANT) type bug. Clients started getting email that was meant for other clients. Sensitive data as well. You can argue that QA should have caught it, but they didn't; that's a different story. Since that day I've always pushed for the (0 == i) version. It basically removes the problem. It feels unnatural, so you pay attention, so you don't make the mistake. There's simply no way to get it wrong here. 
It's also a lot easier to catch that someone didn't reverse the if statement in a code review than it is that someone accidentally assigned a value in an if. If the format is part of the coding standards, people look for it. People don't typically debug code during code reviews, and the eye seems to scan over a (i = 0) vs an (i == 0). I'm also a much bigger fan of the Java "Constant String".equals(dynamicString); no null pointer exceptions is a good thing. A: I prefer the second one, (i == 0), because it feels much more natural when reading it. You ask people, "Are you 21 or older?", not, "Is 21 less than or equal to your age?" A: You know, I always use the if (i == 0) format of the conditional and my reason for doing this is that I write most of my code in C# (which would flag the other one anyway) and I do a test-first approach to my development and my tests would generally catch this mistake anyhow. I've worked in shops where they tried to enforce the 0==i format but I found it awkward to write, awkward to remember and it simply ended up being fodder for the code reviewers who were looking for low-hanging fruit. A: Actually, the DialogResult example is a place where I WOULD recommend that style. It places the important part of the if() toward the left, where it can be seen. If it is on the right and the MessageBox call has more parameters (which is likely), you might have to scroll right to see it. OTOH, I never saw much use in the "(0 == i)" style. If you can remember to put the constant first, you can remember to use two equals signs. A: I always try to use the first case (0==i), and this has saved my life a few times! A: I think it's just a matter of style. And it does help with accidentally using the assignment operator. I absolutely wouldn't ask the programmer to grow up though. A: I prefer (i == 0), but I still sort of make a "rule" for myself to do (0 == i), and then break it every time. "Eh?", you think. 
Well, if I'm making a conscious decision to put an lvalue on the left, then I'm paying enough attention to what I'm typing to notice if I type "=" for "==". I hope. In C/C++ I generally use -Wall for my own code, which generates a warning on gcc for most "=" for "==" errors anyway. I don't recall seeing that warning recently, perhaps because the longer I program the more reflexively paranoid I am about errors I've made before... if(DialogResult.OK == MessageBox.Show("Message")) seems misguided to me. The point of the trick is to avoid accidentally assigning to something. But who is to say whether DialogResult.OK is more, or less likely to evaluate to an assignable type than MessageBox.Show("Message")? In Java a method call can't possibly be assignable, whereas a field might not be final. So if you're worried about typing = for ==, it should actually be the other way around in Java for this example. In C++ either, neither or both could be assignable. (0==i) is only useful because you know for absolute certain that a numeric literal is never assignable, whereas i just might be. When both sides of your comparison are assignable you can't protect yourself from accidental assignment in this way, and that goes for when you don't know which is assignable without looking it up. There's no magic trick that says "if you put them the counter-intuitive way around, you'll be safe". Although I suppose it draws attention to the issue, in the same way as my "always break the rule" rule. A: I use (i == 0) for the simple reason that it reads better. It makes a very smooth flow in my head. When you read through the code back to yourself for debugging or other purposes, it simply flows like reading a book and just makes more sense. 
A: It doesn't matter in C# if you put the variable first or last, because assignments don't evaluate to a bool (or something castable to bool) so the compiler catches any errors like "if (i = 0) EntireCompanyData.Delete()". So, in the C# world at least, it's a matter of style rather than desperation. And putting the variable last is unnatural to English speakers. Therefore, for more readable code, variable first. A: My company has just dropped the requirement to do if (0 == i) from its coding standards. I can see how it makes a lot of sense but in practice it just seems backwards. It is a bit of a shame that by default a C compiler probably won't give you a warning about if (i = 0). A: Third option - disallow assignment inside conditionals entirely: In high reliability situations, you are not allowed (without good explanation in the comments preceding) to assign a variable in a conditional statement - it eliminates this question entirely because you either turn it off at the compiler or with LINT and only under very controlled situations are you allowed to use it. Keep in mind that generally the same code is generated whether the assignment occurs inside the conditional or outside - it's simply a shortcut to reduce the number of lines of code. There are always exceptions to the rule, but it never has to be in the conditional - you can always write your way out of that if you need to. So another option is merely to disallow such statements, and where needed use the comments to turn off the LINT checking for this common error. -Adam A: If you have a list of ifs that can't be represented well by a switch (because of a language limitation, maybe), then I'd rather see: if (InterestingValue1 == foo) { } else if (InterestingValue2 == foo) { } else if (InterestingValue3 == foo) { } because it allows you to quickly see which are the important values you need to check.
In particular, in Java I find it useful to do: if ("SomeValue".equals(someString)) { } because someString may be null, and in this way you'll never get a NullPointerException. The same applies if you are comparing constants that you know will never be null against objects that may be null. A: I'd say that (i == 0) would sound more natural if you attempted to phrase a line in plain (and ambiguous) English. It really depends on the coding style of the programmer or the standards they are required to adhere to though. A: Personally I don't like (1) and always do (2), however that reverses for readability when dealing with dialog boxes and other methods that can be extra long. It doesn't look bad as it is now, but if you expand the MessageBox to its full length, you have to scroll all the way right to figure out what kind of result you are returning. So while I agree with your assertions of the simplistic comparison of value types, I don't necessarily think it should be the rule for things like message boxes. A: Both are equal, though I would prefer the 0==i variant slightly. When comparing strings, it is safer to compare "MyString".equals(getDynamicString()), since getDynamicString() might return null. To be more consistent, write 0==i. A: Well, it depends on the language and the compiler in question. Context is everything. In Java and C#, the "assignment instead of comparison" typo ends up with invalid code apart from the very rare situation where you're comparing two Boolean values. I can understand why one might want to use the "safe" form in C/C++ - but frankly, most C/C++ compilers will warn you if you make the typo anyway. If you're using a compiler which doesn't, you should ask yourself why :) The second form (variable then constant) is more readable in my view - so anywhere that it's definitely not going to cause a problem, I use it. A: Rule 0 for all coding standards should be "write code that can be read easily by another human."
For that reason I go with (most-rapidly-changing value) test-against (less-rapidly-changing-value, or constant), i.e. "i == 0" in this case. Even where this technique is useful, the rule should be "avoid putting an lvalue on the left of the comparison", rather than the "always put any constant on the left", which is how it's usually interpreted - for example, there is nothing to be gained from writing if (DateClass.SATURDAY == dateObject.getDayOfWeek()) if getDayOfWeek() is returning a constant (and therefore not an lvalue) anyway! I'm lucky (in this respect, at least) in that these days I'm mostly coding in Java and, as has been mentioned, if (someInt = 0) won't compile. The caveat about comparing two booleans is a bit of a red herring, as most of the time you're either comparing two boolean variables (in which case swapping them round doesn't help) or testing whether a flag is set, and woe betide you if I catch you comparing anything explicitly with true or false in your conditionals! Grrrr! A: In C, yes, but you should already have turned on all warnings and be compiling warning-free, and many C compilers will help you avoid the problem. I rarely see much benefit from a readability POV. A: Code readability is one of the most important things for code larger than a few hundred lines, and definitely i == 0 reads much more easily than the reverse. A: Maybe not an answer to your question. I try to use === (checking for identical) instead of equality. This way no type conversion is done and it forces the programmer to make sure the right type is passed. A: You are right that placing the important component first helps readability, as readers tend to browse the left column primarily, and putting important information there helps ensure it will be noticed. However, never talk down to a co-worker, and implying that would be your action even in jest will not get you high marks here. A: I always go with the second method.
In C#, writing if (i = 0) { } results in a compiler error (cannot convert int to bool) anyway, so that you could make a mistake is not actually an issue. If you test a bool, the compiler still issues a warning, and you shouldn't compare a bool to true or false. Now you know why. A: I personally prefer the use of variable-operand-value format in part because I have been using it so long that it feels "natural" and in part because it seems to be the predominant convention. There are some languages that make use of assignment statements such as the following: :1 -> x So in the context of those languages it can become quite confusing to see the following even if it is valid: :if(1=x) So that is something to consider as well. I do agree with the message box response being one scenario where using a value-operand-variable format works better from a readability standpoint, but if you are looking for consistency then you should forgo its use. A: This is one of my biggest pet peeves. There is no reason to decrease code readability (if (0 == i), what? how can the value of 0 change?) to catch something that any C compiler written in the last twenty years can catch automatically. Yes, I know, most C and C++ compilers don't turn this on by default. Look up the proper switch to turn it on. There is no excuse for not knowing your tools. It really gets on my nerves when I see it creeping into other languages (C#, Python) which would normally flag it anyway! A: I believe the only factor to ever force one over the other is if the tool chain does not provide warnings to catch assignments in expressions. My preference as a developer is irrelevant. An expression is better served by presenting business logic clearly. If (0 == i) is more suitable than (i == 0) I will choose it. If not I will choose the other. Many constants in expressions are represented by symbolic names. Some style guides also limit the parts of speech that can be used for identifiers.
I use these as a guide to help shape how the expression reads. If the resulting expression reads loosely like pseudocode then I'm usually satisfied. I just let the expression express itself, and if I'm wrong it'll usually get caught in a peer review. A: if(DialogResult.OK == MessageBox.Show("Message")) ... I would always recommend writing the comparison this way. If the result of MessageBox.Show("Message") can possibly be null, then you risk an NPE/NRE if the comparison is the other way around. Mathematical and logical operations aren't reflexive in a world that includes NULLs. A: We might go on and on about how good our IDEs have gotten, but I'm still shocked by the number of people who turn the warning levels on their IDE down. Hence, for me, it's always better to ask people to use (0 == i), as you never know which programmer is doing what. It's better to be "safe than sorry".
{ "language": "en", "url": "https://stackoverflow.com/questions/148298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Best Way to Manage Configuration Data I'm working on a SaaS application where each customer will have different configurations depending on the edition they have purchased, additional features they have purchased, etc. For example, a customer might have a limit of 3 custom reports. Obviously I want to store this configuration in the database, but I am unsure of the best approach. We want to be able to add additional features in the future without requiring a change to the database schema, so a single table with a column per configuration option isn't sensible. Possible options are a table with one entry per customer, with an XML field containing the entire configuration for that customer, but that adds complexity when the XML schema changes to add additional features. We could use a table with key value pairs, and store all configuration settings as strings and then parse to the correct data type, but that seems a bit of a kludge, as does having a separate table for string config options, integer config options, etc. Is there a good pattern for this type of scenario which people are using? A: Actually, I don't see the need for different configurations here. What you need are authorization levels and a proper user interface that doesn't show the functions the user hasn't paid for. A good authorization data model for such an application would be Role Based Access Control (RBAC). Google is your friend. A: If your database is SQL Server 2005+, your key/value table can use the SQL_VARIANT data type for the value field - with a third column to store the data type you need to cast it to for use. That way you can literally insert numbers & text values of varying sizes into the same field. A: I think this would depend on how your product was sold to the customer. If you only sell it in packages... PACKAGE 1 -> 3 reports, date entry, some other stuff.
PACKAGE 2 -> 6 reports, more stuff
PACKAGE 3 -> 12 reports, almost all the stuff
UBER PACKAGE -> everything
I would think it would be easier to set up a table of those packages and link to that. If you sell each module by itself with variations... Customer wants 4 reports a week with an additional report every other Tuesday if it's a full moon. Then I would: create a table with all the product features, create a link table for customers and the features they want, and in that link table add an additional field for modification if needed.
CUSTOMERS
  customer_id (pk)
MODULES
  module_id (pk)
  module_name (reports!)
CUSTOMER_MODULES
  module_id (pk) (fk -> modules)
  customer_id (pk) (fk -> customers)
  customization (configuration file or somesuch?)
This makes the most sense to me. A: Why are you so afraid of schema change? When you change your application, you will doubtless require additional configuration data. This will entail other schema changes, so why be afraid? Schema change is something that you should be able to tolerate, incorporate into your development, testing and release process, and make use of in design changes in the future. Schema changes happen; get used to it :) A: The key value pair table, but with everything stored as a string and with another column (if necessary) saying which type the value should be cast to.
CREATE TABLE configKVP(clientId int, key varchar, value varchar, type varchar)
If the value cannot be cast to the type, then you know it's a misconfiguration and there's no ambiguity.
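As a sketch of the "store strings plus a type column and parse" approach discussed in this thread (the function name and the example key are hypothetical, not taken from any answer), the application-side cast could look like this in C++:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical application-side counterpart to the configKVP table
// above: each value arrives as a string plus a type tag, and the
// application casts it back. A failed cast signals a misconfiguration
// instead of a silently misread setting.
int parseIntSetting(const std::string& value) {
    std::size_t consumed = 0;
    int result = std::stoi(value, &consumed);  // throws on non-numeric input
    if (consumed != value.size())
        throw std::invalid_argument("misconfigured int value: " + value);
    return result;
}
```

A row like ("MaxCustomReports", "3", "int") parses cleanly, while a value of "three" throws rather than being quietly treated as 0.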
{ "language": "en", "url": "https://stackoverflow.com/questions/148305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Portal Style & Theme I have integrated SRM 5.0 into Portal. Most of the iViews are IAC, i.e. all are ITS-based services. The issue is that the Portal Theme does not get reflected on these services after integration. When a BSP or Webdynpro is integrated then the application reflects the Portal Theme when executed from Portal, but the ITS services are not getting this. I tried using SE80 and editing EBPApplication.css. In BBPGLOBAL I changed all colour attributes to a custom colour, but it had no effect. Which property should I change to remove the blue colour? A: You can use Firefox and Firebug to determine the location of CSS values and test them online. A: There is an ITS Theme generator for your corresponding Portal theme.
{ "language": "en", "url": "https://stackoverflow.com/questions/148314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to upload files from ASP.NET to another web application I have a scenario where I need to upload a file from one web application and use it in another one. My setup is the following. * *One server, hosting two web applications in IIS - both are ASP.NET *One of the applications is used to administer the other one + a bunch more stuff *I need to upload a file from this admin app, save the path in DB through the DAL and then access the file from the other web app, which would provide the file for download *I keep files on disk, only the path in DB So where and how can I upload the file so that it can be accessed from both web applications? Should I use a service or is there some other way? Here are some related questions I found, but I don't think they cover my particular scenario: How to handle file uploads to a dedicated image server? How to upload a file to a WCF Service? A: Since both applications are on the same server this should be straightforward: * *Save the uploaded file somewhere on the server. *Create a virtual directory in any application needing to expose the files pointing to the physical path. *Save the virtual path in the db for flexibility. A: You could set up a new virtual directory in each application that points to the same folder on your server where you would upload the files to. Let's say you created a new folder on your c: drive called "uploads" i.e. c:\uploads. Then in IIS set up a new virtual directory called "uploads" that points to c:\uploads for each web application. That should give both sites access to the files. A: Can I ask why you are not keeping the file in the DB? This would make passing it around much easier. A: Assuming the file path you put in the DB is accessible from the non-admin web app (which it sounds like it is), the file just needs to go somewhere that both applications have access rights to. Only the admin app would need to have write access.
You can configure which user account an IIS web site runs under via Website properties > Directory Security in the IIS management console. Then just make sure to set appropriate directory permissions.
{ "language": "en", "url": "https://stackoverflow.com/questions/148322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: how to register url handler for apache commons httpclient I want to be able to access custom URLs with apache httpclient. Something like this: HttpClient client = new HttpClient(); HttpMethod method = new GetMethod("media:///squishy.jpg"); int statusCode = client.executeMethod(method); Can I somehow register a custom URL handler? Or should I just register one with Java, using URL.setURLStreamHandlerFactory(...) Regards. A: We do it like this: org.apache.commons.httpclient.protocol.Protocol.registerProtocol("ss-https", new Protocol("ss-https", (ProtocolSocketFactory)new EasySSLProtocolSocketFactory(), 443)); A: I don't think there's a way to do this in commons httpclient. It doesn't make a whole lot of sense either; after all, it is an HTTP client and "media:///squishy.jpg" is not HTTP, so all the code to implement the HTTP protocol probably couldn't be used anyway. URL.setURLStreamHandlerFactory(...) could be the way to go, but you'll probably have to do a lot of protocol coding by hand, depending on your "media"-protocol.
{ "language": "en", "url": "https://stackoverflow.com/questions/148350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is it possible to create and then upload an Image file to a web server from Silverlight? I have just started using Silverlight 2 beta and cannot find out how, or whether it is possible, to render a canvas to a bitmap image and then upload it to my web server. Is this possible, and if so how would I complete this task? Update: This is now possible under Silverlight 3, using a writable bitmap to save the XAML as a JPEG; see the blog post here: http://blog.blueboxes.co.uk/2009/07/21/rendering-xaml-to-a-jpeg-using-silverlight-3/ A: You can't render a canvas to a bitmap in Silverlight 2, but if you could generate a XAML version of your Canvas, you could pass it to the server and do something like this server side: http://www.thedatafarm.com/blog/2008/01/31/ConvertingSilverlightInkPresenterImagesToAPNGFile.aspx A: The only option you have now (if you want it done in the Silverlight CLR on the client side) is to start with fjcore http://code.google.com/p/fjcore/ It's only a starting point; you will have to write a lot of code -- it mainly will give you an Image representation and a JPEG Encoder. You can't get the pixels of the canvas, so if you need that, then I think you are out of luck. But fjcore would give you an Image object that you could write drawing routines for and then you would have to draw on that instead (not sure what you are trying to do, but if it's simple, it might be ok).
{ "language": "en", "url": "https://stackoverflow.com/questions/148354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Where do you check if a hotfix from Microsoft has been applied in any service packs? I have an interest in a reported bug for which Microsoft has made a hotfix available. When looking on the site, I'm not able to figure out whether this fix is included in a service pack or not. Does anyone know where I can find this out? A: You didn't mention your product but: List of fixes in Service Pack 1 for XP: http://support.microsoft.com/kb/324720 List of fixes in Service Pack 2 for XP: http://support.microsoft.com/kb/811113 List of fixes in Service Pack 3 for XP: http://support.microsoft.com/kb/946480 List of fixes in Service Pack 1 for Vista: http://technet.microsoft.com/en-us/library/cc749061.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/148358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I give keyboard focus to a DIV and attach keyboard event handlers to it? I am building an application where I want to be able to click a rectangle represented by a DIV, and then use the keyboard to move that DIV by listing for keyboard events. Rather than using an event listener for those keyboard events at the document level, can I listen for keyboard events at the DIV level, perhaps by giving it keyboard focus? Here's a simplified sample to illustrate the problem: <html> <head> </head> <body> <div id="outer" style="background-color:#eeeeee;padding:10px"> outer <div id="inner" style="background-color:#bbbbbb;width:50%;margin:10px;padding:10px;"> want to be able to focus this element and pick up keypresses </div> </div> <script language="Javascript"> function onClick() { document.getElementById('inner').innerHTML="clicked"; document.getElementById('inner').focus(); } //this handler is never called function onKeypressDiv() { document.getElementById('inner').innerHTML="keypress on div"; } function onKeypressDoc() { document.getElementById('inner').innerHTML="keypress on doc"; } //install event handlers document.getElementById('inner').addEventListener("click", onClick, false); document.getElementById('inner').addEventListener("keypress", onKeypressDiv, false); document.addEventListener("keypress", onKeypressDoc, false); </script> </body> </html> On clicking the inner DIV I try to give it focus, but subsequent keyboard events are always picked up at the document level, not my DIV level event listener. Do I simply need to implement an application-specific notion of keyboard focus? I should add I only need this to work in Firefox. A: Paul's answer works fine, but you could also use contentEditable, like this... document.getElementById('inner').contentEditable=true; document.getElementById('inner').focus(); Might be preferable in some cases. 
A: Sorted - I added a tabindex attribute to the target DIV, which causes it to pick up keyboard events, for example <div id="inner" tabindex="0"> this div can now have focus and receive keyboard events </div> Information gleaned from http://www.w3.org/WAI/GL/WCAG20/WD-WCAG20-TECHS/SCR29.html
{ "language": "en", "url": "https://stackoverflow.com/questions/148361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Restrict Template Function I wrote a sample program at http://codepad.org/ko8vVCDF that uses a template function. How do I restrict the template function to only use numbers? (int, double etc.) #include <vector> #include <iostream> using namespace std; template <typename T> T sum(vector<T>& a) { T result = 0; int size = a.size(); for(int i = 0; i < size; i++) { result += a[i]; } return result; } int main() { vector<int> int_values; int_values.push_back(2); int_values.push_back(3); cout << "Integer: " << sum(int_values) << endl; vector<double> double_values; double_values.push_back(1.5); double_values.push_back(2.1); cout << "Double: " << sum(double_values); return 0; } A: That is how you do it. Comment out the template specialization for double, for example, and it will not allow you to call that function with double as a parameter. The trick is that if you try to call sum with a type that is not among the specializations of IsNumber, then the generic implementation is called, and that implementation does something not allowed (calls a private constructor). The error message is NOT intuitive unless you rename the IsNumber class to something that sounds like an error message.
#include <vector> #include <iostream> using namespace std; template<class T> struct IsNumber{ private: IsNumber(){} }; template<> struct IsNumber<float>{ IsNumber(){}; }; template<> struct IsNumber<double>{ IsNumber(){}; }; template<> struct IsNumber<int>{ IsNumber(){}; }; template <typename T> T sum(vector<T>& a) { IsNumber<T> test; T result = 0; int size = a.size(); for(int i = 0; i < size; i++) { result += a[i]; } return result; } int main() { vector<int> int_values; int_values.push_back(2); int_values.push_back(3); cout << "Integer: " << sum(int_values) << endl; vector<double> double_values; double_values.push_back(1.5); double_values.push_back(2.1); cout << "Double: " << sum(double_values); return 0; } A: This is possible by using SFINAE, and made easier by using helpers from either Boost or C++11. Boost: #include <vector> #include <boost/utility/enable_if.hpp> #include <boost/type_traits/is_arithmetic.hpp> template<typename T> typename boost::enable_if<typename boost::is_arithmetic<T>::type, T>::type sum(const std::vector<T>& vec) { typedef typename std::vector<T>::size_type size_type; T result = T(); size_type size = vec.size(); for(size_type i = 0; i < size; i++) { result += vec[i]; } return result; } C++11: #include <vector> #include <type_traits> template<typename T> typename std::enable_if<std::is_arithmetic<T>::value, T>::type sum(const std::vector<T>& vec) { T result = T(); for (auto item : vec) result += item; return result; } A: You can do something like this: template <class T> class NumbersOnly { private: void ValidateType( int &i ) const {} void ValidateType( long &l ) const {} void ValidateType( double &d ) const {} void ValidateType( float &f ) const {} public: NumbersOnly() { T valid; ValidateType( valid ); } }; You will get an error if you try to create a NumbersOnly that doesn't have a ValidateType overload: NumbersOnly<int> justFine; NumbersOnly<SomeClass> noDeal; A: The only way to restrict a template is to make it so that it uses something from the
types that you want, that other types don't have. So, you construct with an int, use + and +=, call a copy constructor, etc. Any type that has all of these will work with your function -- so, if I create a new type that has these features, your function will work on it -- which is great, isn't it? If you want to restrict it more, use more functions that are only defined for the type you want. Another way to implement this is by creating a traits template -- something like this template<class T> class SumTraits { public: const static bool canUseSum = false; }; And then specialize it for the classes you want to be ok: template<> class SumTraits<int> { public: const static bool canUseSum = true; }; Then in your code, you can write if (!SumTraits<T>::canUseSum) { // throw something here } edit: as mentioned in the comments, you can use BOOST_STATIC_ASSERT to make it a compile-time check instead of a run-time one. A: Why would you want to restrict the types in this case? Templates allow "static duck typing", so any type that supports what your sum function does should be allowed. Specifically, the only operations required of T are add-assignment and initialisation by 0, so any type that supports those two operations would work. That's the beauty of templates. (If you changed your initialiser to T result = T(); or the like, then it would work for both numbers and strings, too.) A: You could look into type traits (use boost, wait for C++0x or create your own). I found the following on Google: http://artins.org/ben/programming/mactechgrp-artin-cpp-type-traits.pdf A: Indeed, there's no need to make it more stringent. Have a look at the string version (using the default constructor style advised by Chris Jester-Young) here... Watch out for overflows, too - you might need a bigger type to contain intermediate results (or output results).
Welcome to the realm of meta-programming, then :) A: Suppose we want our templated add function to accept only ints and floats. We can do something like below. Can be seen here: https://godbolt.org/z/qa4z968hP #include <fmt/format.h> template <typename T> struct restrict_type {}; template<> struct restrict_type<float> {typedef float type;}; template<> struct restrict_type<int> {typedef int type;}; template<typename T> typename restrict_type<T>::type add(T val1, T val2){ return val1 + val2; } int main() { fmt::print("{}\n", add(12, 30)); fmt::print("{}\n", add(12.5f, 30.9f)); }
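As a later-standard footnote to the answers above (my own sketch, not from the original thread): since C++11, a plain static_assert over std::is_arithmetic expresses the same restriction with a readable error message instead of a cryptic failure deep inside the template:

```cpp
#include <type_traits>
#include <vector>

// C++11 variation on the answers above: static_assert turns the
// restriction into a human-readable compile-time error.
template <typename T>
T sum(const std::vector<T>& a) {
    static_assert(std::is_arithmetic<T>::value,
                  "sum() only accepts arithmetic types (int, double, ...)");
    T result = T();
    for (std::size_t i = 0; i < a.size(); ++i)
        result += a[i];
    return result;
}
```

sum(std::vector<int>{2, 3}) compiles and returns 5, while instantiating sum with std::string fails to compile and prints the message above. Unlike the enable_if approach, static_assert does not remove the overload from overload resolution, but for a single function it reads much better.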
{ "language": "en", "url": "https://stackoverflow.com/questions/148373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Nifty live web tracking I'm looking for a real-time web log watcher that can visually display visitors as they browse around different pages etc. I'd like it to be a web application; it's going to be shown on a big screen in the office. Any tips? A: Check out glTail.rb http://www.fudgie.org/ The site includes a video of it in operation too. This is a great-looking app that I would highly recommend; the link includes a video of it in action, and it really is a great thing to have on a big screen. Not web-based, but I think you'll agree it's cool nonetheless. A: You could write one yourself - just track visitors using a server-side database insert, then periodically refresh your info using Ajax. A simple version of this could be written in a very short amount of time. A: There is a CMS called EPiServer that has a very nice application called EPiTrace (http://r.ep.se/projects/EPiTrace/) which has a similar approach to what I want. This application is open source, but requires you to have a license for the CMS to run it. I was hoping there was some commercial or open source application with a similar approach. The marketing people (who are going to have this on their screen) aren't very interested in low-level numbers and protocols. They want to see who's browsing the site, where they are, where they go, etc. And they want to see it in real time in an "easy to understand" way. A: Never tried other ones, but I was amazed when I saw my site in Woopra for the first time. You don't just see your users as they browse the site (where they are, for how long, where they came from, etc.), but you can even talk to them if you want (through a chat client that will pop up directly on the site just for the user you want to talk to -- never tried this option for obvious reasons :D). The only con right now is that you have to wait for a while to get your site approved (the app is still in beta; or at least it was when I registered my site). Good luck!
A: This is the easiest and quickest tool for live web tracking: http://www.realtimevisits.com No fancy stats; very simple and effective. A: Reinvigorate is a cool one. Not a web-based app though. You include a JS file on the page you want monitored and visits are reported to their servers. Then you can download an app that shows every visit.
{ "language": "en", "url": "https://stackoverflow.com/questions/148381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I inspect the Asp.Net request pipeline When I measure request times on "the inside" of an Asp.Net application and compare it to timings on "the outside" of the app, I get different values -- 1000-5000ms strange overheads from time to time. Maybe the requests are being queued up in front of IIS? Or something strange is going on in an HttpModule? The question: Is there a way to inspect the request pipeline for tracing exactly where the time is spent before the app is hit? A: As Dan said, you need to enable tracing at the application level (web.config): <!-- pageOutput enables trace output from the page itself --> <system.web> <trace enabled="true" pageOutput="true" traceMode="SortByTime"/> </system.web> Or you can enable tracing at the page level. This can be done by setting Trace="true" in the Page directive. <%@ Page Language="C#" Trace="true" Inherits="System.Web.UI.Page" CodeFile="Default.aspx.cs" %> The application level tracing can be viewed from http://localhost/appname/trace.axd. This will show a list of requests. When you click on the details of each page you can see how much time each event in the life cycle of the page took. This should help you to figure out where exactly your page is taking more time than expected.
{ "language": "en", "url": "https://stackoverflow.com/questions/148384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Are there any disadvantages to always using nvarchar(MAX)? In SQL Server 2005, are there any disadvantages to making all character fields nvarchar(MAX) rather than specifying a length explicitly, e.g. nvarchar(255)? (Apart from the obvious one that you aren't able to limit the field length at the database level) A: A reason NOT to use max or text fields is that you cannot perform online index rebuilds i.e. REBUILD WITH ONLINE= ON even with SQL Server Enterprise Edition. A: Based on the link provided in the accepted answer it appears that: * *100 characters stored in an nvarchar(MAX) field will be stored no different to 100 characters in an nvarchar(100) field - the data will be stored inline and you will not have the overhead of reading and writing data 'out of row'. So no worries there. *If the size is greater than 4000 the data would be stored 'out of row' automatically, which is what you would want. So no worries there either. However... *You cannot create an index on an nvarchar(MAX) column. You can use full-text indexing, but you cannot create an index on the column to improve query performance. For me, this seals the deal...it is a definite disadvantage to always use nvarchar(MAX). Conclusion: If you want a kind of "universal string length" throughout your whole database, which can be indexed and which will not waste space and access time, then you could use nvarchar(4000). A: It's a fair question and he did state apart from the obvious… Disadvantages could include: Performance implications The query optimizer uses field size to determine the most efficient execution plan "1. The space allocation in extents and pages of the database is flexible. Thus when adding information to the field using update, your database would have to create a pointer if the new data is longer than the previously inserted. Thus the database files would become fragmented = lower performance in almost everything, from index to delete, update and inserts. 
" http://sqlblogcasts.com/blogs/simons/archive/2006/02/28/Why-use-anything-but-varchar_2800_max_2900_.aspx Integration implications - hard for other systems to know how to integrate with your database Unpredictable growth of data Possible security issues e.g. you could crash a system by taking up all disk space There is a good article here: http://searchsqlserver.techtarget.com/tip/1,289483,sid87_gci1098157,00.html A: As of SQL Server 2019, NVARCHAR(MAX) still does not support SCSU “Unicode compression” — even when stored using In-Row data storage. SCSU was added in SQL Server 2008 and applies to any ROW/PAGE-compressed tables and indices. As such, NVARCHAR(MAX) can take up to twice as much physical disk space as an NVARCHAR(1..4000) field with the same text content+ — even when not stored in the LOB. The non-SCSU waste depends on the data and language represented. Unicode Compression Implementation: SQL Server uses an implementation of the Standard Compression Scheme for Unicode (SCSU) algorithm to compress Unicode values that are stored in row or page compressed objects. For these compressed objects, Unicode compression is automatic for nchar(n) and nvarchar(n) columns [and is never used with nvarchar(max)]. On the other hand, PAGE compression (since 2014) still applies to NVARCHAR(MAX) columns if they are written as In-Row data, so lack of SCSU feels like a “missing optimization”. Unlike SCSU, page compression results can vary dramatically based on shared leading prefixes (i.e. duplicate values). However, it may still be “faster” to use NVARCHAR(MAX) even with the higher IO costs with functions like OPENJSON due to avoiding the implicit conversion. This implicit conversion overhead depends on the relative cost of usage and whether the field is touched before or after filtering. This same conversion issue exists when using 2019’s UTF-8 collation in a VARCHAR(MAX) column. 
Using NVARCHAR(1-4000) also requires N*2 bytes of the ~8000 byte row quota, while NVARCHAR(MAX) only requires 24 bytes. Overall design and usage need to be considered together to account for specific implementation details. +In my database / data / schema, by using two columns (coalesced on read) it was possible to reduce disk space usage by ~40% while still supporting overflowing text values. SCSU, despite its flaws, is an amazingly clever and underutilized method of storing Unicode more space-efficiently. A: The only problem I found was that we develop our applications on SQL Server 2005, and in one instance, we have to support SQL Server 2000. I just learned, the hard way, that SQL Server 2000 doesn't like the MAX option for varchar or nvarchar. A: Bad idea when you know the field will be in a set range - 5 to 10 characters, for example. I think I'd only use max if I wasn't sure what the length would be. For example a telephone number would never be more than a certain number of characters. Can you honestly say you are that uncertain about the approximate length requirements for every field in your table? I do get your point though - there are some fields I'd certainly consider using varchar(max). Interestingly the MSDN docs sum it up pretty well: Use varchar when the sizes of the column data entries vary considerably. Use varchar(max) when the sizes of the column data entries vary considerably, and the size might exceed 8,000 bytes. There's an interesting discussion on the issue here. A: The job of the database is to store data so that it can be used by the enterprise. Part of making that data useful is ensuring that it is meaningful. Allowing someone to enter an unlimited number of characters for their first name isn't ensuring meaningful data. Building these constraints into the business layer is a good idea, but that doesn't ensure that the database will remain intact. 
The only way to guarantee that the data rules are not violated is to enforce them at the lowest level possible in the database. A: As was pointed out above, it is primarily a tradeoff between storage and performance. At least in most cases. However, there is at least one other factor that should be considered when choosing n/varchar(Max) over n/varchar(n). Is the data going to be indexed (such as, say, a last name)? Since the MAX definition is considered a LOB, then anything defined as MAX is not available for indexing, and without an index, any lookup involving the data as a predicate in a WHERE clause is going to be forced into a Full Table scan, which is the worst performance you can get for data lookups. A: One problem is that if you are having to work with multiple versions of SQL Server, the MAX will not always work. So if you are working with legacy DB's or any other situation that involves multiple versions, you'd better be very careful. A: Sometimes you want the data type to enforce some sense on the data in it. Say for example you have a column that really shouldn't be longer than, say, 20 characters. If you define that column as VARCHAR(MAX), some rogue application could insert a long string into it and you'd never know, or have any way of preventing it. The next time your application uses that string, under the assumption that the length of the string is modest and reasonable for the domain it represents, you will experience an unpredictable and confusing result. A: I checked some articles and found a useful test script here: http://www.sqlservercentral.com/Forums/Topic1480639-1292-1.aspx Then I changed it to compare NVARCHAR(10) vs NVARCHAR(4000) vs NVARCHAR(MAX), and I don't find a speed difference when using specified numbers, but I do when using MAX. You can test it yourself. Hope this helps. 
SET NOCOUNT ON; --===== Test Variable Assignment 1,000,000 times using NVARCHAR(10) DECLARE @SomeString NVARCHAR(10), @StartTime DATETIME; --===== SELECT @startTime = GETDATE(); SELECT TOP 1000000 @SomeString = 'ABC' FROM master.sys.all_columns ac1, master.sys.all_columns ac2; SELECT testTime='10', Duration = DATEDIFF(ms,@StartTime,GETDATE()); GO --===== Test Variable Assignment 1,000,000 times using NVARCHAR(4000) DECLARE @SomeString NVARCHAR(4000), @StartTime DATETIME; SELECT @startTime = GETDATE(); SELECT TOP 1000000 @SomeString = 'ABC' FROM master.sys.all_columns ac1, master.sys.all_columns ac2; SELECT testTime='4000', Duration = DATEDIFF(ms,@StartTime,GETDATE()); GO --===== Test Variable Assignment 1,000,000 times using NVARCHAR(MAX) DECLARE @SomeString NVARCHAR(MAX), @StartTime DATETIME; SELECT @startTime = GETDATE(); SELECT TOP 1000000 @SomeString = 'ABC' FROM master.sys.all_columns ac1, master.sys.all_columns ac2; SELECT testTime='MAX', Duration = DATEDIFF(ms,@StartTime,GETDATE()); GO A: 1) The SQL server will have to utilize more resources (allocated memory and cpu time) when dealing with nvarchar(max) vs nvarchar(n) where n is a number specific to the field. 2) What does this mean in regards to performance? On SQL Server 2005, I queried 13,000 rows of data from a table with 15 nvarchar(max) columns. I timed the queries repeatedly and then changed the columns to nvarchar(255) or less. The queries prior to the optimization averaged at 2.0858 seconds. The queries after the change returned in an average of 1.90 seconds. That was about 184 milliseconds of improvement to the basic select * query. That is an 8.8% improvement. 3) My results are in concurrence with a few other articles that indicated that there was a performance difference. Depending on your database and the query, the percentage of improvement can vary. If you don't have a lot of concurrent users or very many records, then the performance difference won't be an issue for you. 
However, the performance difference will increase as more records and concurrent users increase. A: The same question was asked on MSDN Forums: * *Varchar(max) vs Varchar(255) From the original post (much more information there): When you store data to a VARCHAR(N) column, the values are physically stored in the same way. But when you store it to a VARCHAR(MAX) column, behind the scenes the data is handled as a TEXT value. So there is some additional processing needed when dealing with a VARCHAR(MAX) value. (only if the size exceeds 8000) VARCHAR(MAX) or NVARCHAR(MAX) is considered a 'large value type'. Large value types are usually stored 'out of row'. It means that the data row will have a pointer to another location where the 'large value' is stored... A: Think of it as just another safety level. You can design your table without foreign key relationships - perfectly valid - and ensure existence of associated entities entirely on the business layer. However, foreign keys are considered good design practice because they add another constraint level in case something messes up on the business layer. Same goes for field size limitation and not using varchar MAX. A: I had a udf which padded strings and put the output to varchar(max). If this was used directly instead of casting back to the appropriate size for the column being adjusted, the performance was very poor. I ended up putting the udf to an arbitrary length with a big note instead of relying on all the callers of the udf to re-cast the string to a smaller size. A: legacy system support. If you have a system that is using the data and it is expected to be a certain length then the database is a good place to enforce the length. This is not ideal but legacy systems are sometimes not ideal. =P A: If all of the data in a row (across all the columns) would never reasonably take more than 8000 characters, then the design at the data layer should enforce this. 
The database engine is much more efficient keeping everything out of blob storage. The smaller you can restrict a row the better. The more rows you can cram in a page the better. The database just performs better when it has to access fewer pages. A: My tests have shown that there are differences when selecting. CREATE TABLE t4000 (a NVARCHAR(4000) NULL); CREATE TABLE tmax (a NVARCHAR(MAX) NULL); DECLARE @abc4 NVARCHAR(4000) = N'ABC'; INSERT INTO t4000 SELECT TOP 1000000 @abc4 FROM master.sys.all_columns ac1, master.sys.all_columns ac2; DECLARE @abc NVARCHAR(MAX) = N'ABC'; INSERT INTO tmax SELECT TOP 1000000 @abc FROM master.sys.all_columns ac1, master.sys.all_columns ac2; SET STATISTICS TIME ON; SET STATISTICS IO ON; SELECT * FROM dbo.t4000; SELECT * FROM dbo.tmax; A: Interesting link: Why use a VARCHAR when you can use TEXT? It's about PostgreSQL and MySQL, so the performance analysis is different, but the logic for "explicitness" still holds: Why force yourself to always worry about something that's relevant a small percentage of the time? If you saved an email address to a variable, you'd use a 'string' not a 'string limited to 80 chars'. A: The main disadvantage I can see is this: let's say you have one of the two definitions below. Which one gives you the most information about the data needed for the UI? 
This CREATE TABLE [dbo].[BusData]( [ID] [int] IDENTITY(1,1) NOT NULL, [RecordId] [nvarchar](MAX) NULL, [CompanyName] [nvarchar](MAX) NOT NULL, [FirstName] [nvarchar](MAX) NOT NULL, [LastName] [nvarchar](MAX) NOT NULL, [ADDRESS] [nvarchar](MAX) NOT NULL, [CITY] [nvarchar](MAX) NOT NULL, [County] [nvarchar](MAX) NOT NULL, [STATE] [nvarchar](MAX) NOT NULL, [ZIP] [nvarchar](MAX) NOT NULL, [PHONE] [nvarchar](MAX) NOT NULL, [COUNTRY] [nvarchar](MAX) NOT NULL, [NPA] [nvarchar](MAX) NULL, [NXX] [nvarchar](MAX) NULL, [XXXX] [nvarchar](MAX) NULL, [CurrentRecord] [nvarchar](MAX) NULL, [TotalCount] [nvarchar](MAX) NULL, [Status] [int] NOT NULL, [ChangeDate] [datetime] NOT NULL ) ON [PRIMARY] Or This? CREATE TABLE [dbo].[BusData]( [ID] [int] IDENTITY(1,1) NOT NULL, [RecordId] [nvarchar](50) NULL, [CompanyName] [nvarchar](50) NOT NULL, [FirstName] [nvarchar](50) NOT NULL, [LastName] [nvarchar](50) NOT NULL, [ADDRESS] [nvarchar](50) NOT NULL, [CITY] [nvarchar](50) NOT NULL, [County] [nvarchar](50) NOT NULL, [STATE] [nvarchar](2) NOT NULL, [ZIP] [nvarchar](16) NOT NULL, [PHONE] [nvarchar](18) NOT NULL, [COUNTRY] [nvarchar](50) NOT NULL, [NPA] [nvarchar](3) NULL, [NXX] [nvarchar](3) NULL, [XXXX] [nvarchar](4) NULL, [CurrentRecord] [nvarchar](50) NULL, [TotalCount] [nvarchar](50) NULL, [Status] [int] NOT NULL, [ChangeDate] [datetime] NOT NULL ) ON [PRIMARY] A: One disadvantage is that you will be designing around an unpredictable variable, and you will probably ignore instead of take advantage of the internal SQL Server data structure, progressively made up of Row(s), Page(s), and Extent(s). Which makes me think about data structure alignment in C, and that being aware of the alignment is generally considered to be a Good Thing (TM). Similar idea, different context. MSDN page for Pages and Extents MSDN page for Row-Overflow Data A: firstly I thought about this, but then thought again. 
There are performance implications, but equally it does serve as a form of documentation to have an idea what size the fields really are. And it does enforce limits when that database sits in a larger ecosystem. In my opinion the key is to be permissive but only within reason. OK, here are my feelings, simply, on the issue of business and data layer logic. It depends: if your DB is a shared resource between systems that share business logic then of course it seems a natural place to enforce such logic, but it's not the BEST way to do it. The BEST way is to provide an API; this allows the interaction to be tested and keeps business logic where it belongs, it keeps systems decoupled, it keeps your tiers within a system decoupled. If however your database is supposed to be serving only one application, then let's get AGILE in thinking, what's true now? design for now. If and when such access is needed, provide an API to that data. Obviously, though, this is just the ideal; if you are working with an existing system the likelihood is that you will need to do it differently at least in the short term. A: It will make screen design harder as you will no longer be able to predict how wide your controls should be. A: This will cause a performance problem, although it may never cause any actual issues if your database is small. Each record will take up more space on the hard drive and the database will need to read more sectors of the disk if you're searching through a lot of records at once. For example, a small record could fit 50 to a sector and a large record could fit 5. You'd need to read 10 times as much data from the disk using the large record.
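The indexing restriction mentioned in several answers above is easy to see directly. A short illustrative T-SQL sketch (the table and column names are made up for this example; the quoted message is what SQL Server typically reports for error 1919):

```sql
-- Hypothetical table, just to illustrate the indexing point made above.
CREATE TABLE dbo.Customer (
    Id    int IDENTITY(1,1) PRIMARY KEY,
    Email nvarchar(450) NOT NULL,  -- 450 chars * 2 bytes = 900 bytes, the classic index key limit
    Notes nvarchar(max) NULL
);

-- Works: the column has a bounded size.
CREATE INDEX IX_Customer_Email ON dbo.Customer (Email);

-- Fails: "Column 'Notes' in table 'dbo.Customer' is of a type that is
-- invalid for use as a key column in an index."
-- CREATE INDEX IX_Customer_Notes ON dbo.Customer (Notes);

-- A MAX column can still ride along as a non-key INCLUDEd column:
CREATE INDEX IX_Customer_Email_Notes ON dbo.Customer (Email) INCLUDE (Notes);
```

So a MAX column cannot be an index key, which is the "seals the deal" disadvantage from the accepted-answer discussion, though it can still be covered by an index via INCLUDE.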
{ "language": "en", "url": "https://stackoverflow.com/questions/148398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "377" }
Q: UTF8 to/from wide char conversion in STL Is it possible to convert a UTF8 string in a std::string to a std::wstring and vice versa in a platform independent manner? In a Windows application I would use MultiByteToWideChar and WideCharToMultiByte. However, the code is compiled for multiple OSes and I'm limited to the standard C++ library. A: I asked this question 5 years ago. This thread was very helpful for me back then, I came to a conclusion, then I moved on with my project. It is funny that I needed something similar recently, totally unrelated to that project from the past. As I was researching possible solutions, I stumbled upon my own question :) The solution I chose now is based on C++11. The boost libraries that Constantin mentions in his answer are now part of the standard. If we replace std::wstring with the new string type std::u16string, then the conversions will look like this: UTF-8 to UTF-16 std::string source; ... std::wstring_convert<std::codecvt_utf8_utf16<char16_t>,char16_t> convert; std::u16string dest = convert.from_bytes(source); UTF-16 to UTF-8 std::u16string source; ... std::wstring_convert<std::codecvt_utf8_utf16<char16_t>,char16_t> convert; std::string dest = convert.to_bytes(source); As seen from the other answers, there are multiple approaches to the problem. That's why I refrain from picking an accepted answer. A: The problem definition explicitly states that the 8-bit character encoding is UTF-8. That makes this a trivial problem; all it requires is a little bit-twiddling to convert from one UTF spec to another. Just look at the encodings on these Wikipedia pages for UTF-8, UTF-16, and UTF-32. The principle is simple - go through the input and assemble a 32-bit Unicode code point according to one UTF spec, then emit the code point according to the other spec. The individual code points need no translation, as would be required with any other character encoding; that's what makes this a simple problem. 
Here's a quick implementation of wchar_t to UTF-8 conversion and vice versa. It assumes that the input is already properly encoded - the old saying "Garbage in, garbage out" applies here. I believe that verifying the encoding is best done as a separate step. std::string wchar_to_UTF8(const wchar_t * in) { std::string out; unsigned int codepoint = 0; for (; *in != 0; ++in) { if (*in >= 0xd800 && *in <= 0xdbff) codepoint = ((*in - 0xd800) << 10) + 0x10000; else { if (*in >= 0xdc00 && *in <= 0xdfff) codepoint |= *in - 0xdc00; else codepoint = *in; if (codepoint <= 0x7f) out.append(1, static_cast<char>(codepoint)); else if (codepoint <= 0x7ff) { out.append(1, static_cast<char>(0xc0 | ((codepoint >> 6) & 0x1f))); out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f))); } else if (codepoint <= 0xffff) { out.append(1, static_cast<char>(0xe0 | ((codepoint >> 12) & 0x0f))); out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f))); out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f))); } else { out.append(1, static_cast<char>(0xf0 | ((codepoint >> 18) & 0x07))); out.append(1, static_cast<char>(0x80 | ((codepoint >> 12) & 0x3f))); out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f))); out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f))); } codepoint = 0; } } return out; } The above code works for both UTF-16 and UTF-32 input, simply because the range d800 through dfff contains invalid code points; they indicate that you're decoding UTF-16. If you know that wchar_t is 32 bits then you could remove some code to optimize the function. 
std::wstring UTF8_to_wchar(const char * in) { std::wstring out; unsigned int codepoint; while (*in != 0) { unsigned char ch = static_cast<unsigned char>(*in); if (ch <= 0x7f) codepoint = ch; else if (ch <= 0xbf) codepoint = (codepoint << 6) | (ch & 0x3f); else if (ch <= 0xdf) codepoint = ch & 0x1f; else if (ch <= 0xef) codepoint = ch & 0x0f; else codepoint = ch & 0x07; ++in; if (((*in & 0xc0) != 0x80) && (codepoint <= 0x10ffff)) { if (sizeof(wchar_t) > 2) out.append(1, static_cast<wchar_t>(codepoint)); else if (codepoint > 0xffff) { out.append(1, static_cast<wchar_t>(0xd800 + (codepoint >> 10))); out.append(1, static_cast<wchar_t>(0xdc00 + (codepoint & 0x03ff))); } else if (codepoint < 0xd800 || codepoint >= 0xe000) out.append(1, static_cast<wchar_t>(codepoint)); } } return out; } Again if you know that wchar_t is 32 bits you could remove some code from this function, but in this case it shouldn't make any difference. The expression sizeof(wchar_t) > 2 is known at compile time, so any decent compiler will recognize dead code and remove it. A: UTF8-CPP: UTF-8 with C++ in a Portable Way A: You can extract utf8_codecvt_facet from Boost serialization library. Their usage example: typedef wchar_t ucs4_t; std::locale old_locale; std::locale utf8_locale(old_locale,new utf8_codecvt_facet<ucs4_t>); // Set a New global locale std::locale::global(utf8_locale); // Send the UCS-4 data out, converting to UTF-8 { std::wofstream ofs("data.ucd"); ofs.imbue(utf8_locale); std::copy(ucs4_data.begin(),ucs4_data.end(), std::ostream_iterator<ucs4_t,ucs4_t>(ofs)); } // Read the UTF-8 data back in, converting to UCS-4 on the way in std::vector<ucs4_t> from_file; { std::wifstream ifs("data.ucd"); ifs.imbue(utf8_locale); ucs4_t item = 0; while (ifs >> item) from_file.push_back(item); } Look for utf8_codecvt_facet.hpp and utf8_codecvt_facet.cpp files in boost sources. A: You can use the codecvt locale facet. 
There's a specific specialisation defined, codecvt<wchar_t, char, mbstate_t> that may be of use to you, although, the behaviour of that is system-specific, and does not guarantee conversion to UTF-8 in any way. A: There are several ways to do this, but the results depend on what the character encodings are in the string and wstring variables. If you know the string is ASCII, you can simply use wstring's iterator constructor: string s = "This is surely ASCII."; wstring w(s.begin(), s.end()); If your string has some other encoding, however, you'll get very bad results. If the encoding is Unicode, you could take a look at the ICU project, which provides a cross-platform set of libraries that convert to and from all sorts of Unicode encodings. If your string contains characters in a code page, then may $DEITY have mercy on your soul. A: Created my own library for utf-8 to utf-16/utf-32 conversion - but decided to make a fork of existing project for that purpose. https://github.com/tapika/cutf (Originated from https://github.com/noct/cutf ) API works with plain C as well as with C++. Function prototypes looks like this: (For full list see https://github.com/tapika/cutf/blob/master/cutf.h ) // // Converts utf-8 string to wide version. // // returns target string length. // size_t utf8towchar(const char* s, size_t inSize, wchar_t* out, size_t bufSize); // // Converts wide string to utf-8 string. 
// // returns filled buffer length (not string length) // size_t wchartoutf8(const wchar_t* s, size_t inSize, char* out, size_t outsize); #ifdef __cplusplus std::wstring utf8towide(const char* s); std::wstring utf8towide(const std::string& s); std::string widetoutf8(const wchar_t* ws); std::string widetoutf8(const std::wstring& ws); #endif Sample usage / simple test application for utf conversion testing: #include "cutf.h" #define ok(statement) \ if( !(statement) ) \ { \ printf("Failed statement: %s\n", #statement); \ r = 1; \ } int simpleStringTest() { const wchar_t* chineseText = L"主体"; auto s = widetoutf8(chineseText); size_t r = 0; printf("simple string test: "); ok( s.length() == 6 ); uint8_t utf8_array[] = { 0xE4, 0xB8, 0xBB, 0xE4, 0xBD, 0x93 }; for(int i = 0; i < 6; i++) ok(((uint8_t)s[i]) == utf8_array[i]); auto ws = utf8towide(s); ok(ws.length() == 2); ok(ws == chineseText); if( r == 0 ) printf("ok.\n"); return (int)r; } And if this library does not satisfy your needs - feel free to open following link: http://utf8everywhere.org/ and scroll down at the end of page and pick up any heavier library which you like. A: I don't think there's a portable way of doing this. C++ doesn't know the encoding of its multibyte characters. As Chris suggested, your best bet is to play with codecvt.
{ "language": "en", "url": "https://stackoverflow.com/questions/148403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: Why does the code below return true only for a = 1? Why does the code below return true only for a = 1? main(){ int a = 10; if (true == a) cout<<"Why am I not getting executed"; } A: Your boolean is promoted to an integer, and becomes 1. A: When a bool true is converted to an int, it's always converted to 1. Your code is thus equivalent to: main(){ int a = 10; if (1 == a) cout<<"y i am not getting executed"; } This is part of the C++ standard, so it's something you would expect to happen with every C++ standards compliant compiler. A: in C and C++, 0 is false and anything but zero is true: if ( 0 ) { // never run } if ( 1 ) { // always run } if ( var1 == 1 ) { // run when var1 is "1" } When the compiler calculates a boolean expression it is obliged to produce 0 or 1. Also, there are a couple of handy typedefs and defines, which allow you to use "true" and "false" instead of 1 and 0 in your expressions. So your code actually looks like this: main(){ int a = 10; if (1 == a) cout<<"y i am not getting executed"; } You probably want: main(){ int a = 10; if (true == (bool)a) cout<<"if you want to explicitly use true/false"; } or really just: main(){ int a = 10; if ( a ) cout<<"usual C++ style"; } A: The reason your print statement is not getting executed is that your boolean is getting implicitly converted to a number instead of the other way around. I.e. your if statement is equivalent to this: if (1 == a) You could get around this by first explicitly converting it to a boolean: main(){ int a = 10; if (((bool)a) == true) cout<<"I am definitely getting executed"; } In C/C++ false is represented as 0. Everything else is represented as non zero. That is sometimes 1, sometimes anything else. So you should never test for equality (==) to something that is true. Instead you should test for equality to something that is false. Since false has only 1 valid value. 
Here we are testing for all non false values, any of them is fine: main(){ int a = 10; if (a) cout<<"I am definitely getting executed"; } And a third example just to prove that it is safe to compare any integer that is considered false to a false (which is only 0): main(){ int a = 0; if (a == false) cout<<"I am definitely getting executed"; } A: Because true is 1. If you want to test a for a non-zero value, just write if(a). A: I suggest you switch to a compiler that warns you about this... (VC++ yields this: warning C4806: '==' : unsafe operation: no value of type 'bool' promoted to type 'int' can equal the given constant; I don't have another compiler at hand.) I agree with Lou Franco - you want to know if a variable is bigger than zero (or unequal to it), test for that. Everything that's done implicitly by the compiler is hazardous if you don't know the last detail. A: Here is the way most people write that kind of code: main(){ int a = 10; if (a) // all non-zero satisfy 'truth' cout<<"y i am not getting executed"; } I have also seen: main(){ int a = 10; if (!!a == true) // ! result is guaranteed to be == true or == false cout<<"y i am not getting executed"; } A: Because a boolean is a bit in C/C++ and true is represented by 1, false by 0. Update: as said in the comment my original Answer is false. So bypass it. A: Because true is equal to 1. It is defined in a pre-processor directive, so all code with true in it is turned into 1 before compile time. A: I wouldn't expect that code to be defined and you shouldn't depend on whatever behavior your compiler is giving you. Probably the true is being converted to an int (1), and a is not being converted to a bool (true) as you expect. Better to write what you mean (a != 0) than to depend on this (even if it turns out to be defined). A: something different from 0 (that is false) is not necessarily true (that is 1)
{ "language": "en", "url": "https://stackoverflow.com/questions/148407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Removing button while progress is displayed I have a button on an ASP.NET web application form and when clicked it goes off and posts information to a third party web service. I have an UpdateProgress associated with the button. How do I disable/hide the button while the progress is visible (i.e. the server has not completed the operation)? I am looking at doing this to stop users clicking again when the information is being sent (as this results in duplicate information being sent) A: You'll have to hook a javascript method to the page request manager (Sys.WebForms.PageRequestManager.getInstance().add_initializeRequest). Here is the code I would use to hide the buttons, though I would prefer to disable them (see how that's done in the link at the bottom). ASP.NET <div id="ButtonBar"> <asp:Button id= ............ </div> Javascript <script language="javascript"> // Get a reference to the PageRequestManager. var prm = Sys.WebForms.PageRequestManager.getInstance(); // Using that prm reference, hook _initializeRequest // and _endRequest, to run our code at the begin and end // of any async postbacks that occur. prm.add_initializeRequest(InitializeRequest); prm.add_endRequest(EndRequest); // Executed anytime an async postback occurs. function InitializeRequest(sender, args) { $get('ButtonBar').style.visibility = "hidden"; } // Executed when the async postback completes. function EndRequest(sender, args) { $get('ButtonBar').style.visibility = "visible"; } </script> See more about this at Why my ASP.NET AJAX forms are never submitted twice by Dave Ward. A: Easiest way is to put a semi-transparent png over the entire page -- then they can't send events to the page below. It looks kind of nice too, in my opinion. You see that kind of thing in the modal dialog box implementations of various AJAX toolkits and in lightbox. If you don't like the look, you just need to make it almost fully transparent (an alpha value 1 off of fully transparent isn't noticeable). 
A: I also wrote a blog post about this which I hope is helpful to you: http://www.fitnessconnections.com/blog/post/2008/01/Disabling-a-submit-button-UpdatePanel-Update-method.aspx Cheers :)
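As a framework-free illustration of the same disable-while-busy idea from the answers above (setBusy and its labels are names made up for this sketch, not part of ASP.NET AJAX):

```javascript
// Framework-free sketch of "disable the button while the postback runs".
// `setBusy` works on any object exposing `disabled` and `textContent`
// (a real <button> element in a page, or a plain stub when testing).
function setBusy(button, busy) {
  button.disabled = busy; // blocks further clicks while the request is in flight
  button.textContent = busy ? "Sending..." : "Send";
  return button;
}

// In a page you would wire it to the begin/end of the async postback, e.g.:
//   prm.add_initializeRequest(function () { setBusy(myButton, true); });
//   prm.add_endRequest(function () { setBusy(myButton, false); });

// Quick check with a stub object:
const stub = { disabled: false, textContent: "Send" };
setBusy(stub, true);
console.assert(stub.disabled === true && stub.textContent === "Sending...");
setBusy(stub, false);
console.assert(stub.disabled === false && stub.textContent === "Send");
```

Disabling (rather than hiding) also keeps the layout stable while the progress indicator is shown.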
{ "language": "en", "url": "https://stackoverflow.com/questions/148421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I get the content of the file specified as the 'src' of a <script> tag? A: I don't think the contents will be available via the DOM. You could get the value of the src attribute and use AJAX to request the file from the server. A: tl;dr script tags are not subject to CORS and same-origin-policy and therefore javascript/DOM cannot offer access to the text content of the resource loaded via a <script> tag, or it would break same-origin-policy. long version: Most of the other answers (and the accepted answer) indicate correctly that the "correct" way to get the text content of a javascript file inserted via a <script> loaded into the page, is using an XMLHttpRequest to perform another separate additional request for the resource indicated in the script's src property, something which the short javascript code below will demonstrate. However, I found that the other answers did not address why this extra request is needed to get the javascript file's text content, which is that allowing access to the content of a file included via <script src=[url]></script> would break the CORS policies, e.g. modern browsers prevent the XHR of resources that do not provide the Access-Control-Allow-Origin header, hence browsers do not allow any way other than those subject to CORS to get the content. With the following code (as mentioned in the other answers: "use XHR/AJAX") it is possible to do another request for all non-inline script tags in the document. 
function printScriptTextContent(script) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", script.src);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      console.log("the script text content is", xhr.responseText);
    }
  };
  xhr.send();
}

Array.prototype.slice.call(document.querySelectorAll("script[src]")).forEach(printScriptTextContent);
and so I will not repeat that, but would instead like to use this answer to add an explanation of why that is.
Here is code, tested and working: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd"> <html> <head> <script id="scriptData"> var scriptData = [ { name: "foo" , url: "path/to/foo" }, { name: "bar" , url: "path/to/bar" } ]; </script> <script id="scriptLoader"> var LOADER = { loadedCount: 0, toBeLoadedCount: 0, load_jQuery: function (){ var jqNode = document.createElement("script"); jqNode.setAttribute("src", "/path/to/jquery"); jqNode.setAttribute("onload", "LOADER.loadScripts();"); jqNode.setAttribute("id", "jquery"); document.head.appendChild(jqNode); }, loadScripts: function (){ var scriptDataLookup = this.scriptDataLookup = {}; var scriptNodes = this.scriptNodes = {}; var scriptNodesArr = this.scriptNodesArr = []; for (var j=0; j<scriptData.length; j++){ var theEntry = scriptData[j]; scriptDataLookup[theEntry.name] = theEntry; } //console.log(JSON.stringify(scriptDataLookup, null, 4)); for (var i=0; i<scriptData.length; i++){ var entry = scriptData[i]; var name = entry.name; var theURL = entry.url; this.toBeLoadedCount++; var node = document.createElement("script"); node.setAttribute("id", name); scriptNodes[name] = node; scriptNodesArr.push(node); jQuery.ajax({ method : "GET", url : theURL, dataType : "text" }).done(this.makeHandler(name, node)).fail(this.makeFailHandler(name, node)); } }, makeFailHandler: function(name, node){ var THIS = this; return function(xhr, errorName, errorMessage){ console.log(name, "FAIL"); console.log(xhr); console.log(errorName); console.log(errorMessage); debugger; } }, makeHandler: function(name, node){ var THIS = this; return function (fileContents, status, xhr){ THIS.loadedCount++; //console.log("loaded", name, "content length", fileContents.length, "status", status); //console.log("loaded:", THIS.loadedCount, "/", THIS.toBeLoadedCount); THIS.scriptDataLookup[name].fileContents = fileContents; if (THIS.loadedCount >= THIS.toBeLoadedCount){ THIS.allScriptsLoaded(); } } }, 
allScriptsLoaded: function(){ for (var i=0; i<this.scriptNodesArr.length; i++){ var scriptNode = this.scriptNodesArr[i]; var name = scriptNode.id; var data = this.scriptDataLookup[name]; var fileContents = data.fileContents; var textNode = document.createTextNode(fileContents); scriptNode.appendChild(textNode); document.head.appendChild(scriptNode); // execution is here //console.log(scriptNode); } // call code to make the frames here } }; </script> </head> <frameset rows="200pixels,*" onload="LOADER.load_jQuery();"> <frame src="about:blank"></frame> <frame src="about:blank"></frame> </frameset> </html> related question A: if you want the contents of the src attribute, you would have to do an ajax request and look at the responseText. If you were to have the js between <script> and </script> you could access it through innerHTML. This might be of interest: http://ejohn.org/blog/degrading-script-tags/ A: .text did get you the contents of the tag, it's just that you have nothing between your open tag and your end tag. You can get the src attribute of the element using .src, and then if you want to get the javascript file you would follow the link and make an ajax request for it. A: In a comment to my previous answer: I want to store the content of the script so that I can cache it and use it directly some time later without having to fetch it from the external web server (not on the same server as the page) In that case you're better off using a server side script to fetch and cache the script file. Depending on your server setup you could just wget the file (periodically via cron if you expect it to change) or do something similar with a small script in the language of your choice. A: I had the same issue, so I solved it this way: * *The js file contains something like window.someVarForReturn = `content for return` *On html <script src="file.js"></script> <script>console.log(someVarForReturn)</script> In my case the content was html template.
So I did something like this: * *On js file window.someVarForReturn = `<div>My template</div>` *On html <script src="file.js"></script> <script> new DOMParser().parseFromString(someVarForReturn, 'text/html').body.children[0] </script> A: You cannot directly get what the browser loaded as the content of your specific script tag (security hazard); But you can request the same resource (src) again ( which will succeed immediately due to cache ) and read its text: const scriptSrc = document.querySelector('script#yours').src; // re-request the same location const scriptContent = await fetch(scriptSrc).then((res) => res.text()); A: Not sure why you would need to do this? Another way round would be to hold the script in a hidden element somewhere and use Eval to run it. You could then query the object's innerHTML property. A: If a src attribute is provided, user agents are required to ignore the content of the element, if you need to access it from the external script, then you are probably doing something wrong. Update: I see you've added a comment to the effect that you want to cache the script and use it later. To what end? Assuming your HTTP is cache friendly, then your caching needs are likely taken care of by the browser already. A: Using 2008-style DOM-binding it would rather be: document.getElementById('myscript').getAttribute("src"); document.getElementById('myscript').getAttribute("type"); A: You want to use the innerHTML property to get the contents of the script tag: document.getElementById("myscript").innerHTML But as @olle said in another answer you probably want to have a read of: http://ejohn.org/blog/degrading-script-tags/ A: I'd suggest the answer to this question is using the "innerHTML" property of the DOM element. Certainly, if the script has loaded, you do not need to make an Ajax call to get it. So Sugendran should be correct (not sure why he was voted down without explanation).
var scriptContent = document.getElementById("myscript").innerHTML; The innerHTML property of the script element should give you the script's content as a string provided the script element is: * *an inline script, or *that the script has loaded (if using the src attribute) olle also gives the answer, but I think it got 'muddled' by his suggesting it needs to be loaded through ajax first, and I think he meant "inline" instead of between. if you were to have the js between <script> and </script> you could access it through innerHTML. Regarding the usefulness of this technique: I've looked to use this technique for client side error logging (of javascript exceptions) after getting "undefined variables" which aren't contained within my own scripts (such as badly injected scripts from toolbars or extensions) - so I don't think it's such a way-out idea. A: If you're looking to access the attributes of the <script> tag rather than the contents of script.js, then XPath may well be what you're after. It will allow you to get each of the script attributes. If it's the example.js file contents you're after, then you can fire off an AJAX request to fetch it. A: It's funny but we can't, we have to fetch them again over the internet. Likely the browser will read its cache, but a ping is still sent to verify the content-length.
[...document.scripts].forEach((script) => {
  fetch(script.src)
    .then((response) => response.text())
    .then((source) => console.log(source));
});
{ "language": "en", "url": "https://stackoverflow.com/questions/148441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: How to use sed to replace only the first occurrence in a file? I would like to update a large number of C++ source files with an extra include directive before any existing #includes. For this sort of task, I normally use a small bash script with sed to re-write the file. How do I get sed to replace just the first occurrence of a string in a file rather than replacing every occurrence? If I use sed s/#include/#include "newfile.h"\n#include/ it replaces all #includes. Alternative suggestions to achieve the same thing are also welcome. A: An overview of the many helpful existing answers, complemented with explanations: The examples here use a simplified use case: replace the word 'foo' with 'bar' in the first matching line only. Due to use of ANSI C-quoted strings ($'...') to provide the sample input lines, bash, ksh, or zsh is assumed as the shell. GNU sed only: Ben Hoffstein's answer shows us that GNU provides an extension to the POSIX specification for sed that allows the following 2-address form: 0,/re/ (re represents an arbitrary regular expression here). 0,/re/ allows the regex to match on the very first line also. In other words: such an address will create a range from the 1st line up to and including the line that matches re - whether re occurs on the 1st line or on any subsequent line. * *Contrast this with the POSIX-compliant form 1,/re/, which creates a range that matches from the 1st line up to and including the line that matches re on subsequent lines; in other words: this will not detect the first occurrence of an re match if it happens to occur on the 1st line and also prevents the use of shorthand // for reuse of the most recently used regex (see next point).1 If you combine a 0,/re/ address with an s/.../.../ (substitution) call that uses the same regular expression, your command will effectively only perform the substitution on the first line that matches re.
sed provides a convenient shortcut for reusing the most recently applied regular expression: an empty delimiter pair, //. $ sed '0,/foo/ s//bar/' <<<$'1st foo\nUnrelated\n2nd foo\n3rd foo' 1st bar # only 1st match of 'foo' replaced Unrelated 2nd foo 3rd foo A POSIX-features-only sed such as BSD (macOS) sed (will also work with GNU sed): Since 0,/re/ cannot be used and the form 1,/re/ will not detect re if it happens to occur on the very first line (see above), special handling for the 1st line is required. MikhailVS's answer mentions the technique, put into a concrete example here: $ sed -e '1 s/foo/bar/; t' -e '1,// s//bar/' <<<$'1st foo\nUnrelated\n2nd foo\n3rd foo' 1st bar # only 1st match of 'foo' replaced Unrelated 2nd foo 3rd foo Note: * *The empty regex // shortcut is employed twice here: once for the endpoint of the range, and once in the s call; in both cases, regex foo is implicitly reused, allowing us not to have to duplicate it, which makes both for shorter and more maintainable code. *POSIX sed needs actual newlines after certain functions, such as after the name of a label or even its omission, as is the case with t here; strategically splitting the script into multiple -e options is an alternative to using an actual newlines: end each -e script chunk where a newline would normally need to go. 1 s/foo/bar/ replaces foo on the 1st line only, if found there. If so, t branches to the end of the script (skips remaining commands on the line). (The t function branches to a label only if the most recent s call performed an actual substitution; in the absence of a label, as is the case here, the end of the script is branched to). When that happens, range address 1,//, which normally finds the first occurrence starting from line 2, will not match, and the range will not be processed, because the address is evaluated when the current line is already 2. Conversely, if there's no match on the 1st line, 1,// will be entered, and will find the true first match. 
The net effect is the same as with GNU sed's 0,/re/: only the first occurrence is replaced, whether it occurs on the 1st line or any other. NON-range approaches potong's answer demonstrates loop techniques that bypass the need for a range; since he uses GNU sed syntax, here are the POSIX-compliant equivalents: Loop technique 1: On first match, perform the substitution, then enter a loop that simply prints the remaining lines as-is: $ sed -e '/foo/ {s//bar/; ' -e ':a' -e '$!{n;ba' -e '};}' <<<$'1st foo\nUnrelated\n2nd foo\n3rd foo' 1st bar Unrelated 2nd foo 3rd foo Loop technique 2, for smallish files only: read the entire input into memory, then perform a single substitution on it. $ sed -e ':a' -e '$!{N;ba' -e '}; s/foo/bar/' <<<$'1st foo\nUnrelated\n2nd foo\n3rd foo' 1st bar Unrelated 2nd foo 3rd foo 1 1.61803 provides examples of what happens with 1,/re/, with and without a subsequent s//: * *sed '1,/foo/ s/foo/bar/' <<<$'1foo\n2foo' yields $'1bar\n2bar'; i.e., both lines were updated, because line number 1 matches the 1st line, and regex /foo/ - the end of the range - is then only looked for starting on the next line. Therefore, both lines are selected in this case, and the s/foo/bar/ substitution is performed on both of them. *sed '1,/foo/ s//bar/' <<<$'1foo\n2foo\n3foo' fails: with sed: first RE may not be empty (BSD/macOS) and sed: -e expression #1, char 0: no previous regular expression (GNU), because, at the time the 1st line is being processed (due to line number 1 starting the range), no regex has been applied yet, so // doesn't refer to anything. With the exception of GNU sed's special 0,/re/ syntax, any range that starts with a line number effectively precludes use of //. 
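As a concrete application of the GNU `0,/re/` technique above to the original question's include insertion (the sample file name and contents here are made up for the demo):

```shell
# Create a small sample source file (contents are illustrative)
printf '%s\n' '// comment before any include' \
              '#include <vector>' \
              '#include <string>' > sample.cpp

# GNU sed only: the 0,/re/ range ends at the first line containing
# #include, and the empty s// reuses that regex, so only the first
# match in the whole file is changed.
sed -i '0,/#include/ s//#include "newfile.h"\n#include/' sample.cpp

cat sample.cpp
```

Running this leaves the later #include lines untouched; only the first one gains the extra directive above it.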
A: A possible solution:
/#include/!{p;d;}
i\
#include "newfile.h"
:a
n
ba
Explanation: * *read lines until we find the #include, print these lines then start new cycle *insert the new include line *enter a loop that just reads lines (by default sed will also print these lines), we won't get back to the first part of the script from here A: sed '0,/pattern/s/pattern/replacement/' filename this worked for me. example sed '0,/<Menu>/s/<Menu>/<Menu><Menu>Sub menu<\/Menu>/' try.txt > abc.txt Editor's note: both work with GNU sed only. A: I know this is an old post but I had a solution that I used to use: grep -E -m 1 -n 'old' file | sed 's/:.*$//' - | sed 's/$/s\/old\/new\//' - | sed -f - file Basically use grep to print the first occurrence and stop there. Additionally print line number, i.e. 5:line. Pipe that into sed and remove the : and anything after so you are just left with a line number. Pipe that into sed which adds s/.*/replace to the end number, which results in a 1 line script which is piped into the last sed to run as a script on the file. So if regex = #include and replace = blah and the first occurrence grep finds is on line 5 then the data piped to the last sed would be 5s/.*/blah/. Works even if first occurrence is on the first line. A: A sed script that will only replace the first occurrence of "Apple" by "Banana" Example
Input:    Output:
Apple     Banana
Apple     Apple
Orange    Orange
Apple     Apple
This is the simple script: Editor's note: works with GNU sed only. sed '0,/Apple/{s/Apple/Banana/}' input_filename The first two parameters 0 and /Apple/ are the range specifier. The s/Apple/Banana/ is what is executed within that range. So in this case "within the range of the beginning (0) up to the first instance of Apple, replace Apple with Banana." Only the first Apple will be replaced. Background: In traditional sed the range specifier is also "begin here" and "end here" (inclusive).
However the lowest "begin" is the first line (line 1), and if the "end here" is a regex, then it is only attempted to match against on the next line after "begin", so the earliest possible end is line 2. So since range is inclusive, smallest possible range is "2 lines" and smallest starting range is both lines 1 and 2 (i.e. if there's an occurrence on line 1, occurrences on line 2 will also be changed, not desired in this case). GNU sed adds its own extension of allowing specifying start as the "pseudo" line 0 so that the end of the range can be line 1, allowing it a range of "only the first line" if the regex matches the first line. Or a simplified version (an empty RE like // means to re-use the one specified before it, so this is equivalent): sed '0,/Apple/{s//Banana/}' input_filename And the curly braces are optional for the s command, so this is also equivalent: sed '0,/Apple/s//Banana/' input_filename All of these work on GNU sed only. You can also install GNU sed on OS X using homebrew brew install gnu-sed. A: You could use awk to do something similar.. awk '/#include/ && !done { print "#include \"newfile.h\""; done=1;}; 1;' file.c Explanation: /#include/ && !done Runs the action statement between {} when the line matches "#include" and we haven't already processed it. {print "#include \"newfile.h\""; done=1;} This prints #include "newfile.h", we need to escape the quotes. Then we set the done variable to 1, so we don't add more includes. 1; This means "print out the line" - an empty action defaults to print $0, which prints out the whole line. A one liner and easier to understand than sed IMO :-) A: Quite a comprehensive collection of answers on linuxtopia sed FAQ. It also highlights that some answers people provided won't work with non-GNU version of sed, eg sed '0,/RE/s//to_that/' file in non-GNU version will have to be sed -e '1s/RE/to_that/;t' -e '1,/RE/s//to_that/' However, this version won't work with gnu sed. 
Here's a version that works with both: -e '/RE/{s//to_that/;:a' -e '$!N;$!ba' -e '}' ex: sed -e '/Apple/{s//Banana/;:a' -e '$!N;$!ba' -e '}' filename A: I would do this with an awk script:
BEGIN {i=0}
(i==0) && /#include/ {print "#include \"newfile.h\""; i=1}
{print $0}
END {}
then run it with awk: awk -f awkscript headerfile.h > headerfilenew.h might be sloppy, I'm new to this. A: As an alternative suggestion you may want to look at the ed command. man 1 ed
teststr='
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
'
# for in-place file editing use "ed -s file" and replace ",p" with "w"
# cf. http://wiki.bash-hackers.org/howto/edit-ed
cat <<-'EOF' | sed -e 's/^ *//' -e 's/ *$//' | ed -s <(echo "$teststr")
   H
   /# *include/i
   #include "newfile.h"
   .
   ,p
   q
EOF
A: I finally got this to work in a Bash script used to insert a unique timestamp in each item in an RSS feed: sed "1,/====RSSpermalink====/s/====RSSpermalink====/${nowms}/" \ production-feed2.xml.tmp2 > production-feed2.xml.tmp.$counter It changes the first occurrence only. ${nowms} is the time in milliseconds set by a Perl script, $counter is a counter used for loop control within the script, \ allows the command to be continued on the next line. The file is read in and stdout is redirected to a work file. The way I understand it, 1,/====RSSpermalink====/ tells sed when to stop by setting a range limitation, and then s/====RSSpermalink====/${nowms}/ is the familiar sed command to replace the first string with the second. In my case I put the command in double quotation marks because I am using it in a Bash script with variables.
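A runnable sketch of the portable first-line-safe idea discussed above, applied to the question's include task (the \n in the replacement is still a GNU sed feature, so strictly POSIX/BSD sed would need a backslash-newline instead; file names are made up):

```shell
printf '%s\n' '#include <stdio.h>' '#include <stdlib.h>' > t1.c

# Expression 1 handles a match on line 1 and then branches past the rest;
# expression 2 covers the case where the first match is on a later line.
sed -e '1s/#include/#include "newfile.h"\n#include/;t' \
    -e '1,/#include/s//#include "newfile.h"\n#include/' t1.c
```

With the first #include on line 1 (as here), only that line is changed; in a file whose first #include appears later, the second expression does the work.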
A: Using FreeBSD ed and avoiding ed's "no match" error in case there is no include statement in a file to be processed:
teststr='
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
'
# using FreeBSD ed
# to avoid ed's "no match" error, see
# http://codesnippets.joyent.com/posts/show/11917
cat <<-'EOF' | sed -e 's/^ *//' -e 's/ *$//' | ed -s <(echo "$teststr")
   H
   ,g/# *include/u\
   u\
   i\
   #include "newfile.h"\
   .
   ,p
   q
EOF
A: This might work for you (GNU sed): sed -si '/#include/{s//& "newfile.h"\n&/;:a;$!{n;ba}}' file1 file2 file.... or if memory is not a problem: sed -si ':a;$!{N;ba};s/#include/& "newfile.h"\n&/' file1 file2 file... A: If anyone came here to replace a character for the first occurrence in all lines (like myself), use this: sed '/old/s/old/new/1' file
-bash-4.2$ cat file
123a456a789a
12a34a56
a12
-bash-4.2$ sed '/a/s/a/b/1' file
123b456a789a
12b34a56
b12
By changing 1 to 2 for example, you can replace only the second a in each line instead. A:
# sed script to change "foo" to "bar" only on the first occurrence
1{x;s/^/first/;x;}
1,/foo/{x;/first/s///;x;s/foo/bar/;}
#---end of script---
or, if you prefer: Editor's note: works with GNU sed only. sed '0,/foo/s//bar/' file Source A: With GNU sed's -z option you could process the whole file as if it was only one line. That way a s/…/…/ would only replace the first match in the whole file. Remember: s/…/…/ only replaces the first match in each line, but with the -z option sed treats the whole file as a single line. sed -z 's/#include/#include "newfile.h"\n#include/' In the general case you have to rewrite your sed expression since the pattern space now holds the whole file instead of just one line. Some examples: * *s/text.*// can be rewritten as s/text[^\n]*//. [^\n] matches everything except the newline character. [^\n]* will match all symbols after text until a newline is reached. *s/^text// can be rewritten as s/(^|\n)text//. *s/text$// can be rewritten as s/text(\n|$)//.
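A runnable version of the -z idea above (GNU sed only; the sample file name and contents are illustrative):

```shell
printf '%s\n' '#include <vector>' '#include <string>' > sample.h

# With -z the whole file is one "line", so the unflagged s/// naturally
# stops after the first match in the entire file.
sed -z 's/#include/#include "newfile.h"\n#include/' sample.h
```

Note that with -z the output record is NUL-terminated rather than newline-terminated, which matters if you pipe the result into tools that expect plain lines.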
A:
#!/bin/sed -f
1,/^#include/ {
  /^#include/i\
#include "newfile.h"
}
How this script works: For lines between 1 and the first #include (after line 1), if the line starts with #include, then prepend the specified line. However, if the first #include is in line 1, then both line 1 and the next subsequent #include will have the line prepended. If you are using GNU sed, it has an extension where 0,/^#include/ (instead of 1,) will do the right thing. A: Just add the number of occurrence at the end: sed s/#include/#include "newfile.h"\n#include/1 A: The use case can perhaps be that your occurrences are spread throughout your file, but you know your only concern is in the first 10, 20 or 100 lines. Then simply addressing those lines fixes the issue - even if the wording of the OP regards first only. sed '1,10s/#include/#include "newfile.h"\n#include/'
Command: sed -e 0,/'<isTag>false<\/isTag>'/{s/'<isTag>false<\/isTag>'//} -e 's/ *$//' -e '/^$/d' source.txt > output.txt Source file (source.txt)
<xml>
    <testdata>
        <canUseUpdate>true</canUseUpdate>
        <isTag>false</isTag>
        <moduleLocations>
            <module>esa_jee6</module>
            <isTag>false</isTag>
        </moduleLocations>
        <node>
            <isTag>false</isTag>
        </node>
    </testdata>
</xml>
Result file (output.txt)
<xml>
    <testdata>
        <canUseUpdate>true</canUseUpdate>
        <moduleLocations>
            <module>esa_jee6</module>
            <isTag>false</isTag>
        </moduleLocations>
        <node>
            <isTag>false</isTag>
        </node>
    </testdata>
</xml>
ps: it didn't work for me on Solaris SunOS 5.10 (quite old), but it works on Linux 2.6, sed version 4.1.5 A: Nothing new but perhaps a little more concrete answer: sed -rn '0,/foo(bar).*/ s%%\1%p' Example: xwininfo -name unity-launcher produces output like:
xwininfo: Window id: 0x2200003 "unity-launcher"
  Absolute upper-left X:  -2980
  Absolute upper-left Y:  -198
  Relative upper-left X:  0
  Relative upper-left Y:  0
  Width: 2880
  Height: 98
  Depth: 24
  Visual: 0x21
  Visual Class: TrueColor
  Border width: 0
  Class: InputOutput
  Colormap: 0x20 (installed)
  Bit Gravity State: ForgetGravity
  Window Gravity State: NorthWestGravity
  Backing Store State: NotUseful
  Save Under State: no
  Map State: IsViewable
  Override Redirect State: no
  Corners:  +-2980+-198  -2980+-198  -2980-1900  +-2980-1900
  -geometry 2880x98+-2980+-198
Extracting window ID with xwininfo -name unity-launcher|sed -rn '0,/^xwininfo: Window id: (0x[0-9a-fA-F]+).*/ s%%\1%p' produces: 0x2200003 A: POSIXly (also valid in sed), Only one regex used, need memory only for one line (as usual): sed '/\(#include\).*/!b;//{h;s//\1 "newfile.h"/;G};:1;n;b1' Explained:
sed '
/\(#include\).*/!b    # Only one regex used. On lines not matching
                      # the text `#include` **yet**,
                      # branch to end, cause the default print. Re-start.
//{                   # On first line matching previous regex.
h                     # hold the line.
s//\1 "newfile.h"/    # append ` "newfile.h"` to the `#include` matched.
G                     # append a newline.
}                     # end of replacement.
:1                    # Once **one** replacement got done (the first match)
n                     # Loop continually reading a line each time
b1                    # and printing it by default.
'                     # end of sed script.
A: A possible solution here might be to tell the compiler to include the header without it being mentioned in the source files. In GCC there are these options:
-include file
    Process file as if "#include "file"" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the "#include "..."" search chain as normal. If multiple -include options are given, the files are included in the order they appear on the command line.
-imacros file
    Exactly like -include, except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations. All files specified by -imacros are processed before all files specified by -include.
Microsoft's compiler has the /FI (forced include) option. This feature can be handy for some common header, like platform configuration. The Linux kernel's Makefile uses -include for this. A: I needed a solution that would work both on GNU and BSD, and I also knew that the first line would never be the one I'd need to update: sed -e "1,/pattern/s/pattern/replacement/" Trying the // feature to not repeat the pattern did not work for me, hence needing to repeat it. A: sed -e 's/pattern/REPLACEMENT/1' <INPUTFILE A: I will make a suggestion that is not exactly what the original question asks for, but for those who also want to specifically replace perhaps the second occurrence of a match, or any other specifically enumerated regular expression match. Use a python script, and a for loop, call it from a bash script if needed.
Here's what it looked like for me, where I was replacing specific lines containing the string --project:
import re

def replace_models(file_path, pixel_model, obj_model):
    # find your file --project matches
    pattern = re.compile(r'--project.*')
    new_file = ""
    with open(file_path, 'r') as f:
        match = 1
        for line in f:
            # Remove line ending before we do replacement
            line = line.strip()
            # replace first --project line match with pixel
            if match == 1:
                result = re.sub(pattern, "--project='" + pixel_model + "'", line)
            # replace second --project line match with object
            elif match == 2:
                result = re.sub(pattern, "--project='" + obj_model + "'", line)
            else:
                result = line
            # Check that a substitution was actually made
            if result is not line:
                # Add a backslash to the replaced line
                result += " \\"
                print("\nReplaced ", line, " with ", result)
                # Increment number of matches found
                match += 1
            # Add the potentially modified line to our new file
            new_file = new_file + result + "\n"
    # close file / save output
    f.close()
    fout = open(file_path, "w")
    fout.write(new_file)
    fout.close()
{ "language": "en", "url": "https://stackoverflow.com/questions/148451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "306" }
Q: Java 2D Drawing Optimal Performance I'm in the process of writing a Java 2D game. I'm using the built-in Java 2D drawing libraries, drawing on a Graphics2D I acquire from a BufferStrategy from a Canvas in a JFrame (which is sometimes full-screened). The BufferStrategy is double-buffered. Repainting is done actively, via a timer. I'm having some performance issues though, especially on Linux. And Java2D has so very many ways of creating graphics buffers and drawing graphics that I just don't know if I'm doing the right thing. I've been experimenting with graphics2d.getDeviceConfiguration().createCompatibleVolatileImage, which looks promising, but I have no real proof that it's going to be any faster if I switch the drawing code to that. In your experience, what is the fastest way to render 2D graphics onto the screen in Java 1.5+? Note that the game is quite far along, so I don't want to switch to a completely different method of drawing, like OpenGL or a game engine. I basically want to know how to get the fastest way of using a Graphics2D object to draw stuff to the screen. A: A synthesis of the answers to this post, the answers to Consty's, and my own research: What works: * *Use GraphicsConfiguration.createCompatibleImage to create images compatible with what you're drawing on. This is absolutely essential! *Use double-buffered drawing via Canvas.createBufferStrategy. *Use -Dsun.java2d.opengl=True where available to speed up drawing. *Avoid using transforms for scaling. Instead, cache scaled versions of the images you are going to use. *Avoid translucent images! Bitmasked images are fine, but translucency is very expensive in Java2D. In my tests, using these methods, I got a speed increase of 10x - 15x, making proper Java 2D graphics a possibility. A: There's an important one which hasn't been mentioned yet, I think: cache images of everything you can.
If you have an object that is moving across the screen, and its appearance doesn't change much, draw it to an image and then render the image to the screen in a new position each frame. Do that even if the object is a very simple one - you might be surprised at how much time you save. Rendering a bitmap is much faster than rendering primitives. You might also want to look at setting rendering hints explicitly, and turning off things like antialiasing if quality considerations permit. A: I'm having the same issues as you are I think. Check out my post here: Java2D Performance Issues It shows the reason for the performance degradation and how to fix it. It's not guaranteed to work well on all platforms though. You'll see why in the post. A: Here are some tips off the top of my head. If you were more specific about what you were trying to do I may be able to help more. Sounds like a game, but I don't want to assume. Only draw what you need to! Don't blindly call repaint() all of the time, try some of the siblings like repaint(Rect) or repaint(x,y,w,h). Be very careful with alpha blending as it can be an expensive operation when blending images / primitives. Try to prerender / cache as much as possible. If you find yourself drawing a circle the same way, over and over, consider drawing it into a BufferedImage and then just draw the BufferedImage. You're sacrificing memory for speed (typical of games / high perf graphics) Consider using OpenGL, use JOGL or LWJGL. JOGL is more Java-like whereas LWJGL provides more gaming functionality on top of OpenGL access. OpenGL can draw orders of magnitude faster (with proper hardware and drivers) than Swing can. A: I've done some basic drawing applications using Java. I haven't worked on anything too graphic-intensive, but I would recommend that you have a good handle on all the 'repaint' invocations. An extra repaint call on a parent container could double the amount of rendering you're doing.
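The createCompatibleImage advice from the synthesis above can be sketched like this (a minimal, illustrative helper -- the class name is made up, and the demo takes its GraphicsConfiguration from a scratch BufferedImage only so the snippet runs headless; in a real game you would use the screen device's configuration):

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.BufferedImage;

public class SpriteCache {
    /**
     * Copy src into an image whose pixel layout matches gc, so later
     * drawImage() calls become fast blits instead of per-pixel conversions.
     */
    public static BufferedImage toCompatible(BufferedImage src, GraphicsConfiguration gc) {
        BufferedImage dst = gc.createCompatibleImage(
                src.getWidth(), src.getHeight(),
                src.getColorModel().getTransparency());
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        GraphicsConfiguration gc = src.createGraphics().getDeviceConfiguration();
        BufferedImage dst = toCompatible(src, gc);
        System.out.println(dst.getWidth() + "x" + dst.getHeight()); // 64x64
    }
}
```

On screen you would obtain gc via GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice().getDefaultConfiguration(), and convert each sprite once at load time rather than inside the render loop.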
A: I've been watching this question, hoping someone would offer you a better answer than mine. In the meantime, I found the following Sun white paper, which was written after a beta release of JDK 1.4. There are some interesting recommendations here on fine-tuning, including the runtime flags (at the bottom of the article): "Runtime Flag For Solaris and Linux Starting with the Beta 3 release of the SDK, version 1.4, Java 2D stores images in pixmaps by default when DGA is not available, whether you are working in a local or remote display environment. You can override this behavior with the pmoffscreen flag: -Dsun.java2d.pmoffscreen=true/false If you set this flag to true, offscreen pixmap support is enabled even if DGA is available. If you set this flag to false, offscreen pixmap support is disabled. Disabling offscreen pixmap support can solve some rendering problems." A: Make sure you use double buffering: draw first to one big buffer in memory that you then flush to the screen when all drawing is done. A: There are a couple of things you will need to keep in mind. 1) Refreshing: this link shows how to use Swing timers, which might be a good option for calling repaint. Getting repaint figured out (as the previous posters have said) is important so you're not doing too much work. 2) Make sure you're only doing drawing in one thread. Updating the UI from multiple threads can lead to nasty things. 3) Double buffering. This makes the rendering smoother. This site has some more good info for you. A: An alternative if you don't want to go pure Java 2D is to use a game library such as GTGE or JGame (search for them on Google); they offer easy access to graphics and also offer double buffering and much simpler drawing commands.
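To make the prerender-and-cache advice above concrete, here is a minimal, hedged sketch (the class and method names are invented for illustration; a real game would prefer an image created via GraphicsConfiguration.createCompatibleImage, as the synthesis answer recommends, so it matches the screen's pixel format):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/** Sketch of the "prerender and cache" tip: render an expensive shape once,
 *  then blit the cached image each frame instead of re-rendering primitives. */
class SpriteCache {
    private final BufferedImage sprite;

    SpriteCache(int size) {
        // A plain BufferedImage keeps this sketch headless-friendly; in a game,
        // GraphicsConfiguration.createCompatibleImage() gives faster blits.
        sprite = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = sprite.createGraphics();
        g.setColor(Color.RED);
        g.fillOval(0, 0, size, size); // rendered once, reused every frame
        g.dispose();
    }

    /** Called once per frame: a plain image copy, much cheaper than fillOval. */
    void draw(Graphics2D target, int x, int y) {
        target.drawImage(sprite, x, y, null);
    }

    BufferedImage image() { return sprite; }
}
```

In the game loop you would call draw(g, x, y) with the new position each frame, which is exactly the "draw it to an image and render it in a new position" idea from the answer above.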
{ "language": "en", "url": "https://stackoverflow.com/questions/148478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Advantages and drawbacks of chainable methods? I really love the philosophy of chaining methods, as jQuery emphasizes in its library. I find it quite elegant and clear. Being primarily a Java developer, I've always wondered why this practice is not more used in this language. For example, the Collection interface was not designed that way (for adding/removing methods), and I find that quite sad. Are there real cons to this practice, or is it just something that hasn't had enough "sex appeal" so far? A: IMO it is painful to debug as you tend to have no intermediary variables for inspection. A: There's a common problem with chainable methods and inheritance. Assume you have a class C whose methods F1(), F2(), etc. return a C. When you derive class D from C, you want the methods F1, F2, etc. to now return a D so that D's chainable methods can be called anywhere in the chain. A: I really like this approach as well. The only downside I can think of is that it seems a bit awkward at times to 'return this' at the end of every method. For jQuery, for example, it makes it slightly awkward to allow plugins, because you have to say "make sure you don't forget your returns!!" but there's no good way to catch it at compile time. A: The only con is you lose the return type, so chaining is good for operations that do things, but not good for operations that calculate things. Another issue is that with chaining the compiler can't as easily determine trivial function calls for inlining. But as I said, if your chaining does operations, and not calculations, then it's most likely that the compiler wouldn't change anything anyway. A: JavaScript is (more or less) a functional language, with functions as first-class citizens. Adding/removing methods to objects, passing functions as parameters, all this is natural for this language. On the other hand, Java is strictly OO; a function cannot exist outside of a class.
Using inheritance, composition and interfaces is a more natural way for this language. A: Chainable methods are another great tool in our design toolbag. Just make sure you don't run into the common "I have a hammer, therefore every problem is a nail" design mess. Not every design problem is solved by chainable methods. Sometimes it does help make an interface easier to use (i.e. the collection issue you mentioned). Sometimes it doesn't. The trick is figuring out which case applies. A: Martin Fowler discusses this topic as 'fluent interfaces' at http://www.martinfowler.com/bliki/FluentInterface.html. One main issue is that fluent interfaces are designed for humans, so frameworks like Spring are not able to understand them. Put simply, using a fluent interface provides maintainability in one sense (readability) but loses maintainability in another (flexibility).
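The inheritance problem described in one of the answers (a base class whose chainable methods return the base type C instead of the derived D) has a well-known workaround in Java using a self-referential type parameter. A hedged sketch — the class names here are invented for illustration:

```java
/** Base class whose chainable methods return the *derived* type,
 *  using the "self type" generics idiom. */
class Builder<SELF extends Builder<SELF>> {
    protected final StringBuilder out = new StringBuilder();

    @SuppressWarnings("unchecked")
    protected SELF self() { return (SELF) this; }

    public SELF add(String s) {   // returns SELF, not Builder
        out.append(s);
        return self();
    }

    public String build() { return out.toString(); }
}

class HtmlBuilder extends Builder<HtmlBuilder> {
    // A subclass-specific chain step; because add() returns SELF,
    // it can be called anywhere in the chain without losing the type.
    public HtmlBuilder bold(String s) {
        return add("<b>").add(s).add("</b>");
    }
}
```

With this, new HtmlBuilder().add("Hi ").bold("there").build() compiles and chains freely, because add() is statically typed as returning HtmlBuilder rather than Builder.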
{ "language": "en", "url": "https://stackoverflow.com/questions/148493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: JQuery Flickr file upload not working I am trying to upload a file to Flickr using jQuery. I have a form (which works if I don't use jQuery) which I am submitting using the Form Plugin. My code is as follows: <html> <head> <title>Test Upload</title> <script type="text/javascript" src="jquery-1.2.6.js"></script> <script type="text/javascript" src="jquery.form.js"></script> <script type="text/javascript"> $(document).ready(function() { $('#myForm').bind('submit', function() { $(this).ajaxSubmit({ dataType: 'xml', success: processXml }); return false; // <-- important! }); }); function processXml(responseXML) { var message = $('message', responseXML).text(); document.getElementById('output').innerHTML = message; } </script> </head> <body> <form id="myForm" method="post" action="http://api.flickr.com/services/upload/" enctype="multipart/form-data"> <input type="file" name="photo" id="photo"/> <input type="text" name="api_key" id="api_key" value="..snip.."/> <input type="text" name="auth_token" id="auth_token" value="..snip.."/> <input type="text" name="api_sig" id="api_sig" value="..snip.."/> <input type="submit" value="Upload"/> </form> <div id="output">AJAX response will replace this content.</div> </body> </html> The problem is I get the following text as a response: <rsp stat="fail"> <err code="100" msg="Invalid API Key (Key not found)" /> </rsp> even though the file uploads with no problems. This means my div is not updated as it doesn't run the success function. Anyone have any ideas? Thanks A: See this other thread about uploading files with AJAX: How can I upload files asynchronously? I've never tried it, but it seems that you can't get the server response (not easily, anyway). A: You will not be able to upload a file via AJAX this way. A pure AJAX file upload system is not possible because of security limitations of JavaScript. A: I see that you're using ajaxSubmit. That's the jQuery Form Plugin, right?
Is it possible that the issue is something with that? Have you tried using jQuery.post instead? A: AJAX does not work cross-domain. You cannot submit a form using AJAX from one domain to another domain. A: What you can do is use a proxy.php file on your domain. Submit the form using AJAX to proxy.php. The code in your proxy.php will submit the form using cURL to Flickr. You'll get the cURL code on php.net or many other sites.
{ "language": "en", "url": "https://stackoverflow.com/questions/148503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Limiting range of value types in C++ Suppose I have a LimitedValue class which holds a value, and is parameterized on int types 'min' and 'max'. You'd use it as a container for holding values which can only be in a certain range. You could use it like this: LimitedValue< float, 0, 360 > someAngle( 45.0 ); someTrigFunction( someAngle ); so that 'someTrigFunction' knows that it is guaranteed to be supplied a valid input (the constructor would throw an exception if the parameter is invalid). Copy-construction and assignment are limited to exactly equal types, though. I'd like to be able to do: LimitedValue< float, 0, 90 > smallAngle( 45.0 ); LimitedValue< float, 0, 360 > anyAngle( smallAngle ); and have that operation checked at compile-time, so this next example gives an error: LimitedValue< float, -90, 0 > negativeAngle( -45.0 ); LimitedValue< float, 0, 360 > positiveAngle( negativeAngle ); // ERROR! Is this possible? Is there some practical way of doing this, or any examples out there which approach this? A: The Boost Constrained Value library(1) allows you to add constraints to data types. But you have to read the advice "Why C++'s floating point types shouldn't be used with bounded objects?" when you'd like to use it with float types (as illustrated in your example). (1) The Boost Constrained Value library is not an official Boost library yet. A: The bounded::integer library does what you want (for integer types only). http://doublewise.net/c++/bounded/ (In the interests of full disclosure, I am the author of this library) It differs from other libraries that attempt to provide "safe integers" in a significant way: it tracks integer bounds. I think this is best shown by example: auto x = bounded::checked_integer<0, 7>(f()); auto y = 7_bi; auto z = x + y; // decltype(z) == bounded::checked_integer<7, 14> static_assert(z >= 7_bi); static_assert(z <= 14_bi); x is an integer type that is between 0 and 7. y is an integer type between 7 and 7.
z is an integer type between 7 and 14. All of this information is known at compile time, which is why we are able to static_assert on it, even though the value of z is not a compile-time constant. z = 10_bi; z = x; static_assert(!std::is_assignable<decltype((z)), decltype(0_bi)>::value); The first assignment, z = 10_bi, is unchecked. This is because the compiler can prove that 10 falls within the range of z. The second assignment, z = x, checks that the value of x is within the range of z. If not, it throws an exception (the exact behavior depends on the type of integer you use, there are many policies of what to do). The third line, the static_assert, shows that it is a compile-time error to assign from a type that has no overlap at all. The compiler already knows this is an error and stops you. The library does not implicitly convert to the underlying type, as this can cause many situations where you try to prevent something but it happens due to conversions. It does allow explicit conversion. A: This is actually a complex matter and I have tackled it for a while... Now I have a publicly available library that will allow you to limit floating points and integers in your code so you can make more sure that they are valid at all time. Not only that you can turn off the limits in your final release version and that means the types pretty much become the same as a typedef. Define your type as: typedef controlled_vars::limited_fauto_init<float, 0, 360> angle_t; And when you don't define the CONTROLLED_VARS_DEBUG and CONTROLLED_VARS_LIMITED flags, you get pretty much the same as this: typedef float angle_t; These classes are generated so they include all the necessary operators for you to not suffer too much when using them. That means you can see your angle_t nearly as a float. angle_t a; a += 35; Will work as expected (and throw if a + 35 > 360). http://snapwebsites.org/project/controlled-vars I know this was posted in 2008... 
but I don't see any good link to a top library that offers this functionality!? As a side note for those who want to use this library, I've noticed that in some cases the library will silently resize values (i.e. float a; double b; a = b; and int c; long d; c = d;) and that can cause all sorts of issues in your code. Be careful using the library. A: I wrote a C++ class that imitates the functionality of Ada's range. It is based on templates, similar to the solutions provided here. If something like this is to be used in a real project, it will be used in a very fundamental way. Subtle bugs or misunderstandings can be disastrous. Therefore, although it is a small library without a lot of code, in my opinion provision of unit tests and clear design philosophy are very important. Feel free to try it and please tell me if you find any problems. https://github.com/alkhimey/ConstrainedTypes http://www.nihamkin.com/2014/09/05/range-constrained-types-in-c++/ A: OK, this is C++11 with no Boost dependencies. Everything guaranteed by the type system is checked at compile time, and anything else throws an exception. I've added unsafe_bounded_cast for conversions that may throw, and safe_bounded_cast for explicit conversions that are statically correct (this is redundant since the copy constructor handles it, but provided for symmetry and expressiveness). 
Example Use #include "bounded.hpp" int main() { BoundedValue<int, 0, 5> inner(1); BoundedValue<double, 0, 4> outer(2.3); BoundedValue<double, -1, +1> overlap(0.0); inner = outer; // ok: [0,4] contained in [0,5] // overlap = inner; // ^ error: static assertion failed: "conversion disallowed from BoundedValue with higher max" // overlap = safe_bounded_cast<double, -1, +1>(inner); // ^ error: static assertion failed: "conversion disallowed from BoundedValue with higher max" overlap = unsafe_bounded_cast<double, -1, +1>(inner); // ^ compiles but throws: // terminate called after throwing an instance of 'BoundedValueException<int>' // what(): BoundedValueException: !(-1<=2<=1) - BOUNDED_VALUE_ASSERT at bounded.hpp:56 // Aborted inner = 0; overlap = unsafe_bounded_cast<double, -1, +1>(inner); // ^ ok inner = 7; // terminate called after throwing an instance of 'BoundedValueException<int>' // what(): BoundedValueException: !(0<=7<=5) - BOUNDED_VALUE_ASSERT at bounded.hpp:75 // Aborted } Exception Support This is a bit boilerplate-y, but gives fairly readable exception messages as above (the actual min/max/value are exposed as well, if you choose to catch the derived exception type and can do something useful with it). 
#include <stdexcept> #include <sstream> #define STRINGIZE(x) #x #define STRINGIFY(x) STRINGIZE( x ) // handling for runtime value errors #define BOUNDED_VALUE_ASSERT(MIN, MAX, VAL) \ if ((VAL) < (MIN) || (VAL) > (MAX)) { \ bounded_value_assert_helper(MIN, MAX, VAL, \ "BOUNDED_VALUE_ASSERT at " \ __FILE__ ":" STRINGIFY(__LINE__)); \ } template <typename T> struct BoundedValueException: public std::range_error { virtual ~BoundedValueException() throw() {} BoundedValueException() = delete; BoundedValueException(BoundedValueException const &other) = default; BoundedValueException(BoundedValueException &&source) = default; BoundedValueException(int min, int max, T val, std::string const& message) : std::range_error(message), minval_(min), maxval_(max), val_(val) { } int const minval_; int const maxval_; T const val_; }; template <typename T> void bounded_value_assert_helper(int min, int max, T val, char const *message = NULL) { std::ostringstream oss; oss << "BoundedValueException: !(" << min << "<=" << val << "<=" << max << ")"; if (message) { oss << " - " << message; } throw BoundedValueException<T>(min, max, val, oss.str()); } Value Class template <typename T, int Tmin, int Tmax> class BoundedValue { public: typedef T value_type; enum { min_value=Tmin, max_value=Tmax }; typedef BoundedValue<value_type, min_value, max_value> SelfType; // runtime checking constructor: explicit BoundedValue(T runtime_value) : val_(runtime_value) { BOUNDED_VALUE_ASSERT(min_value, max_value, runtime_value); } // compile-time checked constructors: BoundedValue(SelfType const& other) : val_(other) {} BoundedValue(SelfType &&other) : val_(other) {} template <typename otherT, int otherTmin, int otherTmax> BoundedValue(BoundedValue<otherT, otherTmin, otherTmax> const &other) : val_(other) // will just fail if T, otherT not convertible { static_assert(otherTmin >= Tmin, "conversion disallowed from BoundedValue with lower min"); static_assert(otherTmax <= Tmax, "conversion disallowed from 
BoundedValue with higher max"); } // compile-time checked assignments: BoundedValue& operator= (SelfType const& other) { val_ = other.val_; return *this; } template <typename otherT, int otherTmin, int otherTmax> BoundedValue& operator= (BoundedValue<otherT, otherTmin, otherTmax> const &other) { static_assert(otherTmin >= Tmin, "conversion disallowed from BoundedValue with lower min"); static_assert(otherTmax <= Tmax, "conversion disallowed from BoundedValue with higher max"); val_ = other; // will just fail if T, otherT not convertible return *this; } // run-time checked assignment: BoundedValue& operator= (T const& val) { BOUNDED_VALUE_ASSERT(min_value, max_value, val); val_ = val; return *this; } operator T const& () const { return val_; } private: value_type val_; }; Cast Support template <typename dstT, int dstMin, int dstMax> struct BoundedCastHelper { typedef BoundedValue<dstT, dstMin, dstMax> return_type; // conversion is checked statically, and always succeeds template <typename srcT, int srcMin, int srcMax> static return_type convert(BoundedValue<srcT, srcMin, srcMax> const& source) { return return_type(source); } // conversion is checked dynamically, and could throw template <typename srcT, int srcMin, int srcMax> static return_type coerce(BoundedValue<srcT, srcMin, srcMax> const& source) { return return_type(static_cast<srcT>(source)); } }; template <typename dstT, int dstMin, int dstMax, typename srcT, int srcMin, int srcMax> auto safe_bounded_cast(BoundedValue<srcT, srcMin, srcMax> const& source) -> BoundedValue<dstT, dstMin, dstMax> { return BoundedCastHelper<dstT, dstMin, dstMax>::convert(source); } template <typename dstT, int dstMin, int dstMax, typename srcT, int srcMin, int srcMax> auto unsafe_bounded_cast(BoundedValue<srcT, srcMin, srcMax> const& source) -> BoundedValue<dstT, dstMin, dstMax> { return BoundedCastHelper<dstT, dstMin, dstMax>::coerce(source); } A: You can do this using templates -- try something like this: template< typename T, 
int min, int max >class LimitedValue { template< int min2, int max2 >LimitedValue( const LimitedValue< T, min2, max2 > &other ) { static_assert( min <= min2, "Parameter minimum must be >= this minimum" ); static_assert( max >= max2, "Parameter maximum must be <= this maximum" ); // logic } // rest of code }; A: At the moment, that is impossible in a portable manner due to the C++ rules on how methods (and by extension, constructors) are called even with constant arguments. In the C++0x standard, you could have a const-expr that would allow such an error to be produced though. (This is assuming you want it to throw an error only if the actual value is illegal. If the ranges do not match, you can achieve this) A: One thing to remember about templates is that each invocation of a unique set of template parameters will wind up generating a "unique" class for which comparisons and assignments will generate a compile error. There may be some meta-programming gurus that might know how to work around this but I am not one of them. My approach would be to implement these in a class with run-time checks and overloaded comparison and assignment operators. A: I'd like to offer an alternate version for Kasprzol's solution: The proposed approach always uses bounds of type int. You can get some more flexibility and type safety with an implementation such as this: template<typename T, T min, T max> class Bounded { private: T _value; public: Bounded(T value) : _value(min) { if (value <= max && value >= min) { _value = value; } else { // XXX throw your runtime error/exception... 
} } Bounded(const Bounded<T, min, max>& b) : _value(b._value){ } }; This will allow the type checker to catch obvious miss assignments such as: Bounded<int, 1, 5> b1(1); Bounded<int, 1, 4> b2(b1); // <-- won't compile: type mismatch However, the more advanced relationships where you want to check whether the range of one template instance is included within the range of another instance cannot be expressed in the C++ template mechanism. Every Bounded specification becomes a new type. Thus the compiler can check for type mismatches. It cannot check for more advanced relationships that might exist for those types.
{ "language": "en", "url": "https://stackoverflow.com/questions/148511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Does nmake have build tasks? Using Ant I could unzip an archive before proceeding with the build per se ... Is this possible using nmake? Could I call an external application? Or even a batch script? A: Any variant of make has the ability to perform any task that can be done from the command line. Indeed, most of the build functionality of any makefile is going to depend upon the invocation of external processes such as the compiler, linker, librarian, etc. The only downside to make is that there are so many variations of syntax (nmake, Borland make, GNU make, etc.) that it is practically impossible to write a single cross-platform makefile. In answer to your particular question, consider the following: main.cpp: archive.zip unzip archive.zip This basically states that main.cpp depends upon archive.zip and states that this dependency can be satisfied by invoking the "unzip" command. A: You can call an external application from nmake Makefiles, just as from any other Makefile. However, what to call? You'll need to have WinZip command line tools or something installed, right? I'd recommend looking at SCons. It is a wonderful build engine, fully supports Windows and MSVC++, and has unzipping built in.
{ "language": "en", "url": "https://stackoverflow.com/questions/148513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to Regex search/replace only first occurrence in a string in .NET? It seems the .NET Regex.Replace method automatically replaces all matching occurrences. I could provide a MatchEvaluator delegate that returns the matched string after the first replacement, rendering no change, but that sounds very inefficient to me. What is the most efficient way to stop after the first replacement? A: You were probably using the static method. There is no (String, String, Int32) overload for that. Construct a regex object first and use myRegex.Replace. A: From MSDN: Replace(String, String, Int32) Within a specified input string, replaces a specified maximum number of strings that match a regular expression pattern with a specified replacement string. Isn't this what you want? A: In that case you can't use: string str = "abc546_$defg"; str = Regex.Replace(str, "[^A-Za-z0-9]", ""); Instead you need to declare a new Regex instance and use it like this: string str = "abc546_$defg"; Regex regx = new Regex("[^A-Za-z0-9]"); str = regx.Replace(str, "", 1); Notice the 1; it represents the maximum number of replacements that should occur. A: Just to answer the original question... The following regex matches only the first instance of the word foo: (?<!foo.*)foo This regex uses a negative lookbehind (?<!) to ensure no instance of foo is found prior to the one being matched.
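As an aside for readers outside .NET: the same stop-after-one-replacement behaviour is built directly into Java's regex API via replaceFirst. A small illustrative sketch (not taken from the answers above, which are all .NET-specific):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Java analogue of Regex.Replace(input, pattern, repl, 1):
 *  replace only the first match, leaving later matches untouched. */
class FirstOnly {
    static String replaceFirst(String input, String regex, String repl) {
        Matcher m = Pattern.compile(regex).matcher(input);
        return m.replaceFirst(repl); // stops after one replacement
    }
}
```

Applied to the example string from the answer above, replaceFirst("abc546_$defg", "[^A-Za-z0-9]", "") strips only the first non-alphanumeric character (the underscore) and leaves the "$" alone.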
{ "language": "en", "url": "https://stackoverflow.com/questions/148518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Entlib versus ACA.Net - Does ACA.Net provide any advantage? My understanding is that Entlib has picked-up and included concepts from ACA.Net. Is there any point to using ACA.Net on a new .net project? A: We are actively using ACA.NET 4.1 where I work. ACA.NET actually uses EntLib at its core, and over the years Avanade have "retired" parts of their framework as EntLib functionality catches up. One thing which EntLib doesn't do, which ACA.NET does well is its use of Aspects over a machine boundary. I know EntLib has policy injection, but this works by manipulating the instantiation of a local object (i.e. the service). If you want to protect your remote service with an Authorization Aspect, then an ACA.NET Aspect declared as a ReceiversOnly container will ensure the service is protected where the service runs. If you are across physical layers on any of these service calls, ACA.NET will do the job, EntLib doesn't cut it just yet. If your application doesn't need to be deployed to multiple physical tiers, then this advantage of ACA.NET disappears and you can fall back to use EntLib only.
{ "language": "en", "url": "https://stackoverflow.com/questions/148520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: NoClassDefFoundError with a long classname on Tomcat with java 1.4.2_07-b05 I have a java class: it.eng.ancona.view.RuoliView$TabElaborazioneFattureValidazione$ElencoDettaglioElaborazioneFattureValidazione$RigaElencoDettaglioElaborazioneFattureValidazione It's so long because of multiple inner classes. If I use 1.4.2_07-b05 in Eclipse and I call this class, all goes fine. If I use 1.4.2_07-b05 on Tomcat 5.0 it throws NoClassDefFoundError. I tried shortening the class name, and after this all works fine. I've searched the internet and found that the max length for a class name is 65000, so the length should be OK. And in Eclipse everything works. The OS is Vista. Does anyone know if it's a bug or something else? A: This could be caused by the maximum path length of Windows. Try moving your Tomcat server to something like C:\TC to see if you still have a problem. Also check if the jar that this class should be in actually does have it. A: Isn't this more a classpath problem? Within Eclipse it is fairly easy to get a correct classpath, since it manages its own build directory. Is the class in your WAR (or autodeploy folder, or whatever you use to deploy to Tomcat)?
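To test the classpath theory from the answers, one quick check is to probe for the class by name at runtime from within the deployed webapp. A small sketch (the helper class is invented for illustration):

```java
/** Probe whether a class can be located on the current classpath,
 *  without running its static initializers. */
class ClasspathProbe {
    static boolean isLoadable(String className) {
        try {
            // initialize=false: just locate the class, don't initialize it
            Class.forName(className, false, ClasspathProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false; // the class itself is missing from the classpath
        } catch (NoClassDefFoundError e) {
            return false; // the class was found, but one of its dependencies is missing
        }
    }
}
```

Running this inside Tomcat with the full inner-class name would tell you whether the class file is genuinely missing from the deployed WAR (the classpath theory) or present but failing to load for another reason, such as the Windows path-length limit mentioned above.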
{ "language": "en", "url": "https://stackoverflow.com/questions/148530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating my own Iterators I'm trying to learn C++ so forgive me if this question demonstrates a lack of basic knowledge, you see, the fact is, I have a lack of basic knowledge. I want some help working out how to create an iterator for a class I have created. I have a class 'Shape' which has a container of Points. I have a class 'Piece' which references a Shape and defines a position for the Shape. Piece does not have a Shape; it just references a Shape. I want it to seem like Piece is a container of Points which are the same as those of the Shape it references but with the offset of the Piece's position added. I want to be able to iterate through the Piece's Points just as if Piece was a container itself. I've done a little reading around and haven't found anything which has helped me. I would be very grateful for any pointers. A: /EDIT: I see, a custom iterator is actually necessary here (I misread the question first). Still, I'm letting the code below stand because it can be useful in similar circumstances. Is a custom iterator actually necessary here? Perhaps it's sufficient to forward all required definitions to the container holding the actual Points: // Your class `Piece` class Piece { private: Shape m_shape; public: typedef std::vector<Point>::iterator iterator; typedef std::vector<Point>::const_iterator const_iterator; iterator begin() { return m_shape.container.begin(); } const_iterator begin() const { return m_shape.container.begin(); } iterator end() { return m_shape.container.end(); } const_iterator end() const { return m_shape.container.end(); } }; This is assuming you're using a vector internally but the type can easily be adapted. A: You should use Boost.Iterators. It contains a number of templates and concepts to implement new iterators and adapters for existing iterators. I have written an article about this very topic; it's in the December 2008 ACCU magazine.
It discusses an (IMO) elegant solution for exactly your problem: exposing member collections from an object, using Boost.Iterators. If you want to use the stl only, the Josuttis book has a chapter on implementing your own STL iterators. A: Writing custom iterators in C++ can be quite verbose and complex to understand. Since I could not find a minimal way to write a custom iterator I wrote this template header that might help. For example, to make the Piece class iterable: #include <iostream> #include <vector> #include "iterator_tpl.h" struct Point { int x; int y; Point() {} Point(int x, int y) : x(x), y(y) {} Point operator+(Point other) const { other.x += x; other.y += y; return other; } }; struct Shape { std::vector<Point> vec; }; struct Piece { Shape& shape; Point offset; Piece(Shape& shape, int x, int y) : shape(shape), offset(x,y) {} struct it_state { int pos; inline void next(const Piece* ref) { ++pos; } inline void begin(const Piece* ref) { pos = 0; } inline void end(const Piece* ref) { pos = ref->shape.vec.size(); } inline Point get(Piece* ref) { return ref->offset + ref->shape.vec[pos]; } inline bool equal(const it_state& s) const { return pos == s.pos; } }; SETUP_ITERATORS(Piece, Point, it_state); }; Then you would be able to use it as a normal STL Container: int main() { Shape shape; shape.vec.emplace_back(1,2); shape.vec.emplace_back(2,3); shape.vec.emplace_back(3,4); Piece piece(shape, 1, 1); for (Point p : piece) { std::cout << p.x << " " << p.y << std::endl; // Output: // 2 3 // 3 4 // 4 5 } return 0; } It also allows for adding other types of iterators like const_iterator or reverse_const_iterator. I hope it helps. A: Here Designing a STL like Custom Container is an excellent article which explains some of the basic concepts of how an STL like container class can be designed along with the iterator class for it. 
The reverse iterator (a little tougher) though is left as an exercise :-) HTH, A: You can read this DDJ article. Basically, inherit from std::iterator to get most of the work done for you. A: The solution to your problem is not the creation of your own iterators, but the use of existing STL containers and iterators. Store the points in each shape in a container like vector. class Shape { private: vector <Point> points; What you do from then on depends on your design. The best approach is to iterate through points in methods inside Shape. for (vector <Point>::iterator i = points.begin(); i != points.end(); ++i) /* ... */ If you need to access points outside Shape (this could be a mark of a deficient design) you can create methods in Shape that return iterators for points (in that case also create a public typedef for the points container). Look at the answer by Konrad Rudolph for details of this approach.
{ "language": "en", "url": "https://stackoverflow.com/questions/148540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "146" }
Q: Access a 2nd project in the same solution with MSBuild I'm new to MSBuild, and am learning as I need to know how to do things. Currently, I am working from the MSBuild file that is generated from the Web Deployment Project extension for Visual Studio. I have been able to access and manipulate files which are directly in my Web project by creating properties from this block of XML: <PropertyGroup> <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform> <ProductVersion>9.0.21022</ProductVersion> <SchemaVersion>2.0</SchemaVersion> <ProjectGuid>{0B9F9B60-7AD7-49F0-A168-9D4D29FB1A21}</ProjectGuid> <SourceWebPhysicalPath>..\ARP_FORMS</SourceWebPhysicalPath> <SourceWebProject>{7FCA4A38-0FEE-4D46-82EF-AD0089F9CAA2}|ARP_FORMS\ARP_FORMS.csproj</SourceWebProject> <SourceWebVirtualPath>/ARP_FORMS.csproj</SourceWebVirtualPath> <TargetFrameworkVersion>v3.5</TargetFrameworkVersion> </PropertyGroup> I need to create properties to do the same thing to manipulate other files from additional projects in my solution. Can anyone point me to the proper syntax for this? A: Sayed Ibrahim Hashimi answers this question very well, and he includes sample source code. Basically, you create an MSBuild project that executes other MSBuild projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/148556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Storing database data in files? I'm currently working on a school project, in Java, and I'm coding a database application. Something like the MySQL Monitor, where you type in queries and get results / whatever. In applications I've coded before, I used databases to store data, like user profiles, settings, etc. Now, obviously, I can't use a database to store data generated from this school project, otherwise what's the point? I'm thinking about storing the data in files, but that's the only idea I have in my mind right now and I'm kinda running dry... and to be honest, I don't want to start banging at code and then discover a better way of doing it. So if anyone has any idea how to store the data (like CSV?), or has some kind of knowledge of how database applications work internally, can you please shed some light? -- EDIT: just to be more clear, I can't use database engines to store the data; to put it this way, I'm coding a simple database engine. Ideas like what Galwegian, jkramer and Joe Skora suggested are what I'm looking for. A: Sure, you could create your own database with a file system, since that is how actual databases are implemented. For example, you could store your data in fixed- or variable-length raw data files, and then create a separate index file with file pointers into that data file for quick indexed access, based on what index information you want stored in your index file. So yes, look at creating 2 files - 1 to store the data and the other to store file pointers into that file, keyed by whatever indexes you want to provide quick access by. Best of luck - you will come to learn a lot about database construction with this project, I am betting. A: What you probably want to use are random access files. Once you have a set of fields for a record, you can write them to disk as a block. You can keep an index separately, on disk or in memory, and access any record directly at any time.
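Since the project is in Java, that block-per-record idea can be sketched with java.io.RandomAccessFile (the record layout and class name below are my own illustration, not from the answer): fixed-size records mean any record's position is just recordNumber * RECORD_SIZE.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: fixed-length records (two ints + one long = 16 bytes each),
// addressed by recordNumber * RECORD_SIZE. All names are illustrative.
class RecordFile {
    static final int RECORD_SIZE = 16;
    private final RandomAccessFile file;

    RecordFile(String path) throws IOException {
        // "rw" creates the file if it does not yet exist
        file = new RandomAccessFile(path, "rw");
    }

    void write(int recordNumber, int a, int b, long c) throws IOException {
        file.seek((long) recordNumber * RECORD_SIZE); // jump straight to the slot
        file.writeInt(a);
        file.writeInt(b);
        file.writeLong(c);
    }

    long readC(int recordNumber) throws IOException {
        file.seek((long) recordNumber * RECORD_SIZE + 8); // skip the two ints
        return file.readLong();
    }

    void close() throws IOException { file.close(); }
}
```

An in-memory or on-disk index would then only need to store record numbers, since the offset arithmetic recovers the byte position.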
Hopefully that gives you enough to get started. A: I am not sure I understand your requirement, but wouldn't 'SQLite' work for you (though it is still a database engine, which is what you may be avoiding in the first place, so I am not so sure)? A: I would create a database that uses binary tables, one file per table. Take a look at the very handy DataInputStream and DataOutputStream classes. Using them you can easily go back and forth from binary files to Java types. I would define a simple structure for the table: a header that describes the contents of the table, followed by the row data. Have each column in the table defined in the header - its name, data type, and maximum length. Keep it simple. Only handle a few data types using the capabilities of DataInput/OutputStream as your guide. Use a simple file-naming convention to associate table names to file names. Create a test table with enough columns to have at least one of each data type. Then, create a simple way to populate tables with data, either by processing input files or via console input. Finally, create a simple way to display the contents of entire tables to the console. After that, you can add on a very simple version of a SQL-like dialect to do queries. A simple query like this: SELECT * FROM EMPLOYEES ...would require opening up the file containing the EMPLOYEES table (via your table filename naming convention), parsing the header, and reading through the entire table, returning the contents. After you get that working, it will be simple to add other functionality such as processing of simple WHERE clauses, returning only the rows (or columns within rows) that match certain criteria. If it's not necessary to have such a general-purpose solution (any number of tables, any number of columns, an actual query language, etc.) you can simply add methods to your API like: Employee[] result = EmployeeDataManager.select("LASTNAME", "Smith"); ...or something like that. 
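A minimal sketch of that header-plus-rows layout (the two-column schema and all names below are my own illustration) using DataOutputStream and DataInputStream might look like this:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the binary-table idea: a header naming the columns, then the rows.
// The (int id, String name) schema is invented for illustration.
class BinaryTable {
    static void write(File f, List<Object[]> rows) throws IOException {
        DataOutputStream out = new DataOutputStream(new FileOutputStream(f));
        try {
            out.writeInt(2);            // header: column count
            out.writeUTF("ID");         // header: column names
            out.writeUTF("NAME");
            out.writeInt(rows.size());  // row count
            for (Object[] row : rows) {
                out.writeInt((Integer) row[0]);
                out.writeUTF((String) row[1]);
            }
        } finally {
            out.close();
        }
    }

    static List<Object[]> read(File f) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(f));
        try {
            int cols = in.readInt();
            for (int i = 0; i < cols; i++) in.readUTF(); // skip header names
            int n = in.readInt();
            List<Object[]> rows = new ArrayList<Object[]>();
            for (int i = 0; i < n; i++)
                rows.add(new Object[] { in.readInt(), in.readUTF() });
            return rows;
        } finally {
            in.close();
        }
    }
}
```

A full SELECT over such a table is then just read-and-filter; WHERE handling can drop rows during the read loop.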
If you build up slowly, dividing your functionality up into several small tasks as I have suggested, soon you will have implemented all of the features you need. A: I suppose you could do a very simple proof-of-principle 'database' application using XML files and maybe use XPath to query it. Would be very slow compared to a database (depending on file size and hardware of course), but would work. A: The basics of storing records in blocks in data files have been around for decades. Obviously there are a great many variations on a theme, and all of them are designed to work around the fact that we have slow disk drives. But the fundamentals are not difficult. Combining fixed-length columns with a fixed number of columns can give you very rapid access to any record in your database. From there, it's all offsets. Let's take the example of a simple row containing 10 32-bit integers. A single row would be 40 bytes (4 bytes per integer * 10). If you want row 123, simply multiply it by 40. 123 * 40 gives you an offset of 4920. Seek that far into the database file, read 40 bytes, and voila, you have a row from your database. Indexes are stored in B+-trees, with tree nodes distributed across blocks on the disk. The power of the B+-tree is that you can easily find a single key value within the tree, and then simply walk the leaf nodes to scroll through the data in key order. For a simple format that's useful and popular, consider looking up the original dBASE format -- DBF files. It's evolved some over the years, but the foundation is quite simple, well documented, and there are lots of utilities that can work on it. It's a perfectly workable database format that deals with all of the fundamental issues of the problem. A: If you're using C#, you might consider writing a simple LINQ to XML type ORM. A: You could use a serialization format like YAML, and store an array of hashes, where each hash is a table record and the keys in each hash are column names.
You could then just load the serialized file into memory, work with arrays and hashes, and then store everything back. I hope that's what you meant. A: Can't you use a file based database like hsqldb to store your user settings etc.? This way you have a familiar interface to your data and are able to store it in the filesystem. A: StackOverflow isn't for homework. Having said that, here's the Quick and Dirty way to an efficient, flexible database. * *Design a nice Map (HashMap, TreeMap, whatever) that does what you want to do. Often, you'll have a "Record" class with your data, and a number of "Index" objects which are effectively Map<String,List<Record>> collections. (Why a list of records? What about an index on a not-very-selective field?) *Write a class to serialize your collections into files. *Write a class to deserialize your collections from files. *Write your query processing or whatever around the in-memory Java objects. In-memory database. Don't like Java's serialization? Get a JSON or YAML library and use those formats to serialize and deserialize. "But an in-memory database won't scale," the purists whine. Take that up with SQLite, not me. My PC has 2GB of RAM, that's a pretty big database. SQLite works.
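The quick-and-dirty in-memory design from that last answer can be sketched like this (the Record fields and class names are invented for illustration): records live in plain objects, and each index is a Map from a field value to the list of records carrying it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the in-memory design: records plus a non-unique index per field.
class Record {
    final int id;
    final String lastName;
    Record(int id, String lastName) { this.id = id; this.lastName = lastName; }
}

class LastNameIndex {
    // A not-very-selective field maps one key to many records,
    // hence Map<String, List<Record>> rather than Map<String, Record>.
    private final Map<String, List<Record>> index =
            new HashMap<String, List<Record>>();

    void add(Record r) {
        List<Record> bucket = index.get(r.lastName);
        if (bucket == null) {
            bucket = new ArrayList<Record>();
            index.put(r.lastName, bucket);
        }
        bucket.add(r);
    }

    List<Record> lookup(String lastName) {
        List<Record> bucket = index.get(lastName);
        return bucket != null ? bucket : new ArrayList<Record>();
    }
}
```

Serializing the whole structure to disk and loading it back at startup gives the file persistence the question asks for, while queries run against plain maps.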
{ "language": "en", "url": "https://stackoverflow.com/questions/148568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using pre-compiled headers with CMake I have seen a few (old) posts on the 'net about hacking together some support for pre-compiled headers in CMake. They all seem a bit all over the place, and everyone has their own way of doing it. What is the best way of doing it currently? A: There is a third-party CMake module named 'Cotire' which automates the use of precompiled headers for CMake-based build systems and also supports unity builds. A: If you don't want to reinvent the wheel, just use either Cotire, as the top answer suggests, or a simpler one - cmake-precompiled-header here. To use it just include the module and call: include( cmake-precompiled-header/PrecompiledHeader.cmake ) add_precompiled_header( targetName StdAfx.h FORCEINCLUDE SOURCE_CXX StdAfx.cpp ) A: CMake has just gained support for PCHs (pre-compiled headers); it is available from 3.16 (released October 2019) onwards: https://gitlab.kitware.com/cmake/cmake/merge_requests/3553 target_precompile_headers(<target> <INTERFACE|PUBLIC|PRIVATE> [header1...] [<INTERFACE|PUBLIC|PRIVATE> [header2...] ...]) Sharing PCHs between targets is supported via the REUSE_FROM keyword such as here. There is some additional context (motivation, numbers) available at https://blog.qt.io/blog/2019/08/01/precompiled-headers-and-unity-jumbo-builds-in-upcoming-cmake/ A: An example of using a precompiled header with CMake and Visual Studio 2015. "stdafx.h", "stdafx.cpp" - the precompiled header name. Put the following in the root cmake file. if (MSVC) # For precompiled header. # Set # "Precompiled Header" to "Use (/Yu)" # "Precompiled Header File" to "stdafx.h" set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Yustdafx.h /FIstdafx.h") endif() Put the following in the project cmake file. "src" - a folder with source files.
set_source_files_properties(src/stdafx.cpp PROPERTIES COMPILE_FLAGS "/Ycstdafx.h" ) A: I'm using the following macro to generate and use precompiled headers: MACRO(ADD_MSVC_PRECOMPILED_HEADER PrecompiledHeader PrecompiledSource SourcesVar) IF(MSVC) GET_FILENAME_COMPONENT(PrecompiledBasename ${PrecompiledHeader} NAME_WE) SET(PrecompiledBinary "${CMAKE_CURRENT_BINARY_DIR}/${PrecompiledBasename}.pch") SET(Sources ${${SourcesVar}}) SET_SOURCE_FILES_PROPERTIES(${PrecompiledSource} PROPERTIES COMPILE_FLAGS "/Yc\"${PrecompiledHeader}\" /Fp\"${PrecompiledBinary}\"" OBJECT_OUTPUTS "${PrecompiledBinary}") SET_SOURCE_FILES_PROPERTIES(${Sources} PROPERTIES COMPILE_FLAGS "/Yu\"${PrecompiledHeader}\" /FI\"${PrecompiledHeader}\" /Fp\"${PrecompiledBinary}\"" OBJECT_DEPENDS "${PrecompiledBinary}") # Add precompiled header to SourcesVar LIST(APPEND ${SourcesVar} ${PrecompiledSource}) ENDIF(MSVC) ENDMACRO(ADD_MSVC_PRECOMPILED_HEADER) Let's say you have a variable ${MySources} with all your source files; the code you would want to use would simply be ADD_MSVC_PRECOMPILED_HEADER("precompiled.h" "precompiled.cpp" MySources) ADD_LIBRARY(MyLibrary ${MySources}) The code would still function just fine on non-MSVC platforms too. Pretty neat :) A: IMHO the best way is to set the PCH for the whole project, as martjno suggested, combined with the ability to ignore the PCH for some sources if needed (e.g.
generated sources): # set PCH for VS project function(SET_TARGET_PRECOMPILED_HEADER Target PrecompiledHeader PrecompiledSource) if(MSVC) SET_TARGET_PROPERTIES(${Target} PROPERTIES COMPILE_FLAGS "/Yu${PrecompiledHeader}") set_source_files_properties(${PrecompiledSource} PROPERTIES COMPILE_FLAGS "/Yc${PrecompiledHeader}") endif(MSVC) endfunction(SET_TARGET_PRECOMPILED_HEADER) # ignore PCH for a specified list of files function(IGNORE_PRECOMPILED_HEADER SourcesVar) if(MSVC) set_source_files_properties(${${SourcesVar}} PROPERTIES COMPILE_FLAGS "/Y-") endif(MSVC) endfunction(IGNORE_PRECOMPILED_HEADER) So, if you have some target MY_TARGET and a list of generated sources IGNORE_PCH_SRC_LIST, you'll simply do: SET_TARGET_PRECOMPILED_HEADER(MY_TARGET stdafx.h stdafx.cpp) IGNORE_PRECOMPILED_HEADER(IGNORE_PCH_SRC_LIST) This approach is tested and works perfectly. A: Here is a code snippet to allow you to use a precompiled header for your project. Add the following to your CMakeLists.txt, replacing myprecompiledheaders and myproject_SOURCE_FILES as appropriate: if (MSVC) set_source_files_properties(myprecompiledheaders.cpp PROPERTIES COMPILE_FLAGS "/Ycmyprecompiledheaders.h" ) foreach( src_file ${myproject_SOURCE_FILES} ) set_source_files_properties( ${src_file} PROPERTIES COMPILE_FLAGS "/Yumyprecompiledheaders.h" ) endforeach( src_file ${myproject_SOURCE_FILES} ) list(APPEND myproject_SOURCE_FILES myprecompiledheaders.cpp) endif (MSVC) A: I ended up using an adapted version of larsm's macro. Using $(IntDir) for the pch path keeps precompiled headers for debug and release builds separate.
MACRO(ADD_MSVC_PRECOMPILED_HEADER PrecompiledHeader PrecompiledSource SourcesVar) IF(MSVC) GET_FILENAME_COMPONENT(PrecompiledBasename ${PrecompiledHeader} NAME_WE) SET(PrecompiledBinary "$(IntDir)/${PrecompiledBasename}.pch") SET(Sources ${${SourcesVar}}) SET_SOURCE_FILES_PROPERTIES(${PrecompiledSource} PROPERTIES COMPILE_FLAGS "/Yc\"${PrecompiledHeader}\" /Fp\"${PrecompiledBinary}\"" OBJECT_OUTPUTS "${PrecompiledBinary}") SET_SOURCE_FILES_PROPERTIES(${Sources} PROPERTIES COMPILE_FLAGS "/Yu\"${PrecompiledHeader}\" /FI\"${PrecompiledHeader}\" /Fp\"${PrecompiledBinary}\"" OBJECT_DEPENDS "${PrecompiledBinary}") # Add precompiled header to SourcesVar LIST(APPEND ${SourcesVar} ${PrecompiledSource}) ENDIF(MSVC) ENDMACRO(ADD_MSVC_PRECOMPILED_HEADER) ADD_MSVC_PRECOMPILED_HEADER("stdafx.h" "stdafx.cpp" MY_SRCS) ADD_EXECUTABLE(MyApp ${MY_SRCS}) A: Adapted from Dave, but more efficient (sets target properties, not for each file): if (MSVC) set_target_properties(abc PROPERTIES COMPILE_FLAGS "/Yustd.h") set_source_files_properties(std.cpp PROPERTIES COMPILE_FLAGS "/Ycstd.h") endif(MSVC) A: Well, when builds take 10+ minutes on a quad-core machine every time you change a single line in any of the project files, it tells you it's time to add precompiled headers for Windows. On *nux I would just use ccache and not worry about that. I have implemented it in my main application and a few of the libraries that it uses. It works great to this point. One thing that is also needed: you have to create the PCH source and header file, and in the source file include all the headers that you want to be precompiled. I did this for 12 years with MFC but it took me a few minutes to recall that. A: The cleanest way is to add the precompiled option as a global option. In the vcxproj file this will show up as <PrecompiledHeader>Use</PrecompiledHeader> and not do this for every individual file. Then you need to add the Create option to the StdAfx.cpp.
The following is how I use it: MACRO(ADD_MSVC_PRECOMPILED_HEADER SourcesVar) SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /YuStdAfx.h") set_source_files_properties(StdAfx.cpp PROPERTIES COMPILE_FLAGS "/YcStdAfx.h" ) list(APPEND ${${SourcesVar}} StdAfx.cpp) ENDMACRO(ADD_MSVC_PRECOMPILED_HEADER) file(GLOB_RECURSE MYDLL_SRC "*.h" "*.cpp" "*.rc") ADD_MSVC_PRECOMPILED_HEADER(MYDLL_SRC) add_library(MyDll SHARED ${MYDLL_SRC}) This is tested and works for MSVC 2010 and will create a MyDll.pch file; I am not bothered about what file name is used, so I didn't make any effort to specify it. A: As the precompiled header option doesn't work for rc files, I needed to adjust the macro supplied by jari. ####################################################################### # Macro for precompiled header ####################################################################### MACRO(ADD_MSVC_PRECOMPILED_HEADER PrecompiledHeader PrecompiledSource SourcesVar) IF(MSVC) GET_FILENAME_COMPONENT(PrecompiledBasename ${PrecompiledHeader} NAME_WE) SET(PrecompiledBinary "$(IntDir)/${PrecompiledBasename}.pch") SET(Sources ${${SourcesVar}}) # generate the precompiled header SET_SOURCE_FILES_PROPERTIES(${PrecompiledSource} PROPERTIES COMPILE_FLAGS "/Zm500 /Yc\"${PrecompiledHeader}\" /Fp\"${PrecompiledBinary}\"" OBJECT_OUTPUTS "${PrecompiledBinary}") # use this header for all files except rc files FOREACH(fname ${Sources}) IF ( NOT ${fname} MATCHES ".*rc$" ) SET_SOURCE_FILES_PROPERTIES(${fname} PROPERTIES COMPILE_FLAGS "/Zm500 /Yu\"${PrecompiledHeader}\" /FI\"${PrecompiledHeader}\" /Fp\"${PrecompiledBinary}\"" OBJECT_DEPENDS "${PrecompiledBinary}") ENDIF( NOT ${fname} MATCHES ".*rc$" ) ENDFOREACH(fname) # Add precompiled header to SourcesVar LIST(APPEND ${SourcesVar} ${PrecompiledSource}) ENDIF(MSVC) ENDMACRO(ADD_MSVC_PRECOMPILED_HEADER) Edit: Using these precompiled headers reduced the overall build time of my main project from 4min 30s down to 1min 40s.
For me this is a really good thing. The precompiled header contains only headers like Boost/STL/Windows/MFC. A: Don't even go there. Precompiled headers mean that whenever one of the headers changes, you have to rebuild everything. You're lucky if you have a build system that realizes this. More often than not, your build will just fail until you realize that you changed something that is being precompiled, and therefore you need to do a full rebuild. You can avoid this mostly by precompiling the headers that you are absolutely positive won't change, but then you're giving up a large part of the speed gain as well. The other problem is that your namespace gets polluted with all kinds of symbols that you don't know or care about in many places where you'd be using the precompiled headers.
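To tie the CMake >= 3.16 answer above together, a minimal project using target_precompile_headers and REUSE_FROM might look like this (the target and file names are invented for illustration, and REUSE_FROM assumes both targets compile with compatible flags):

```cmake
# Sketch only: requires CMake >= 3.16; core.cpp / main.cpp / common.h are placeholders.
cmake_minimum_required(VERSION 3.16)
project(pch_demo CXX)

add_library(core STATIC core.cpp)
# Precompile a project header plus some heavy standard headers.
target_precompile_headers(core PRIVATE common.h <vector> <string>)

add_executable(app main.cpp)
target_link_libraries(app PRIVATE core)
# Reuse core's PCH instead of compiling a second one.
target_precompile_headers(app REUSE_FROM core)
```

Unlike the MSVC-specific macros above, this form is compiler-agnostic: CMake emits the right /Yc and /Yu flags for MSVC and the corresponding -include machinery for GCC/Clang.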
{ "language": "en", "url": "https://stackoverflow.com/questions/148570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118" }