Q: Sun Solaris IDE/Source Control Choices At my new job I'm getting acclimated with the IDE. We have Visual SlickEdit 8.0 installed on our Sun Solaris workstation that we telnet into (lol, yes, ssh is turned off). Now it does the basics but is missing many things I considered must-haves, like code hiding and parenthesis matching. Are there any alternatives to constantly ftping my work from our remote Sun machine so I can edit it with a local copy of Eclipse and then ftp it back? We don't have any source control for incremental updates locally, so if I overwrite my changes I'm screwed, and with lots of ftping I can only assume it's a matter of time before I overwrite some of my work. Any ideas or suggestions? A: Ouch, I suggest you look into some type of Continuous Integration system ASAP! As far as version control, Subversion is pretty mature and stable, or you could go with the up-and-coming GIT system. A: I think you asked two questions, one about an IDE and one about a version control system. I don't know of any IDEs that have good support for distributed development onto a system that has only a telnet connection. I would recommend a very fast, lightweight version control system. Then use your local IDE, and "push" your changes to the Solaris box for building. That's the simplest answer. Actually you might be a prime candidate for a new feature in NetBeans 6.5 called "Remote Development". NetBeans also supports development of C/C++ programs using gcc/gdb or Sun cc/dbx. Remote development is designed for this situation. You can edit your files on one machine and build/run them on another machine. I think it works best with NFS access between the two machines; I'm not sure it's smart enough to "push" changes using source control when you need to update the remote host. I haven't tried it much myself, but you might want to look into it. 
A: As for a native IDE to supplement/replace your existing solution, you have a few choices if you can run a local X server:

* Sun Studio: Free "express" edition, or free full version for Sun Developer Network members.
* Eclipse: Allegedly multi-platform, but a Solaris package is no longer available at eclipse.org. However Blastwave does provide a Solaris package (and the multiple dependencies that you will also require).
* Netbeans: Native Solaris package, FOSS, supports many languages: http://www.netbeans.org/. Has strong backing and some nice features (eg Java GUI designer).
* Vim: Only for the die-hard UNIXer :) There is an older version of vim available on the Solaris Companion CD, otherwise the usual places have more recent packages. The best thing that can be said for this solution is that it's lightweight and will work directly over your telnet (ick) connection without needing you to export X11.

A: Can't answer the IDE question, but for source control, I can heartily recommend GIT. We've recently transitioned from CVS to GIT in our Solaris environment, and it's excellent. We should have done it sooner.
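The GIT suggestion above costs almost nothing to try: a repository is just a local .git directory, no server needed, so accidental overwrites become recoverable. A minimal sketch (paths, names, and commit messages below are placeholders, not from the answers):

```shell
# Keep local history so an accidental overwrite is recoverable.
# No server required: the whole repository lives in .git/.
rm -rf /tmp/myproj
mkdir -p /tmp/myproj && cd /tmp/myproj
git init
git config user.name "Dev"
git config user.email "dev@example.com"

echo 'int main() { return 0; }' > main.c
git add .
git commit -m "initial import"

# After editing, snapshot again; any earlier version can be recovered.
echo '/* changed */' >> main.c
git commit -am "describe the change"
git log --oneline
```

From there, `git checkout <commit> -- main.c` restores any earlier snapshot of a file.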
{ "language": "en", "url": "https://stackoverflow.com/questions/150698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there any Visual Library alternative to wxPython that supports CSS/Style Sheets? I've developed a program that extensively uses wxPython, the wxWidgets port for Python. Even though it is a mature library, it is still very primitive and very programming oriented, which is time consuming and not flexible at all. I would love to see if there is something like Flex/ActionScript where all the visual dimensions are configured by style sheets. Any thoughts? A: PyQt with Qt style sheets might be a good fit. Naturally, you'd need to re-write quite a bit of your GUI layer for the toolkit change. A: You could try XUL, the language the Firefox GUI uses. It's XML styled with CSS and scripted with Javascript. http://www.mozilla.org/projects/xul/ http://en.wikipedia.org/wiki/XUL http://developer.mozilla.org/en/XUL
{ "language": "en", "url": "https://stackoverflow.com/questions/150705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C++ having cin read a return character I was wondering how to use cin so that if the user does not enter in any value and just pushes ENTER, cin will recognize this as valid input.

A: I find that for user input std::getline works very well. You can use it to read a line and just discard what it reads. The problem with doing things like this,

// Read a number:
std::cout << "Enter a number:";
std::cin >> my_double;
std::cout << "Hit enter to continue:";
std::cin >> throwaway_char; // Hmmmm, does this work?

is that if the user enters other garbage e.g. "4.5 - about" it is all too easy to get out of sync and to read what the user wrote the last time before printing the prompt that he needs to see the next time. If you read every complete line with std::getline( std::cin, a_string ) and then parse the returned string (e.g. using an istringstream or other technique) it is much easier to keep the printed prompts in sync with reading from std::cin, even in the face of garbled input.

A: Does cin.getline solve your problem?

A: To detect the user pressing the Enter Key rather than entering an integer:

char c;
int num;
cin.get(c);         // get a single character
if (c == 10)        // 10 = ascii linefeed (Enter Key) so exit
    return 0;
else
    cin.putback(c); // else put the character back
cin >> num;         // get user input as expected

Alternatively:

char c;
int num;
c = cin.peek();     // read next character without extracting it
if (c == '\n')      // linefeed (Enter Key) so exit
    return 0;
cin >> num;         // get user input as expected

A: You will probably want to try std::getline:

#include <iostream>
#include <string>

std::string line;
std::getline( std::cin, line );
if( line.empty() )
    ...

A: Try unbuffering cin (it's buffered by default).
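Putting the getline-then-parse advice above into one piece: read the whole line (which consumes the Enter key), treat an empty line as "just Enter", and parse anything else separately. This is a sketch of that approach, not from the original answers, and the helper name parse_number_line is my own:

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Parse one line of user input. Returns true and sets `out` when the
// line starts with a number; returns false for an empty line (the user
// just pressed Enter) or for non-numeric garbage.
bool parse_number_line(const std::string& line, double& out) {
    std::istringstream iss(line);
    return static_cast<bool>(iss >> out);
}

// Typical use: always read whole lines, then decide what they contain.
//
//   std::string line;
//   std::getline(std::cin, line);   // consumes the Enter key too
//   double value;
//   if (line.empty())
//       std::cout << "Just Enter was pressed\n";
//   else if (parse_number_line(line, value))
//       std::cout << "Got " << value << '\n';
//   else
//       std::cout << "Not a number: " << line << '\n';
```

Because std::cin itself never extracts past the newline here, the prompts stay in sync no matter what the user types.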
{ "language": "en", "url": "https://stackoverflow.com/questions/150726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What is the best way to write to a file in Ruby? I would like to write some data to a file in Ruby. What is the best way to do that?

A:

File.open("a_file", "w") do |f|
  f.write "some data"
end

You can also use f << "some data" or f.puts "some data" according to personal taste/necessity to have newlines. Change the "w" to "a" if you want to append to the file instead of truncating with each open.

A:

require 'rio'
rio('foo.txt') < 'bar'

http://rio.rubyforge.org/

A: Beyond File.new or File.open (and all the other fun IO stuff) you may wish, particularly if you're saving from and loading back into Ruby and your data is in objects, to look at using Marshal to save and load your objects directly.

A: Using File::open is the best way to go:

File.open("/path/to/file", "w") do |file|
  file.puts "Hello file!"
end

As previously stated, you can use "a" instead of "w" to append to the file. Many other modes are available, listed under ri IO, or at the Ruby Quickref.

A:

filey = File.new("/path/to/the/file", "a")
filey.puts "stuff to write"
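For quick one-shot writes there is also File.write (Ruby 1.9.3 and later), which opens, writes, and closes the file in a single call; the filename below is just an example:

```ruby
# Truncating write: creates or overwrites the file in one call.
File.write("greeting.txt", "Hello file!\n")

# Append instead of truncate by passing an open mode.
File.write("greeting.txt", "Another line\n", mode: "a")

puts File.read("greeting.txt")
```

The block form of File.open remains the better choice when you write incrementally, since it still guarantees the file is closed even if an exception is raised mid-write.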
{ "language": "en", "url": "https://stackoverflow.com/questions/150731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Getting the GUID of a VS 2008 tool window Does anybody have a short code sample that can be run in the VS macro editor on how to enumerate the tool windows in VS 2008 and show the GUID for each one? Or do you know another way to find this out? A: You can enumerate the child keys under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\ToolWindows
{ "language": "en", "url": "https://stackoverflow.com/questions/150735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you install an ssh server on qnx? I'm working on a qnx device, and I want to be able to ssh into it. Does anyone have a primer on getting something like openSSH up and running?

A: QNX have removed support for packages since version 6.4. This means that it is difficult to install SSH and SSL from the 3rd Party Applications CD, because the utilities required aren't there anymore. It turns out their qpk file package is really just a tgz in disguise. So what you can do is untar the openssl and openssh packages. It will create a file structure like public/core-//opt All you need to do is copy all of the contents from /opt to /, and then add /opt/bin:/opt/sbin to your path, and /opt/lib to your LD_LIBRARY_PATH. Other things to note are:

* your random number generator needs to be started (random -t)
* you will need to set up a new /etc/openssh/sshd_config if you want to use the server; I copied mine from a Ubuntu machine
* you will need to generate keys; there is lots of information on doing this online

From what I have read, QNX 6.4.1 should come pre-installed with ssh. I am yet to confirm this.

A: Depending on whether it's 6.2, 6.3 or 6.4 you will actually go about it in a different manner. 6.2 has "Installer" or "Install Software from QNX" in Photon, a GUI program that lets you download and install it kind of like Fedora's Pup, YaST or the likes. The command-line equivalent is cl-installer. 6.3 does not have the 6.2 package filesystem, but supports it if needed. On 6.3, the easiest way would be to get the 6.2's package from http://download.qnx.com/contrib/repository621a/ , unpack it (it's just a tarball) - you should be able to figure out which file goes where. 6.4 has support for pkgsrc which would be my preferred way of doing it there. 
A: On a stock 6.5, 6.5.0SP1 or 6.6 system all you need to do is create your keys:

ssh-keygen -tdsa -f/etc/ssh/ssh_host_dsa_key
ssh-keygen -trsa -f/etc/ssh/ssh_host_rsa_key

Then start the sshd server (you need to specify the full path):

/usr/sbin/sshd

If something isn't working, start the server with debug output enabled and the problem should become obvious:

/usr/sbin/sshd -ddd

A: According to this you should be able to install it from the 3rd Party CD Rom, also available here: 3rd Party Apps. This requires the use of the qnxinstall app.

A: If you want to start a SSH server to transfer files easily: the SSH daemon (sshd) is already installed, but the 'configuration' is missing.

* Create the keys (do NOT use a password):¹

random -t
ssh-keygen -t rsa -f /etc/ssh/ssh_host_key -b 1024
ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

* Create a user account different from root with a password.²
* Add the user to the sshd group in /etc/group => sshd:x:6:user1
* Start by executing: /usr/sbin/sshd

For QNX 6.6.0, you have to do in addition:

* Create another key (the ECDSA key generation is only necessary for QNX 6.6.0 - see also here):

ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key

* Create folders accordingly to fit this path /var/chroot/sshd/

If you want to use SFTP:

* Create/use the file /etc/ssh/sshd_config and enable "Subsystem sftp /usr/libexec/sftp-server" by adding this line to the file.

Some steps are also covered here on the QNX manual about the sshd command.

¹ Here the folder ssh/ was created in /etc/; make sure the files belong to the user running the sshd!
² (i.e. 
direct root access via ssh is disabled by default but can be enabled by specifying PermitRootLogin yes in the /etc/ssh/sshd_config) file A: Open Source Applications for QNX provides ported open source tools/applications including their complete sources and/or ready to use binaries for QNX, like XFree86, Lesstif, DDD, VNC, Nedit and cluster middleware like PVM. I have no idea what that means, but I hope it gives you something to start with. A: Once you followed the steps presented on qnx website (click here) you need to deactivate the PAM module from sshd_config file (under /etc/ssh). Change the line "UsePAM yes" to "UsePAM no". A: FYI - you can start telnet with "inetd" which gets you on, and gets ftp started so you can then move the ssh libs on etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/150737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How much Application session data can you actually hold? I currently have an application that gets hit with over 20,000 users daily and they mostly look at one data table. This data table is filled with about 20 rows but is pulled from a "datatable" in a db with 200,000-600,000 records of information in the table. Edit: These 20 rows are "dynamic" and do change if the user enters in any information via a text box. I also currently hold user data along with profile data. I am currently making about 4 callbacks each time a datatable is displayed and I am unable to get it down to 1 call. Question: I was wondering if I could actually fill the application state every 5 seconds with the 200,000-600,000 rows of data and would it actually speed up the system? Edit: due to the dynamic rows that a user or any other user enters in, the content needs to be refreshed often. Question 2: How much can I actually hold in application cache and still get away with it going faster? Edit: With over 20,000 users accessing these 200,000 rows, I would need to cache all of them, or at least I think so for best practices. When the user comes to my site, this is one of the main pages they look at and probably come back to 2-5 times per visit. Edit: The user does see a unique set of 20 rows that could be different than any other 20 rows the users see. It is a VERY dynamic site in which a couple of different rows can get updated around once a second. Edit: If stored in session state, then it will only speed up the number of times a person views the page, not the overall application, because a person could view the page only once and then leave.

A: Technically, I believe what you want to do is possible, but I wouldn't recommend it. There are several factors you have to consider before going down this path.

* Do you have the hardware to support it? 
If you don't have the memory for such a configuration and you have to page swap, then you'll probably lose most of the speed benefit of having it cached in memory. If you are using an out-of-process state server, then the system has the overhead of dealing with serialization.

* How do you plan on searching for something in that many rows? A database server handles a lot of searching and sorting for you behind the scenes. There are some pretty complex algorithms that they use which you're going to lose if you cache the data on the webserver.

There is no real hard and fast rule as to when something is faster in the database as opposed to in memory. It really depends on how the application is set up and how the data is stored.

A: You say they mostly look at one table, and that table is pulled from 200 to 600K rows. How often is that table pulled? Is this a "homepage" type scenario where users are mostly looking at the first page of data? Why cache all 200K rows, why not cache the first 20?

A: Are you sure you want to store that in Session state? I'd prefer Application state if they use the same database; this way just one dataset will be stored in memory. I think the memory limit is controlled by IIS. There are Maximum virtual memory and Maximum used memory limits. Don't forget to check availability of data. Check this: Configuring ASP.NET Applications in Worker Process Isolation Mode (IIS 6.0)

A: Can you clarify for me - are you saying a user gets a datatable of 20 records that is unique to that user and is the result of querying the 600K table? Are the records static for the user? If there are only 20 records that remain static once they are associated with the user, can you create serialized objects that can be streamed to the user on request? That is, put them in a state where they are ready to go so you don't have to hit the DB.
{ "language": "en", "url": "https://stackoverflow.com/questions/150739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you generate a good ID in ATOM documents? Apparently using the URL is no good - why is this the case, and how do you generate a good one?

A: Mark Pilgrim's article How to make a good ID in Atom is good. Here's part of it:

Why you shouldn't use your permalink as an Atom ID

It's valid to use your permalink URL as your <id>, but I discourage it because it can create confusion about which element should be treated as the permalink. Developers who don't read specs will look at your Atom feed, and they see two identical pieces of information, and they pick one and use it as the permalink, and some of them will pick incorrectly. Then they go to another feed where the two elements are not identical, and they get confused. In Atom, <link rel="alternate"> is always the permalink of the entry. <id> is always a unique identifier for the entry. Both are required, but they serve different purposes. An entry ID should never change, even if the permalink changes. "Permalink changes"? Yes, permalinks are not as permanent as you might think. Here's an example that happened to me. My permalink URLs were automatically generated from the title of my entry, but then I updated an entry and changed the title. Guess what, the "permanent" link just changed! If you're clever, you can use an HTTP redirect to redirect visitors from the old permalink to the new one (and I did). But you can't redirect an ID. The ID of an Atom entry must never change! Ideally, you should generate the ID of an entry once, and store it somewhere. If you're auto-generating it time after time from data that changes over time, then the entry's ID will change, which defeats the purpose.

A: Use a GUID for the ID. It depends what language you use, but you could use System.Guid for .NET.
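Combining the two answers above ("generate it once and store it" plus "use a GUID"), here is a minimal sketch of minting an ID at entry-creation time; the urn:uuid form is one valid IRI shape for an Atom <id>, and the variable name is mine:

```python
import uuid

# Mint the ID exactly once, when the entry is first created, then
# persist it alongside the entry. A urn:uuid IRI never needs to
# change, even when the permalink later does.
entry_id = f"urn:uuid:{uuid.uuid4()}"
print(entry_id)
```

The key point is the lifecycle, not the scheme: whatever form you pick, generate it at creation time and store it, never re-derive it from mutable data like the title.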
{ "language": "en", "url": "https://stackoverflow.com/questions/150741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What exceptions might a Python function raise? Is there any way in Python to determine what exceptions a (built-in) function might raise? For example, the documentation (http://docs.python.org/lib/built-in-funcs.html) for the built-in int(s) says nothing about the fact that it might raise a ValueError if s is not a validly formatted int. This is a duplicate of Does re.compile() or any given Python library call throw an exception? A: The only way to tell what exceptions something can raise is by looking at the documentation. The fact that the int() documentation doesn't say it may raise ValueError is a bug in the documentation, but easily explained by ValueError being exactly for that purpose, and that being something "everybody knows". To belabour the point, though, documentation is the only way to tell what exceptions you should care about; in fact, any function can potentially raise any exception, even if it's just because signals may arrive and signal handlers may raise exceptions. You should not anticipate or handle those errors, however; you should just handle the errors you expect. A: I don't know of any definitive source, apart from the source.
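To illustrate the "handle only the errors you expect" point above, a small sketch (the helper name is mine, not from the docs):

```python
def to_int(value, default=None):
    """Convert to int, handling only the expected failures.

    int() raises ValueError for a malformed string and TypeError for
    an unsupported type such as None; anything else propagates, which
    is what you want for truly unexpected errors.
    """
    try:
        return int(value)
    except (ValueError, TypeError):
        return default

print(to_int("42"))     # 42
print(to_int("forty"))  # None
print(to_int(None, 0))  # 0
```

Catching the two exception types by name, rather than a bare except, keeps genuinely unexpected failures (KeyboardInterrupt, signal-driven exceptions, and so on) visible.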
{ "language": "en", "url": "https://stackoverflow.com/questions/150743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: HashSet vs. List performance It's clear that the search performance of the generic HashSet<T> class is higher than that of the generic List<T> class. Just compare the hash-based key with the linear approach in the List<T> class. However, calculating a hash key may itself take some CPU cycles, so for a small number of items the linear search can be a real alternative to the HashSet<T>. My question: where is the break-even? To simplify the scenario (and to be fair) let's assume that the List<T> class uses the element's Equals() method to identify an item. A: A lot of people are saying that once you get to the size where speed is actually a concern that HashSet<T> will always beat List<T>, but that depends on what you are doing. Let's say you have a List<T> that will only ever have on average 5 items in it. Over a large number of cycles, if a single item is added or removed each cycle, you may well be better off using a List<T>. I did a test for this on my machine, and, well, it has to be very very small to get an advantage from List<T>. For a list of short strings, the advantage went away after size 5, for objects after size 20. 
1 item LIST strs time: 617ms
1 item HASHSET strs time: 1332ms
2 item LIST strs time: 781ms
2 item HASHSET strs time: 1354ms
3 item LIST strs time: 950ms
3 item HASHSET strs time: 1405ms
4 item LIST strs time: 1126ms
4 item HASHSET strs time: 1441ms
5 item LIST strs time: 1370ms
5 item HASHSET strs time: 1452ms
6 item LIST strs time: 1481ms
6 item HASHSET strs time: 1418ms
7 item LIST strs time: 1581ms
7 item HASHSET strs time: 1464ms
8 item LIST strs time: 1726ms
8 item HASHSET strs time: 1398ms
9 item LIST strs time: 1901ms
9 item HASHSET strs time: 1433ms

1 item LIST objs time: 614ms
1 item HASHSET objs time: 1993ms
4 item LIST objs time: 837ms
4 item HASHSET objs time: 1914ms
7 item LIST objs time: 1070ms
7 item HASHSET objs time: 1900ms
10 item LIST objs time: 1267ms
10 item HASHSET objs time: 1904ms
13 item LIST objs time: 1494ms
13 item HASHSET objs time: 1893ms
16 item LIST objs time: 1695ms
16 item HASHSET objs time: 1879ms
19 item LIST objs time: 1902ms
19 item HASHSET objs time: 1950ms
22 item LIST objs time: 2136ms
22 item HASHSET objs time: 1893ms
25 item LIST objs time: 2357ms
25 item HASHSET objs time: 1826ms
28 item LIST objs time: 2555ms
28 item HASHSET objs time: 1865ms
31 item LIST objs time: 2755ms
31 item HASHSET objs time: 1963ms
34 item LIST objs time: 3025ms
34 item HASHSET objs time: 1874ms
37 item LIST objs time: 3195ms
37 item HASHSET objs time: 1958ms
40 item LIST objs time: 3401ms
40 item HASHSET objs time: 1855ms
43 item LIST objs time: 3618ms
43 item HASHSET objs time: 1869ms
46 item LIST objs time: 3883ms
46 item HASHSET objs time: 2046ms
49 item LIST objs time: 4218ms
49 item HASHSET objs time: 1873ms

Here is that data displayed as a graph:

Here's the code:

static void Main(string[] args)
{
    int times = 10000000;
    for (int listSize = 1; listSize < 10; listSize++)
    {
        List<string> list = new List<string>();
        HashSet<string> hashset = new HashSet<string>();
        for (int i = 0; i < listSize; i++)
        {
            list.Add("string" + i.ToString());
            hashset.Add("string" + i.ToString());
        }
        Stopwatch timer = new Stopwatch();
        timer.Start();
        for (int i = 0; i < times; i++)
        {
            list.Remove("string0");
            list.Add("string0");
        }
        timer.Stop();
        Console.WriteLine(listSize.ToString() + " item LIST strs time: " + timer.ElapsedMilliseconds.ToString() + "ms");
        timer = new Stopwatch();
        timer.Start();
        for (int i = 0; i < times; i++)
        {
            hashset.Remove("string0");
            hashset.Add("string0");
        }
        timer.Stop();
        Console.WriteLine(listSize.ToString() + " item HASHSET strs time: " + timer.ElapsedMilliseconds.ToString() + "ms");
        Console.WriteLine();
    }
    for (int listSize = 1; listSize < 50; listSize += 3)
    {
        List<object> list = new List<object>();
        HashSet<object> hashset = new HashSet<object>();
        for (int i = 0; i < listSize; i++)
        {
            list.Add(new object());
            hashset.Add(new object());
        }
        object objToAddRem = list[0];
        Stopwatch timer = new Stopwatch();
        timer.Start();
        for (int i = 0; i < times; i++)
        {
            list.Remove(objToAddRem);
            list.Add(objToAddRem);
        }
        timer.Stop();
        Console.WriteLine(listSize.ToString() + " item LIST objs time: " + timer.ElapsedMilliseconds.ToString() + "ms");
        timer = new Stopwatch();
        timer.Start();
        for (int i = 0; i < times; i++)
        {
            hashset.Remove(objToAddRem);
            hashset.Add(objToAddRem);
        }
        timer.Stop();
        Console.WriteLine(listSize.ToString() + " item HASHSET objs time: " + timer.ElapsedMilliseconds.ToString() + "ms");
        Console.WriteLine();
    }
    Console.ReadLine();
}

A: You can use a HybridDictionary which automatically detects the breaking point, and accepts null values, making it essentially the same as a HashSet.

A: You're looking at this wrong. Yes, a linear search of a List will beat a HashSet for a small number of items. But the performance difference usually doesn't matter for collections that small. It's generally the large collections you have to worry about, and that's where you think in terms of Big-O. 
However, if you've measured a real bottleneck on HashSet performance, then you can try to create a hybrid List/HashSet, but you'll do that by conducting lots of empirical performance tests - not asking questions on SO.

A: The answer, as always, is "It depends". I assume from the tags you're talking about C#. Your best bet is to determine

* a set of data
* usage requirements

and write some test cases. It also depends on how you sort the list (if it's sorted at all), what kind of comparisons need to be made, how long the "Compare" operation takes for the particular object in the list, or even how you intend to use the collection. Generally, the best one to choose isn't so much based on the size of data you're working with, but rather how you intend to access it. Do you have each piece of data associated with a particular string, or other data? A hash-based collection would probably be best. Is the order of the data you're storing important, or are you going to need to access all of the data at the same time? A regular list may be better then. Additional: Of course, my above comments assume 'performance' means data access. Something else to consider: what are you looking for when you say "performance"? Is performance individual value look up? Is it management of large (10000, 100000 or more) value sets? Is it the performance of filling the data structure with data? Removing data? Accessing individual bits of data? Replacing values? Iterating over the values? Memory usage? Data copying speed? For example, if you access data by a string value, but your main performance requirement is minimal memory usage, you might have conflicting design issues.

A: Whether to use a HashSet<> or List<> comes down to how you need to access your collection. If you need to guarantee the order of items, use a List. If you don't, use a HashSet. Let Microsoft worry about the implementation of their hashing algorithms and objects. 
A HashSet will access items without having to enumerate the collection (complexity of O(1) or near it), and because a List guarantees order, unlike a HashSet, some items will have to be enumerated (complexity of O(n)).

A: It depends. If the exact answer really matters, do some profiling and find out. If you're sure you'll never have more than a certain number of elements in the set, go with a List. If the number is unbounded, use a HashSet.

A: Just thought I'd chime in with some benchmarks for different scenarios to illustrate the previous answers:

* A few (12 - 20) small strings (length between 5 and 10 characters)
* Many (~10K) small strings
* A few long strings (length between 200 and 1000 characters)
* Many (~5K) long strings
* A few integers
* Many (~10K) integers

And for each scenario, looking up values which appear:

* In the beginning of the list ("start", index 0)
* Near the beginning of the list ("early", index 1)
* In the middle of the list ("middle", index count/2)
* Near the end of the list ("late", index count-2)
* At the end of the list ("end", index count-1)

Before each scenario I generated randomly sized lists of random strings, and then fed each list to a hashset. Each scenario ran 10,000 times, essentially:

(test pseudocode)
stopwatch.start
for X times exists = list.Contains(lookup);
stopwatch.stop
stopwatch.start
for X times exists = hashset.Contains(lookup);
stopwatch.stop

Sample Output

Tested on Windows 7, 12GB Ram, 64 bit, Xeon 2.8GHz

---------- Testing few small strings ------------
Sample items: (16 total)
vgnwaloqf diwfpxbv tdcdc grfch icsjwk ...

Benchmarks:
1: hashset: late -- 100.00 % -- [Elapsed: 0.0018398 sec]
2: hashset: middle -- 104.19 % -- [Elapsed: 0.0019169 sec]
3: hashset: end -- 108.21 % -- [Elapsed: 0.0019908 sec]
4: list: early -- 144.62 % -- [Elapsed: 0.0026607 sec]
5: hashset: start -- 174.32 % -- [Elapsed: 0.0032071 sec]
6: list: middle -- 187.72 % -- [Elapsed: 0.0034536 sec]
7: list: late -- 192.66 % -- [Elapsed: 0.0035446 sec]
8: list: end -- 215.42 % -- [Elapsed: 0.0039633 sec]
9: hashset: early -- 217.95 % -- [Elapsed: 0.0040098 sec]
10: list: start -- 576.55 % -- [Elapsed: 0.0106073 sec]

---------- Testing many small strings ------------
Sample items: (10346 total)
dmnowa yshtrxorj vthjk okrxegip vwpoltck ...

Benchmarks:
1: hashset: end -- 100.00 % -- [Elapsed: 0.0017443 sec]
2: hashset: late -- 102.91 % -- [Elapsed: 0.0017951 sec]
3: hashset: middle -- 106.23 % -- [Elapsed: 0.0018529 sec]
4: list: early -- 107.49 % -- [Elapsed: 0.0018749 sec]
5: list: start -- 126.23 % -- [Elapsed: 0.0022018 sec]
6: hashset: early -- 134.11 % -- [Elapsed: 0.0023393 sec]
7: hashset: start -- 372.09 % -- [Elapsed: 0.0064903 sec]
8: list: middle -- 48,593.79 % -- [Elapsed: 0.8476214 sec]
9: list: end -- 99,020.73 % -- [Elapsed: 1.7272186 sec]
10: list: late -- 99,089.36 % -- [Elapsed: 1.7284155 sec]

---------- Testing few long strings ------------
Sample items: (19 total)
hidfymjyjtffcjmlcaoivbylakmqgoiowbgxpyhnrreodxyleehkhsofjqenyrrtlphbcnvdrbqdvji...
...

Benchmarks:
1: list: early -- 100.00 % -- [Elapsed: 0.0018266 sec]
2: list: start -- 115.76 % -- [Elapsed: 0.0021144 sec]
3: list: middle -- 143.44 % -- [Elapsed: 0.0026201 sec]
4: list: late -- 190.05 % -- [Elapsed: 0.0034715 sec]
5: list: end -- 193.78 % -- [Elapsed: 0.0035395 sec]
6: hashset: early -- 215.00 % -- [Elapsed: 0.0039271 sec]
7: hashset: end -- 248.47 % -- [Elapsed: 0.0045386 sec]
8: hashset: start -- 298.04 % -- [Elapsed: 0.005444 sec]
9: hashset: middle -- 325.63 % -- [Elapsed: 0.005948 sec]
10: hashset: late -- 431.62 % -- [Elapsed: 0.0078839 sec]

---------- Testing many long strings ------------
Sample items: (5000 total)
yrpjccgxjbketcpmnvyqvghhlnjblhgimybdygumtijtrwaromwrajlsjhxoselbucqualmhbmwnvnpnm ...

Benchmarks:
1: list: early -- 100.00 % -- [Elapsed: 0.0016211 sec]
2: list: start -- 132.73 % -- [Elapsed: 0.0021517 sec]
3: hashset: start -- 231.26 % -- [Elapsed: 0.003749 sec]
4: hashset: end -- 368.74 % -- [Elapsed: 0.0059776 sec]
5: hashset: middle -- 385.50 % -- [Elapsed: 0.0062493 sec]
6: hashset: late -- 406.23 % -- [Elapsed: 0.0065854 sec]
7: hashset: early -- 421.34 % -- [Elapsed: 0.0068304 sec]
8: list: middle -- 18,619.12 % -- [Elapsed: 0.3018345 sec]
9: list: end -- 40,942.82 % -- [Elapsed: 0.663724 sec]
10: list: late -- 41,188.19 % -- [Elapsed: 0.6677017 sec]

---------- Testing few ints ------------
Sample items: (16 total)
7266092 60668895 159021363 216428460 28007724 ...

Benchmarks:
1: hashset: early -- 100.00 % -- [Elapsed: 0.0016211 sec]
2: hashset: end -- 100.45 % -- [Elapsed: 0.0016284 sec]
3: list: early -- 101.83 % -- [Elapsed: 0.0016507 sec]
4: hashset: late -- 108.95 % -- [Elapsed: 0.0017662 sec]
5: hashset: middle -- 112.29 % -- [Elapsed: 0.0018204 sec]
6: hashset: start -- 120.33 % -- [Elapsed: 0.0019506 sec]
7: list: late -- 134.45 % -- [Elapsed: 0.0021795 sec]
8: list: start -- 136.43 % -- [Elapsed: 0.0022117 sec]
9: list: end -- 169.77 % -- [Elapsed: 0.0027522 sec]
10: list: middle -- 237.94 % -- [Elapsed: 0.0038573 sec]

---------- Testing many ints ------------
Sample items: (10357 total)
370826556 569127161 101235820 792075135 270823009 ...

Benchmarks:
1: list: early -- 100.00 % -- [Elapsed: 0.0015132 sec]
2: hashset: end -- 101.79 % -- [Elapsed: 0.0015403 sec]
3: hashset: early -- 102.08 % -- [Elapsed: 0.0015446 sec]
4: hashset: middle -- 103.21 % -- [Elapsed: 0.0015618 sec]
5: hashset: late -- 104.26 % -- [Elapsed: 0.0015776 sec]
6: list: start -- 126.78 % -- [Elapsed: 0.0019184 sec]
7: hashset: start -- 130.91 % -- [Elapsed: 0.0019809 sec]
8: list: middle -- 16,497.89 % -- [Elapsed: 0.2496461 sec]
9: list: end -- 32,715.52 % -- [Elapsed: 0.4950512 sec]
10: list: late -- 33,698.87 % -- [Elapsed: 0.5099313 sec]

A: Depends on what you're hashing. If your keys are integers you probably don't need very many items before the HashSet is faster. If you're keying it on a string then it will be slower, and depends on the input string. Surely you could whip up a benchmark pretty easily?

A: One factor you're not taking into account is the robustness of the GetHashcode() function. With a perfect hash function the HashSet will clearly have better searching performance. But as the quality of the hash function diminishes, so will the HashSet search time.

A: The breakeven will depend on the cost of computing the hash. Hash computations can be trivial, or not... 
:-) There is always the System.Collections.Specialized.HybridDictionary class to help you not have to worry about the breakeven point.

A: It's essentially pointless to compare two structures for performance that behave differently. Use the structure that conveys the intent. Even if you say your List<T> wouldn't have duplicates and iteration order doesn't matter, making it comparable to a HashSet<T>, it's still a poor choice to use List<T>, because it is relatively less fault tolerant. That said, I will inspect some other aspects of performance:

+------------+--------+-------------+-----------+----------+---------+-----------+
| Collection | Random | Containment | Insertion | Addition | Removal | Memory    |
|            | access |             |           |          |         |           |
+------------+--------+-------------+-----------+----------+---------+-----------+
| List<T>    | O(1)   | O(n)        | O(n)      | O(1)*    | O(n)    | Lesser    |
| HashSet<T> | O(n)   | O(1)        | n/a       | O(1)     | O(1)    | Greater** |
+------------+--------+-------------+-----------+----------+---------+-----------+

* Even though addition is O(1) in both cases, it will be relatively slower in HashSet since it involves the cost of precomputing the hash code before storing it.
** The superior scalability of HashSet has a memory cost. Every entry is stored as a new object along with its hash code. This article might give you an idea.

A: Depends on a lot of factors... list implementation, CPU architecture, JVM, loop semantics, complexity of the equals method, etc... By the time the list gets big enough to effectively benchmark (1000+ elements), hash-based lookups beat linear searches hands-down, and the difference only scales up from there. Hope this helps!
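The benchmark quoted above is easy to reproduce in miniature. Below is a rough Python sketch of the same containment experiment (Python rather than C#, and the absolute timings are machine-dependent; only the relative ordering is meaningful):

```python
import timeit

def containment_cost(n, probes=200):
    """Time membership tests for the worst case (a 'late' element)
    against a list and a set holding the same n items."""
    items = list(range(n))
    as_list, as_set = items, set(items)
    target = n - 1  # late element: worst case for the linear scan
    t_list = timeit.timeit(lambda: target in as_list, number=probes)
    t_set = timeit.timeit(lambda: target in as_set, number=probes)
    return t_list, t_set

small = containment_cost(16)       # "few ints": the list is competitive
large = containment_cost(100_000)  # "many ints": hashing wins big
```

For tiny collections the linear scan can beat the hash lookup outright, which is exactly what the "few ints" run above shows; for large collections the hash lookup wins by orders of magnitude.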
{ "language": "en", "url": "https://stackoverflow.com/questions/150750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "528" }
Q: How to avoid heap fragmentation? I'm currently working on a project for medical image processing that needs a huge amount of memory. Is there anything I can do to avoid heap fragmentation and to speed up access to image data that has already been loaded into memory? The application has been written in C++ and runs on Windows XP. EDIT: The application does some preprocessing with the image data, like reformatting, calculating look-up tables, extracting sub-images of interest ... The application needs about 2 GB RAM during processing, of which about 1.5 GB may be used for the image data.

A: There are answers, but it's difficult to be general without knowing the details of the problem. I'm assuming 32-bit Windows XP. Try to avoid needing 100s of MB of contiguous memory; if you are unlucky, a few random DLLs will load themselves at inconvenient points through your available address space, rapidly cutting down very large areas of contiguous memory. Depending on what APIs you need, this can be quite hard to prevent. It can be quite surprising how just allocating a couple of 400 MB blocks of memory, in addition to some 'normal' memory usage, can leave you with nowhere to allocate a final 'little' 40 MB block. On the other hand, do preallocate reasonably sized chunks at a time. On the order of 10 MB or so is a good compromise block size. If you can manage to partition your data into chunks of this sort of size, you'll be able to fill the address space reasonably efficiently. If you're still going to run out of address space, you're going to need to be able to page blocks in and out based on some sort of caching algorithm. Choosing the right blocks to page out is going to depend very much on your processing algorithm and will need careful analysis. Choosing where to page things out to is another decision. You might decide to just write them to temporary files. You could also investigate Microsoft's Address Windowing Extensions API.
In either case you need to be careful in your application design to clean up any pointers that are pointing to something that is about to be paged out, otherwise really bad things(tm) will happen. Good luck!

A: If you are going to be performing operations on a large image matrix, you might want to consider a technique called "tiling". The idea is to lay out the image in memory so that a contiguous block of bytes holds not the pixels of one scan line, but rather those of a square in 2D space. The rationale is that you tend to perform more operations on pixels that are close to each other in 2D than along a single scan line. This is not going to reduce your memory use, but may have a huge impact on page swapping and performance.

A: Without much more information about the problem, one thing you can do is to avoid allocation churn by reusing allocations, rather than repeatedly allocating, operating, and freeing. Allocators such as dlmalloc handle fragmentation better than Win32 heaps.

A: What you will be hitting here is the virtual address range limit, which with 32-bit Windows gives you at most 2 GB. You should also be aware that using a graphical API like DirectX or OpenGL will use extensive portions of those 2 GB for frame buffers, textures and similar data. 1.5-2 GB for a 32-bit application is quite hard to achieve. The most elegant way to do this is to use a 64-bit OS and a 64-bit application. Even with a 64-bit OS and a 32-bit application this may be somewhat viable, as long as you use LARGE_ADDRESS_AWARE. However, as you need to store image data, you may also be able to work around this by using file mapping as a memory store - this can be done in such a way that you have memory committed and accessible without consuming virtual addresses until the data is mapped in.

A: If you are doing medical image processing it is likely that you are allocating big blocks at a time (512x512, 2-byte-per-pixel images). Fragmentation will bite you if you allocate smaller objects between the allocations of image buffers.
Writing a custom allocator is not necessarily hard for this particular use case. You can use the standard C++ allocator for your Image object, but for the pixel buffer you can use custom allocation that is all managed within your Image object. Here's a quick and dirty outline:

* Use a static array of structs, where each struct has:
  * A solid chunk of memory that can hold N images -- the chunking will help control fragmentation -- try an initial N of 5 or so
  * A parallel array of bools indicating whether the corresponding image is in use
* To allocate, search the array for an empty buffer and set its flag
  * If none is found, append a new struct to the end of the array
* To deallocate, find the corresponding buffer in the array(s) and clear the boolean flag

This is just one simple idea with lots of room for variation. The main trick is to avoid freeing and reallocating the image pixel buffers.

A: Guessing here that you meant avoiding fragmentation, not avoiding defragmentation. Also guessing that you are working with an unmanaged language (probably C or C++). I would suggest that you allocate large chunks of memory and then serve heap allocations from the allocated memory blocks. Because this pool consists of large blocks of memory, it is less prone to fragmentation. To sum up, you should implement a custom memory allocator. See some general ideas on this here.

A: I guess you're using something unmanaged, because on managed platforms the system (garbage collector) takes care of fragmentation. For C/C++ you can use an allocator other than the default one (there were already some threads about allocators on Stack Overflow). Also, you can create your own data storage. For example, in the project I'm currently working on, we have a custom storage (pool) for bitmaps (we store them in a large contiguous chunk of memory), because we have a lot of them, and we keep track of heap fragmentation and defragment it when the fragmentation gets too big.
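The outline above translates almost mechanically into code. Here is a minimal sketch in Python, purely to illustrate the bookkeeping (a real implementation would be C++ handing out raw pixel memory; the initial chunk count of 5 follows the suggestion above):

```python
class ImagePool:
    """Reuse preallocated image buffers instead of freeing them."""

    def __init__(self, buffer_size, chunk_count=5):
        self.buffer_size = buffer_size
        # one solid chunk of memory per slot...
        self.buffers = [bytearray(buffer_size) for _ in range(chunk_count)]
        # ...and a parallel array of in-use flags
        self.in_use = [False] * chunk_count

    def allocate(self):
        for i, used in enumerate(self.in_use):
            if not used:
                self.in_use[i] = True
                return i, self.buffers[i]
        # none free: append a new buffer to the end
        self.buffers.append(bytearray(self.buffer_size))
        self.in_use.append(True)
        return len(self.buffers) - 1, self.buffers[-1]

    def release(self, index):
        # the buffer stays allocated, ready for the next image
        self.in_use[index] = False

# a pool of 512x512, 2-byte-per-pixel image buffers
pool = ImagePool(512 * 512 * 2)
idx, buf = pool.allocate()
pool.release(idx)
```

Allocation and release never touch the heap once the pool has grown to its working size, which is exactly the "avoid freeing and reallocating" trick.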
A: You might need to implement manual memory management. Is the image data long-lived? If not, then you can use the pattern used by the Apache web server: allocate large amounts of memory and wrap them into memory pools. Pass those pools as the last argument to functions, so they can use the pool to satisfy their need to allocate temporary memory. Once the call chain is finished, all the memory in the pool should no longer be in use, so you can scrub the memory area and use it again. Allocations are fast, since they only mean adding a value to a pointer. Deallocation is really fast, since you free very large blocks of memory at once. If your application is multithreaded, you might need to store the pool in thread-local storage, to avoid cross-thread communication overhead.

A: If you can isolate exactly those places where you're likely to allocate large blocks, you can (on Windows) directly call VirtualAlloc instead of going through the memory manager. This will avoid fragmentation within the normal memory manager. This is an easy solution and it doesn't require you to use a custom memory manager.
{ "language": "en", "url": "https://stackoverflow.com/questions/150753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to remove duplicate rows from flat file using SSIS? Let me first say that being able to take 17 million records from a flat file, push them to a DB on a remote box and have it take 7 minutes is amazing. SSIS truly is fantastic. But now that I have that data up there, how do I remove duplicates? Better yet, I want to take the flat file, remove the duplicates from the flat file and put them back into another flat file. I am thinking about a Data Flow Task with:

* A file source (with an associated file connection)
* A for loop container
* A script container that contains some logic to tell if another row exists

Thank you - everyone on this site is incredibly knowledgeable. Update: I have found this link; it might help in answering this question.

A: I would suggest using SSIS to copy the records to a temporary table, then create a task that uses Select Distinct or Rank, depending on your situation, to select the duplicates, which would funnel them to a flat file and delete them from the temporary table. The last step would be to copy the records from the temporary table into the destination table. Determining a duplicate is something SQL is good at, but a flat file is not as well suited for it. In the case you proposed, the script container would load a row and then would have to compare it against 17 million records, then load the next row and repeat... The performance might not be all that great.

A: Flat File Source --> Aggregate (Group By the columns you want unique) --> Flat File Destination

A: Use the Sort component. Simply choose which fields you wish to sort your loaded rows by, and in the bottom left corner you'll see a check box to remove duplicates. This box removes any rows which are duplicates based on the sort criteria only, so in the example below the rows would be considered duplicates if we only sorted on the first field: 1 | sample A | 1 | sample B |

A: The strategy will usually depend on how many columns the staging table has.
The more columns, the more complex the solution. The article you linked has some very good advice. The only thing that I will add to what everybody else has said so far is that columns with date and datetime values will give some of the solutions presented here fits. One solution that I came up with is this:

SET NOCOUNT ON

DECLARE @email varchar(100)
SET @email = (SELECT MIN(email) FROM StagingTable WITH (NOLOCK))

WHILE @email IS NOT NULL
BEGIN
    -- Do INSERT statement based on the email
    INSERT INTO StagingTable2 (Email)
    SELECT TOP 1 email FROM StagingTable WITH (NOLOCK) WHERE email = @email

    SET @email = (SELECT MIN(email) FROM StagingTable WITH (NOLOCK) WHERE email > @email)
END

This is a LOT faster when doing deduping than a CURSOR, and will not peg the server's CPU. To use this, separate each column that comes from the text file into its own variable. Use a separate SELECT statement before and inside the loop, then include them in the INSERT statement. This has worked really well for me.

A: To do this on the flat file, I use the unix command line tool sort:

sort -u inputfile > outputfile

Unfortunately, the Windows sort command does not have a unique option, but you could try downloading a sort utility from one of these:

* http://unxutils.sourceforge.net/
* http://www.highend3d.com/downloads/tools/os_utils/76.html

(I haven't tried them, so no guarantees, I'm afraid.) On the other hand, to do this as the records are loaded into the database, you could create a unique index on the key of the database table with IGNORE_DUP_KEY. This will make the records unique very efficiently at load time.

CREATE UNIQUE INDEX idx1 ON TABLE (col1, col2, ...) WITH IGNORE_DUP_KEY

A: A bit of a dirty solution is to set your target table up with a composite key that spans all columns. This will guarantee distinct uniqueness. Then on the Data Destination shape, configure the task to ignore errors. All duplicate inserts will fall off into oblivion.
A: We can use lookup tables for this. SSIS provides two data flow transformations for it: Fuzzy Grouping and Fuzzy Lookup.

A: I would recommend loading a staging table on the destination server and then merging the results into a target table on the destination server. If you need to run any hygiene rules, you could do this via a stored procedure, since you are bound to get better performance than through SSIS data flow transformation tasks. Besides, deduping is generally a multi-step process. You may want to dedupe on:

* Distinct lines.
* Distinct groups of columns like First Name, Last Name, Email Address, etc.
* An existing target table. If that's the case, then you may need to include NOT EXISTS or NOT IN statements. Or you may want to update the original row with new values. This is usually best served with a MERGE statement and a subquery for the source.
* The first or last row of a particular pattern. For instance, you may want the last row entered in the file for each occurrence of an email address or phone number. I usually rely on CTEs with ROW_NUMBER() to generate sequential-order and reverse-order columns, as in the following sample:
WITH sample_records ( email_address, entry_date, row_identifier )
AS (
    SELECT 'tester@test.com', '2009-10-08 10:00:00', 1
    UNION ALL
    SELECT 'tester@test.com', '2009-10-08 10:00:01', 2
    UNION ALL
    SELECT 'tester@test.com', '2009-10-08 10:00:02', 3
    UNION ALL
    SELECT 'the_other_test@test.com', '2009-10-08 10:00:00', 4
    UNION ALL
    SELECT 'the_other_test@test.com', '2009-10-08 10:00:00', 5
)
, filter_records ( email_address, entry_date, row_identifier, sequential_order, reverse_order )
AS (
    SELECT email_address
         , entry_date
         , row_identifier
         , 'sequential_order' = ROW_NUMBER() OVER (PARTITION BY email_address ORDER BY row_identifier ASC)
         , 'reverse_order'    = ROW_NUMBER() OVER (PARTITION BY email_address ORDER BY row_identifier DESC)
    FROM sample_records
)
SELECT email_address
     , entry_date
     , row_identifier
FROM filter_records
WHERE reverse_order = 1
ORDER BY email_address;

There are lots of options for you on deduping files, but ultimately I recommend handling this in a stored procedure once you have loaded a staging table on the destination server. After you cleanse the data, you can either MERGE or INSERT into your final destination.

A: Found this page link text might be worth looking at, although with 17 million records it might take a bit too long.
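The same keep-the-last-row-per-key logic as the reverse_order = 1 filter above can also be applied to the flat file before it ever reaches the database. A hedged Python sketch (the comma separator and key column position are assumptions about the file layout):

```python
def dedupe_keep_last(lines, key_index=0, sep=','):
    """Keep only the last occurrence of each key, in first-seen key order."""
    latest = {}  # key -> last line seen with that key
    order = []   # keys in the order they first appeared
    for line in lines:
        key = line.split(sep)[key_index]
        if key not in latest:
            order.append(key)
        latest[key] = line  # later rows overwrite earlier ones
    return [latest[k] for k in order]

rows = [
    "tester@test.com,2009-10-08 10:00:00",
    "tester@test.com,2009-10-08 10:00:02",
    "the_other_test@test.com,2009-10-08 10:00:00",
]
deduped = dedupe_keep_last(rows)
```

This is a single pass and keeps one line per key in memory, so 17 million rows are workable as long as the number of distinct keys fits in RAM.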
{ "language": "en", "url": "https://stackoverflow.com/questions/150760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I test if a list of files exist? I have a file that lists filenames, each on its own line, and I want to test whether each exists in a particular directory. For example, some sample lines of the file might be:

mshta.dll
foobar.dll
somethingelse.dll

The directory I'm interested in is X:\Windows\System32\, so I want to see if the following files exist:

X:\Windows\System32\mshta.dll
X:\Windows\System32\foobar.dll
X:\Windows\System32\somethingelse.dll

How can I do this using the Windows command prompt? Also (out of curiosity) how would I do this using bash or another Unix shell?

A: for /f %i in (files.txt) do @if exist "%i" (@echo Present: %i) else (@echo Missing: %i)

A: Bash:

while read f; do
    [ -f "$f" ] && echo "$f exists"
done < file.txt

A: In cmd.exe, the FOR /F %variable IN ( filename ) DO command should give you what you want. This reads the contents of filename (there could be more than one filename) one line at a time, placing each line in %variable (more or less; do a HELP FOR in a command prompt). If no one else supplies a command script, I will attempt it. EDIT: my attempt at a cmd.exe script that does the requested:

@echo off
rem first arg is the file containing filenames
rem second arg is the target directory
FOR /F %%f IN (%1) DO IF EXIST %2\%%f ECHO %%f exists in %2

Note, the script above must be a script; a FOR loop in a .cmd or .bat file, for some strange reason, must have double percent-signs before its variable.
Now, for a script that works with bash|ash|dash|sh|ksh:

filename="${1:?please specify filename containing filenames}"
directory="${2:?please specify directory to check}"
for fn in `cat "$filename"`
do
    [ -f "$directory"/"$fn" ] && echo "$fn exists in $directory"
done

A: In Windows:

type file.txt >NUL 2>NUL
if ERRORLEVEL 1 echo "file doesn't exist"

(This may not be the best way to do it; it is a way I know of; see also http://blogs.msdn.com/oldnewthing/archive/2008/09/26/8965755.aspx)

In Bash:

if test -e file.txt; then echo "file exists"; fi

A: Please note, however, that using the default file systems under both Win32 and *nix there is no way to guarantee the atomicity of the operation, i.e. if you check for the existence of files A, B, and C, some other process or thread might have deleted file A after you passed it and while you were looking for B and C. File systems such as Transactional NTFS can overcome this limitation.

A: I wanted to add one small comment to most of the above solutions. They are not actually testing whether a particular file exists; they are checking whether the file exists and you have access to it. It's entirely possible for a file to exist in a directory you do not have permission to, in which case you won't be able to view the file even though it exists.
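For completeness, the same check is a few lines of Python, which sidesteps the cmd/bash split entirely (the temporary directory in the demo is a throwaway stand-in for X:\Windows\System32):

```python
import os
import tempfile

def check_files(names, directory):
    """Return {filename: exists-as-a-file?} for each listed name."""
    results = {}
    for name in names:
        name = name.strip()
        if not name:
            continue  # ignore blank lines from the list file
        results[name] = os.path.isfile(os.path.join(directory, name))
    return results

# demo against a temporary directory containing one of the two files
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "mshta.dll"), "w").close()
report = check_files(["mshta.dll", "foobar.dll", ""], demo_dir)
for name, present in sorted(report.items()):
    print(("Present: " if present else "Missing: ") + name)
```

The same caveat as in the last answer applies: os.path.isfile reports what the calling process can see and access, not bare existence.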
{ "language": "en", "url": "https://stackoverflow.com/questions/150762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Are regexes really maintainable? Any code I've seen that uses regexes tends to use them as a black box:

* Put in string
* Magic regex
* Get out string

This doesn't seem a particularly good idea to use in production code, as even a small change can often result in a completely different regex. Apart from cases where the standard is permanent and unchanging, are regexes the way to do things, or is it better to try different methods?

A: I don't know which language you're using, but Perl - for example - supports the x flag, so spaces are ignored in regexes unless escaped, and you can break a regex into several lines and comment everything inline:

$foo =~ m{
    (some-thing)            # matches something
    \s*                     # matches any amount of spaces
    (match another thing)   # matches something else
}x;

This helps make long regexes more readable.

A: It only seems like magic if you don't understand the regex. Any number of small changes in production code can cause major problems, so that is not a good reason, in my opinion, to not use regexes. Thorough testing should point out any problems.

A: Small changes to any code in any language can result in completely different results. Some of them even prevent compilation. Substitute regex with "C" or "C#" or "Java" or "Python" or "Perl" or "SQL" or "Ruby" or "awk" or ... anything, really, and you get the same question. Regex is just another language, Huffman coded to be efficient at string matching. Just like Java, Perl, PHP, or especially SQL, each language has strengths and weaknesses, and you need to know the language you're writing in when you're writing it (or maintaining it) to have any hope of being productive. Edit: Mike, regexes are Huffman coded in that common things to do are shorter than rarer things. Literal matches of text are generally a single character (the one you want to match). Special characters exist - the common ones are short. Special constructs, such as (?:), are longer.
These are not the same things that would be common in general-purpose languages like Perl, C++, etc., so the Huffman coding was targeted at this specialisation.

A: Complex regexes are fire-and-forget for me. Write it, test it, and when it works, write a comment about what it does and we're fine. In many cases, however, you can break down regular expressions into smaller parts, and maybe write some well-documented code that combines these regexes. But if you find a multi-line regex in your code, you'd better not be the one who must maintain it :) Sounds familiar? That's more or less true of any code. You don't want very long methods, you don't want very long classes, and you don't want very long regular expressions, though methods and classes are by far easier to refactor. But in essence, it's the same concept.

A: Regexes aren't the ONLY way to do something. You can implement in ordinary code everything that a regular expression does. Regular expressions are just:

* Fast
* Tested and proven
* Powerful

A: Regexes can be very maintainable if you utilize new features introduced by Perl 5.10. The features I refer to are back-ported features from Perl 6. Example copied directly from perlretut.

Defining named patterns

Some regular expressions use identical subpatterns in several places. Starting with Perl 5.10, it is possible to define named subpatterns in a section of the pattern so that they can be called up by name anywhere in the pattern. The syntax for this definition group is (?(DEFINE)(?<name>pattern)...). An insertion of a named pattern is written as (?&name). The example below illustrates this feature using the pattern for floating point numbers that was presented earlier on. The three subpatterns that are used more than once are the optional sign, the digit sequence for an integer, and the decimal fraction. The DEFINE group at the end of the pattern contains their definition.
Notice that the decimal fraction pattern is the first place where we can reuse the integer pattern.

/^
   (?&osg)\ *
   ( (?&int)(?&dec)? | (?&dec) )
   (?: [eE](?&osg)(?&int) )?
 $
 (?(DEFINE)
   (?<osg>[-+]?)        # optional sign
   (?<int>\d++)         # integer
   (?<dec>\.(?&int))    # decimal fraction
 )/x

A: If regexes are long and impenetrable, making them hard to maintain, then they should be commented. A lot of regex implementations allow you to pad regexes with whitespace and comments. See https://www.regular-expressions.info/freespacing.html#parenscomment and Coding Horror: Regular Expressions: Now You Have Two Problems.

Any code I've seen that uses Regexes tends to use them as a black box:

If by black box you mean abstraction, that's what all programming is: trying to abstract away the difficult part (parsing strings) so that you can concentrate on the problem domain (what kind of strings do I want to match).

even a small change can often result in a completely different regex.

That's true of any code. As long as you are testing your regex to make sure it matches the strings you expect, ideally with unit tests, then you should be confident in changing them. Edit: please also read Jeff's comment to this answer about production code.

A: famous quote about regexes: "Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems." -- Jamie Zawinski

When I do use regexes, I find them to be maintainable, but they are used in special cases. There is usually a better, non-regex method for doing almost everything.

A: When used consciously, regular expressions are a powerful mechanism that spares you from lines and lines of text-parsing code. They should of course be documented correctly and efficiently tracked, in order to verify that initial assumptions are still valid, and otherwise update them accordingly.
Regarding maintenance, IMHO it is better to change a single line of code (the regular expression pattern) than to understand lines and lines of parsing code, or whatever else serves the regular expression's purpose.

A: Are regexes the way to do things? It depends on the task. As with all things programming, there isn't a hard and fast right or wrong answer. If a regexp solves a particular task quickly and simply, then it's possibly better than a more verbose solution. If a regexp is trying to achieve a complicated task, then something more verbose might be simpler to understand and therefore maintain.

A: There are a lot of possibilities to make regexes more maintainable. In the end it's just a technique a (good?) programmer has to learn when it comes to major (or sometimes even minor) changes. If there weren't some really good pros, no one would bother with them because of their complex syntax. But they are fast, compact and very flexible in doing their job. For .NET people, the "Linq to RegEx" library or the "Readable Regular Expressions Library" could be worth a look. They make regexes easier to maintain and easier to write. I used both of them in my own projects where I knew the HTML source code I analysed with them could change anytime. But trust me: when you cotton on to them, they can even be fun to write and read. :)

A: Obligatory. It really comes down to the regex. If it's a huge monolithic expression, then yes, it's a maintainability problem. If you can express them succinctly (perhaps by breaking them up), or if you have good comments and tools to help you understand them, then they can be a powerful tool.

A: I have a policy of thoroughly commenting non-trivial regexes. That means describing and justifying each atom that doesn't match itself. Some languages (Python, for one) offer "verbose" regexes that ignore whitespace and allow comments; use this whenever possible. Otherwise, go atom by atom in a comment above the regex.
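Python's counterpart to Perl's /x flag is the re.VERBOSE flag mentioned above; it permits exactly this atom-by-atom commenting. A small sketch:

```python
import re

# An optional sign followed by digits, justified atom by atom.
number = re.compile(r"""
    (?P<sign>   [-+]? )   # optional sign
    (?P<digits> \d+   )   # one or more digits
    """, re.VERBOSE)

match = number.match("-42")
```

One wrinkle: whitespace inside a character class is still significant under re.VERBOSE, so `[-+]` must be written without internal spaces.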
A: The problem is not with the regexes themselves, but rather with their treatment as a black box. As with any programming language, maintainability has more to do with the person who wrote it and the person who reads it than with the language itself. There's also a lot to be said for using the right tool for the job. In the example you mentioned in your comment to the original post, a regex is the wrong tool to use for parsing HTML, as is mentioned rather frequently over on PerlMonks. If you try to parse HTML in anything resembling a general manner using only a regex, then you're going to end up either doing it in an incorrect and fragile manner, writing a horrendous and unmaintainable monstrosity of a regex, or (most likely) both.

A: Your question doesn't seem to pertain to regular expressions themselves, but only the syntax generally used to express regular expressions. Among many hardcore coders, this syntax has come to be accepted as pretty succinct and powerful, but for longer regular expressions it is actually really unreadable and unmaintainable. Some people have already mentioned the "x" flag in Perl, which helps a bit, but not much. I like regular expressions a lot, but not the syntax. It would be nice to be able to construct a regular expression from readable, meaningful method names. For example, instead of this C# code:

foreach (var match in Regex.Matches(input, @"-?(?<number>\d+)"))
{
    Console.WriteLine(match.Groups["number"].Value);
}

you could have something much more verbose but much more readable and maintainable:

int number = 0;
Regex r = Regex.Char('-').Optional().Then(
    Regex.Digit().OneOrMore().Capture(c => number = int.Parse(c))
);
foreach (var match in r.Matches(input))
{
    Console.WriteLine(number);
}

This is just a quick idea; I know there are other, unrelated maintainability issues with this (although I would argue they are fewer and more minor). An extra benefit of this is compile-time verification.
Of course, if you think this is over the top and too verbose, you can still have a regular expression syntax that is somewhere in between. Perhaps, instead of:

-?(?<number>\d+)

you could have:

("-" or "") + (number = digit * [1..])

This is still a million times more readable and only twice as long. Such a syntax can easily be made to have the same expressive power as normal regular expressions, and it can certainly be integrated into a programming language's compiler for static analysis. I don't really know why there is so much opposition to rethinking the syntax for regular expressions even when entire programming languages are rethought (e.g. Perl 6, or when C# was new). Furthermore, the above very-verbose idea is not even incompatible with "old" regular expressions; the API could easily be implemented as one that constructs an old-style regular expression under the hood.

A: I use them in my apps, but I keep the actual regex pattern in a configuration file, so if the source text I'm parsing (an email, for example) changes format for some reason, I can quickly update the config to handle the change without rebuilding the app.

A: Regex has been referred to as a "write only" programming language for sure. However, I don't think that means you should avoid them. I just think you should comment the hell out of their intent. I'm usually not a big fan of comments that explain what a line does (I can read the code for that), but regexes are the exception. Comment everything!

A: I usually go to the extent of writing a scanner specification file. A scanner, or "scanner generator", is essentially an optimized text parser. Since I usually work with Java, my preferred method is JFlex (http://www.jflex.de), but there are also Lex, YACC, and several others. Scanners work on regular expressions that you can define as macros. Then you implement callbacks that run when the regular expressions match part of the text.
When it comes to the code, I have a specification file containing all the parsing logic. I run it through the scanner generator tool of choice to generate source code in the language of choice. Then I just wrap all that into a parser function or class of some sort. This abstraction makes it easy to manage all the regular expression logic, and it performs very well. Of course, it is overkill if you are working with just one or two regexps, and it easily takes at least 2-3 days to learn what the hell is going on, but if you ever work with, say, 5 or 6 or 30 of them, it becomes a really nice feature: implementing parsing logic starts to take only minutes, and it stays easy to maintain and easy to document.

A: I've always approached this issue as a building-block problem. You don't just write some 3000-character regex and hope for the best. You write a bunch of small chunks that you add together. For example, to match a URI, you have the protocol, authority, subdomain, domain, TLD, path, arguments (at least). And some of these are optional! I'm sure you could write one monster to handle it, but it's easier to write chunks and add them together.

A: I commonly split up the regex into pieces with comments, then put them all together for the final push. Pieces can be either substrings or array elements. Two PHP PCRE examples (the specifics of the particular use are not important):

1)

$dktpat = '/^[^a-z0-9]*'.           // skip any initial non-digits
          '([a-z0-9]:)?'.           // division within the district
          '(\d+)'.                  // year
          '((-)|-?([a-z][a-z])-?)'. // type of court if any - cv, bk, etc.
          '(\d+)'.
                                    // docket sequence number
          '[^0-9]*$/i';             // ignore anything after the sequence number

if (preg_match($dktpat, $DocketID, $m)) {

2)

$pat = array (
    'Row'        => '\s*(\d*)',
    'Parties'    => '(.*)',
    'CourtID'    => '<a[^>]*>([a-z]*)</a>',
    'CaseNo'     => '<a[^>]*>([a-z0-9:\-]*)</a>',
    'FirstFiled' => '([0-9\/]*)',
    'NOS'        => '(\d*)',
    'CaseClosed' => '([0-9\/]*)',
    'CaseTitle'  => '(.*)',
);

// wrap terms in table syntax
$pat = '#<tr>(<td[^>]*>'.
       implode('</td>)(</tr><tr>)?(<td[^>]*>', $pat).
       '</td>)</tr>#iUx';

if (preg_match_all($pat, $this->DocketText, $matches, PREG_PATTERN_ORDER))
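The array-of-pieces trick in example 2 carries over to any language. A Python sketch of the same idea, with made-up field patterns standing in for the docket fields:

```python
import re

# Each piece is short enough to read and comment on its own.
pieces = {
    "row":     r"(?P<row>\d+)",               # row number
    "case_no": r"(?P<case_no>[a-z0-9:\-]+)",  # docket-style identifier
    "filed":   r"(?P<filed>[0-9/]+)",         # date as digits and slashes
}

# Assemble the full pattern once, joining on the field separator.
record = re.compile(r"\s*\|\s*".join(pieces.values()), re.IGNORECASE)

m = record.search("17 | cv:150-753 | 10/08/2009")
```

Each field can now be tested, commented, and changed in isolation, and the monolithic pattern is only ever built mechanically.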
{ "language": "en", "url": "https://stackoverflow.com/questions/150764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Porting Android's Java VM to the iPhone? Does anyone know of any existing projects that aim to port Android's Java VM over to the iPhone? From what I understand, this wouldn't be too out of reach and would certainly make for some exciting developments. Edit: I should point out that I am aware this will not happen using the official iPhone SDK. However, a jailbroken platform would remove any Apple-imposed roadblocks. I imagine most who would be interested in integrating Android into the iPhone would also be the demographic that would typically have a jailbroken iPhone. A: Android Dalvik running on iOS: the "In the box" open source project shows on their website (www.in-the-box.org), as the first step of the project, an Android Dalvik VM running on iOS. (No need to jailbreak.) "In the box" is an open source project created to provide a port of the Gingerbread Android runtime on top of iOS. It enables Android application developers to execute their Android applications on iOS. Enjoy :-) A: As of now, there are no existing projects aiming to port Dalvik (the Android VM, which is not really a Java VM since it doesn't execute Java bytecode) to the iPhone. There is, however, at least one "real" Java VM available for the iPhone. You can find it in Cydia on jailbroken phones. The issue with these projects is that Apple doesn't allow third-party apps to execute downloaded code, so Java VMs can only run on jailbroken iPhones. A: There isn't currently an effort to port Dalvik to the iPhone because Google hasn't released the source yet. As soon as the source is released (assuming all of it will be) I would think this will happen. It's also likely to be seen on other homebrew platforms such as the PSP, Pandora, OpenMoko, etc. A: Apple's iPhone is a closed system. They control what is deployed, from the OS to the applications. They have said they have no intention of supporting a JVM.
This would have to be a rogue application outside of that control and therefore not very appealing to the masses. A: To be useful you'd also have to port the connection to Google's App Store. Yeah, Apple's gonna allow that. We're much more likely to see some iPhone-emulation tools for the Android.
{ "language": "en", "url": "https://stackoverflow.com/questions/150781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Side effects of calling RegisterClassEx multiple times with the same window class? I'm working on a little test application at the minute and I have multiple window objects floating around, and they each call RegisterClassEx with the same WNDCLASSEX structure (mainly because they are all instances of the same class). The first one registers OK, then subsequent ones fail, saying the class is already registered - as expected. My question is - is this bad? I was thinking of using a hash table to store the ATOM results in, to look up before calling RegisterClassEx, but it seems Windows does this already? A: If your window class is defined in a DLL, perhaps you should call your RegisterClass() in the PROCESS_ATTACH part of DllMain, and call your UnregisterClass() in the PROCESS_DETACH part of DllMain. If your window class is defined in the executable, perhaps you should call your RegisterClass() in main, before the message loop, and call your UnregisterClass() in main, after the message loop. Registering in an object constructor would be a mistake, because you would, by reflex, clean it up in the destructor. Should one of your windows be destroyed, the destructor will be called and... if you have other windows floating around... And using global data to count the number of active registrations will need proper synchronisation to be sure your code is thread-friendly, if not thread-safe. Why in main/DllMain? Because you're registering some kind of global object (at least, for your process). So it makes sense to have it initialized in a "global way", that is, either in main or in DllMain. Why is it not so evil? Because Windows will not fail just because you registered it more than once. Clean code would have used GetClassInfo() to ask if the class was already registered. But again, Windows won't crash (for this reason, at least). You can even avoid unregistering the window class, as Windows will clean window classes away when the process ends.
I saw conflicting info on MSDN blogs on the subject (two years ago... don't ask me to find the info again). Why is it evil anyway? My personal viewpoint is that you should cleanly handle your resources, that is, allocate them once and deallocate them once. Just because Win32 will clean up leaked memory should not stop you from freeing your dynamically allocated memory. The same goes for window classes. A: You can test if the window class was previously registered by calling GetClassInfoEx. If the function finds a matching class and successfully copies the data, the return value is nonzero. http://msdn.microsoft.com/en-us/library/ms633579(VS.85).aspx This way you can conditionally register the window class based on the return of GetClassInfoEx. A: You only need to call RegisterClassEx once and then use the resulting ATOM when you create your windows. There are no problems in reusing this ATOM for several windows. A: I had this problem recently. To get around it and keep all of the code in the class concerned, rather than spreading it around the program, I had a class-static reference count that I incremented before calling RegisterClass(). I then ignored return values of ERROR_CLASS_ALREADY_EXISTS and only called UnregisterClass() in the dtor when the reference count was 0. This didn't require any locking (use InterlockedIncrement() and InterlockedDecrement() to manage the reference count) and means that users of the class don't need to know or care that internally the class uses a hidden window. A: Well, you might be able to avoid a call down into the kernel - RegisterClass seems to need to get down there - but window classes are per-process and per-module, so you shouldn't hurt anything by registering a class multiple times. Given that there aren't generally that many classes, I wouldn't be too surprised to find it was actually implemented as a linked list.
You might get a little gain by looking it up in a hash table, but you'd probably be better off doing it as a simple boolean.
{ "language": "en", "url": "https://stackoverflow.com/questions/150803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Best way to track down a memory leak (C#) only visible on one customer's box What is the best way to track down a memory leak that is only found on one customer's test/release box, and nowhere else? A: dotTrace 3.1 (This question is kinda funny, cause I am tracking a mem leak that isn't present on my machine...) A: Try a memory profiler like ANTS Profiler. A: If the user has the problem consistently, take a stack dump and analyse it in the standard way A: Here's an option: Give them a box where the leak isn't present. Sometimes, it's not the code. Edit: It's either the code, the data, or the configuration. Or the .NET Framework, the OS, the drivers, IIS, or COM (automating Excel, for example), or so on. My assumption is that the memory leak is not reproducible except on the client's box (which the dev cannot be allowed to access for debugging). A: It's either code, data or configuration. Since you say the code is not faulty 100% of the time, I would blame configuration. Take a copy of the configuration (and optionally some data) and try to replicate the problem; you won't know you've found and fixed it without reproduction. Finally, solve it with a memory profiler. A: PerfMon can be helpful (http://dotnetdebug.net/2005/06/30/perfmon-your-debugging-buddy/). There are several counters that may help narrow down what resource is leaking, at what rate, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/150805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to handle an ActiveX event in Javascript This is somewhat of a follow-up to an answer here. I have a custom ActiveX control that is raising an event ("ReceiveMessage" with a "msg" parameter) that needs to be handled by Javascript in the web browser. Historically we've been able to use the following IE-only syntax to accomplish this on different projects: function MyControl::ReceiveMessage(msg) { alert(msg); } However, when inside a layout in which the control is buried, the Javascript cannot find the control. Specifically, if we put this into a plain HTML page it works fine, but if we put it into an ASPX page wrapped by the <Form> tag, we get a "MyControl is undefined" error. We've tried variations on the following: var GetControl = document.getElementById("MyControl"); function GetControl::ReceiveMessage(msg) { alert(msg); } ... but it results in the Javascript error "GetControl is undefined." What is the proper way to handle an event being sent from an ActiveX control? Right now we're only interested in getting this working in IE. This has to be a custom ActiveX control for what we're doing. Thanks. A: I have used activex in my applications before. i place the object tags in the ASP.NET form and the following JavaScript works for me. function onEventHandler(arg1, arg2){ // do something } window.onload = function(){ var yourActiveXObject = document.getElementById('YourObjectTagID'); if(typeof(yourActiveXObject) === 'undefined' || yourActiveXObject === null){ alert('Unable to load ActiveX'); return; } // attach events var status = yourActiveXObject.attachEvent('EventName', onEventHandler); } A: OK, but if you are using C# (.NET 2.0) with inherited UserControl (ActiveX)... The only way to make it work is by "Extending" the event's handler functionality: http://www.codeproject.com/KB/dotnet/extend_events.aspx?display=Print The above project link from our friend Mr. Werner Willemsens has saved my project. 
If you don't do that the javascript can't bind to the event handler. He used the "extension" in a complex way due to the example he chose, but if you make it simple, attaching the handler directly to the event itself, it also works. The C# ActiveX should support "ScriptCallbackObject" to bind the event to a javascript function like below:
var clock = new ActiveXObject("Clocks.clock");
var extendedClockEvents = clock.ExtendedClockEvents();
// Here you assign (subscribe to) your callback method!
extendedClockEvents.ScriptCallbackObject = clock_Callback;
...
function clock_Callback(time) {
    document.getElementById("text_tag").innerHTML = time;
}
Of course you have to implement IObjectSafety and the other security stuff to make it work better.
A: I was able to get this working using the following script block format, but I'm still curious if this is the best way:
<script for="MyControl" event="ReceiveMessage(msg)">
    alert(msg);
</script>
A: If you have an ActiveX element on your page that has an id of 'MyControl' then your javascript handler syntax is this:
function MyControl::ReceiveMessage(msg) {
    alert(msg);
}
A: I found this code works within a form tag. In this example, callback is a function parameter passed in by javascript to the ActiveX control, and callbackparam is a parameter of the callback event generated within the ActiveX control. This way I use the same event handler for whatever types of events, rather than try to declare a bunch of separate event handlers.
<object id="ActivexObject" name="ActivexObject" classid="clsid:15C5A3F3-F8F7-4d5e-B87E-5084CC98A25A"></object>
<script>
function document.ActivexObject::OnCallback(callback, callbackparam){
    callback(callbackparam);
}
</script>
A: In my case, I needed a way to dynamically create ActiveX controls and listen to their events. I was able to get something like this to work:
//create the ActiveX
var ax = $("<object></object>", {
        classid: "clsid:" + clsid,
        codebase: install ? cabfile : undefined,
        width: 0,
        height: 0,
        id: '__ax_' + idIncrement++
    })
    .appendTo('#someHost');
And then to register a handler for an event:
//this function registers an event listener for an ActiveX object (obviously for IE only)
//the this argument for the handler is the ActiveX object.
function registerAXEvent(control, name, handler) {
    control = jQuery(control);
    //can't use closures through the string due to the parameter renaming done by the JavaScript compressor
    //can't use jQuery.data() on ActiveX objects because it uses expando properties
    var id = control[0].id;
    var axe = registerAXEvent.axevents = registerAXEvent.axevents || {};
    axe[id] = axe[id] || {};
    axe[id][name] = handler;
    var script = "(function(){"+
        "var f=registerAXEvent.axevents['" + id + "']['" + name + "'],e=jQuery('#" + id + "');"+
        "function document." + id + "::" + name + "(){"+
            "f.apply(e,arguments);"+
        "}"+
        "})();";
    eval(script);
}
This code allows you to use closures and minimizes the scope of the eval(). The ActiveX control's <object> element must already be added to the document; otherwise, IE will not find the element and you'll just get script errors.
A: I think that the MyControl::ReceiveMessage example does not work because the ActiveX control is being exposed with a different name or in a different scope. With the example GetControl::ReceiveMessage, I believe that the function definition is being parsed before the GetControl reference is being set, thus it does not refer to a valid object and cannot bind the function to the object. I would attack this problem by using the MS script debugger and trying to determine if a default reference for the control exists with a different name or in a different scope (possibly as a child of the form). If you can determine the correct reference for the control, you should be able to bind the function properly with the Automagic :: method that the MSDN article specifies.
One more thought, the reference may be based on the name of the object and not the ID, so try setting both :)
{ "language": "en", "url": "https://stackoverflow.com/questions/150814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Strategies for Data Loading and DB Initialization for Integration Tests I'm developing an integration testing framework for a data-intensive J2EE enterprise application, and I'm trying to decide upon a strategy for initializing and populating the database. We have a fairly complex model. The system will have to:
* Initialize the system itself
* Load users
* Load application test data
The test data won't be as complex as the system will handle; load and stress testing is the domain of a specialized test team. We're interested in how well the UIs display what is in the system and that functions integrate correctly from top to bottom. A: DBUnit is a pretty good framework for loading data into a test database. A: Unitils provides support for both loading the test data and keeping the test DB schema up to date. In order for the latter to work, your schema change scripts need to follow a particular naming convention.
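DBUnit and Unitils are Java tools, but the strategy itself (initialize the schema, then load users and application test data into a throwaway database before each test) can be sketched in a few lines. This is an illustrative Python/sqlite3 sketch with made-up tables, not part of the original stack:

```python
import sqlite3

# Toy schema standing in for the "fairly complex model".
SCHEMA = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE applications (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    status TEXT NOT NULL
);
"""

# Per-test fixture data, analogous to a DBUnit flat dataset.
FIXTURES = {
    "users": [(1, "sue"), (2, "bob")],
    "applications": [(10, 1, "new")],
}

def fresh_test_db():
    """Build a fully initialized, populated database for one test."""
    conn = sqlite3.connect(":memory:")    # in-memory: cheap setup and teardown
    conn.executescript(SCHEMA)            # 1. initialize the system itself
    for table, rows in FIXTURES.items():  # 2. load users, 3. load test data
        marks = ",".join("?" * len(rows[0]))
        conn.executemany(f"INSERT INTO {table} VALUES ({marks})", rows)
    conn.commit()
    return conn

conn = fresh_test_db()
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])
```

Because every test gets its own freshly built database, tests cannot leak state into each other, which is the property both DBUnit and Unitils are ultimately providing.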
{ "language": "en", "url": "https://stackoverflow.com/questions/150820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Pentaho vs Microsoft BI Stack My company is heavily invested in the MS BI Stack (SQL Server Reporting Services, Analysis Services and Integration Services), but I want to have a look at what the seemingly most talked-about open-source alternative, Pentaho, is like. I've installed a version, and I got it up and running quite painlessly. So that's good. But I haven't really the time to start using it for actual work to get a thorough understanding of the package. Have any of you got any insights into the pros and cons of Pentaho vs MS BI, or any links to such comparisons? Much appreciated!
A: Warning -- there are numerous sites out there listing the numerous deficiencies, bugs, and annoyances with SSIS. Not sure why SSIS came out on top with the post -- but before you bet your project on it, look at what people have to say in the blogosphere. From my experience it's about 20:1 ranting about how horrible SSIS is to work with -- I can concur as well, currently looking for any alternative.
A: Great information here! I have not tried Pentaho but am planning on checking it out. I am a seasoned MS BI consultant, using it since 1998. SSIS is very fast and very powerful but the criticisms are spot on. I found the following issues with SSIS:
(1) It is hard to debug; you get cryptic errors that may not give you any hint about what and where the problem really is.
(2) Per a prior comment, it is the shittiest development environment ever! I have no clue what they are thinking.
(a) Create a table with 100 or more columns and put a merge join on it. Now go back in and try to make an update to the merge join (like pull a new column through). It can take several minutes, even on the fastest machine, after you click OK on the merge join to save your change. I have a huge dataflow with lots of wide records and many merge joins. Adding one column to the dataflow takes more than half a day.
I update a merge join and then have to go do something else and check back 5-10 minutes later to see if it is done. Microsoft's response to this is to break up your package into multiple packages, placing the data in a table or binary file between them. Well, if you are going to disk between all the steps, you may as well do the whole thing in SQL! One of the main purposes of an ETL tool is to do all this stuff in memory and avoid disk I/O.
(b) The designer outright crashes sometimes, losing all your work since the last save (I do Ctrl-S in my sleep now because of this).
(c) I had to figure out a hack and generate SSIS package XML in Excel for wide records. I have a healthcare client where 600+ column records are commonplace. If you try to define a file format with 600 columns in SSIS, you have to type in every single column, one at a time!!! Even MS Access allows you to cut and paste a layout from a spreadsheet into a file layout, but not SSIS. So I had to generate the XML from the layout and paste the XML code into the right place in the package. Ugly way to do it, but it saved entire days of work and lots of errors.
(d) Similar to (c), if you need to trim all your columns and you have, say, 600+ of them, guess what? In the derived column component, you have to type trim(column1) 600+ times! I now do all simple transforms like this in the SQL query that gets the data, since that can easily be generated from an Excel sheet.
(e) There are many quirky things: components that turn invisible; sometimes you open the package and all the components are completely re-arranged incoherently.
(f) The FTP feature, possibly one of the most common things you need in ETL, is weak and only supports plain vanilla FTP, which nobody uses. Everyone these days uses SFTP, FTPS, https, etc... So almost every implementation requires using a 3rd-party command-line-driven file transfer app the package has to call.
(g) Trying to CYA, similar to the ridiculous security in Windows Vista, Microsoft has made it exceedingly difficult to actually promote an SSIS package from one environment to another. It defaults to this stupid thing of "encrypting sensitive information with user key" security, which means it must run under the same account in the environment you are moving it to as in the environment where you developed it, something that is rarely the case. There are better ways to configure it, but it always tries to revert to this completely useless security protection.
(h) Lastly, most of these problems are now in their 3rd version, clearly indicating Microsoft has no plan to fix them.
(i) Debugging is not nearly as easy as in other languages.
SSIS still has a great many benefits, but not without some serious pain.
A: I reviewed multiple BI stacks while on a path to get off of Business Objects. A lot of my comments are preference. Both tool sets are excellent. Some things are how I prefer chocolate fudge brownie ice cream over plain chocolate. Pentaho has some really smart guys working with them, but Microsoft has been on a well-funded and well-planned path. Keep in mind MS is still the underdog in the database market. Oracle is king here. To be competitive MS has been giving away a lot of goodies when you buy the database and has been forced to reinvent its platform a couple of times. I know this is not about the database, but the DB battle has caused MS to give away a lot in order to add value to their stack.
1.) Platform: SQL Server doesn't run on Unix or Linux, so they are automatically excluded from this market. Windows is about the same price as some versions of Unix now. Windows is pretty cheap and runs fairly well now. It gives me about as much trouble as Linux.
2.) OLAP: Analysis Services was reinvented in 2005 (the current version is 2008) compared to the 2000 version. It is an order of magnitude more powerful than 2000. The Pentaho engine (Mondrian) is not as fast once you get big. It also has fewer features.
It is pretty good, but there are fewer tools. Both support Excel as the platform, which is essential. The MS version is more robust.
3.) ETL: MS - DTS has been replaced with SSIS. Again, an order of magnitude increase in speed, power, and ability. It controls any and all data movement and program control. If it can't do it, you can write a script in PowerShell. On par with Informatica in the 2008 release. Pentaho - much better than it used to be. Not as fast as I would like, but I can do just about everything I want to do.
4.) Dashboards: Pentaho has improved this. It is sort of uncomfortable and unfriendly to develop, but there is really no real equivalent from MS.
5.) Reports: MS reporting is really powerful but not all that hard to use. I like it now but hated it at first, until I got to know it a little better. I had been using Crystal Reports, and the MS report builder is much more powerful. It is easy to do hard things in MS, but a little harder to do easy things. Pentaho is a little clumsy. I didn't like it at all, but you might. I found it to be overly complex. I wish it was either more like the Crystal report builder or the MS report builder, but it is Jasper-like. I find it to be hard. That may be a preference.
6.) Ad hoc: MS - this was the real winner for me. I tested it with my users and they instantly fell in love with the MS user report builder. What made the difference was how it was not just easy to use, but also productive. Pentaho - it is good but pretty old school. It uses the more typical wizard-based model and has powerful tools, but I hate it. It is an excellent tool for what it is, but we have moved on from this style and no one wants to go back. Same problem I had with LogiXML. The interface worked well for what it was, but is not really much of a change from what we used 12 years ago.
http://wiki.pentaho.com/display/PRESALESPORTAL/Methods+of+Interactive+Reporting There are some experienced people out there that can make Pentaho really run well; I just found the MS suite to be more productive. A: I started using MS Reporting Services many years ago and just love it. I've not tried Pentaho's reporting solution so I can't comment on it. Nor have I tried either Analysis Services or Pentaho's alternative. Recently I needed an ETL solution, and being familiar with MSSQL and MSRS it seemed obvious that I would review and probably choose MS Integration Services. But for me, SSIS was awful. Mostly because it was not intuitive. After spending a couple of days trying to learn the tool I decided to look for an alternative and came across Pentaho Data Integration, formerly known as Kettle. I had it up and running within minutes and immediately created my first transformation. It just works. Admittedly my needs are fairly simple, but performance has been great and the community seems very helpful. A: I have used SSIS and Pentaho Kettle, and I would highly recommend using Pentaho Kettle for your ETL tool instead of SSIS. My reasons:
- The flow of SSIS is task to task. Kettle makes you think about rows of data flowing through the system. Kettle's approach seems much more intuitive to me.
- SSIS is poorly documented. This happens. But there seems to be a lot of nook-and-cranny clicking and setting of variables. Very complex. Pentaho has a community forum which is quite helpful.
- I trust Pentaho to integrate with multiple types of databases, including SQL Server. You can also use JDBC, which is nice. Also, I've used it to go between SQL Server and Oracle on one side and Vertica on the other. It has a bulk loader available for Vertica. That's quite nice.
- I have found it very, very hard, relatively speaking, to get an SSIS package to run on a server. It just wasn't worth my time.
- I found it quite easy for Pentaho to mail a warning or error message to a person or list of people.
- Pentaho allows tasks to be done in JavaScript for things that need some logic. Simple and easily done with a language most of us have come across.
A: I can't offer any input on the MS BI Stack, but at the most recent Barcamp Orlando the folks from Pentaho were there and spoke about their products, and it was an extremely impressive demo. The fact that it's an Open Source project that you can extend yourself, as well as a paid package for really good service, leaves you with a lot of options. They demonstrated some paid work they did for a client and they definitely wow'd the crowd. I also had a chance to chat a little bit with a developer working on the data warehousing side of things for Pentaho, and he was extremely sharp and was very open to suggestions and had no problems answering any questions. So as far as a company goes, Pentaho really impressed me with both their work and how friendly and approachable all of their developers were.
A: A couple of points to add:
* Although there is a Windows version of all Pentaho tools, the setup in Windows is onerous. Pentaho (especially the server start and stop, which is separate from the GUI tool) is typically used in Linux shops, not Windows shops, and there is a steep learning curve going from Windows to Linux.
* Any tool has a learning curve when you shift to it. When you get used to always clicking OK and refreshing metadata when you have problems, SSIS isn't that bad. Pentaho can be flaky, too. Tool questions need to be addressed in terms of larger cultural questions - what kind of shops use open source tools? In my experience I've found that although Microsoft shops seem more rigid, when you have trouble with a connection string in a Microsoft shop you can get help... in Pentaho and Linux shops it's more DIY. BTW, watch out for Pentaho sales guys doing demos - all the things they show are a lot harder to get working than it seems!
:) A: If you are looking for a robust, low cost alternative to the big boys LogiXML has dashboarding and ad hoc reporting on a .NET platform. We've been using them since late 2006 when Pentaho was just starting, but I haven't looked at it in awhile. A: I recently tried pentaho open source BI. I found it to be extremely clumsy. It was not very intuitive and development time took much longer. It is quite different from either Oracle or ms BI solutions. Maybe the enterprise edition is better.
{ "language": "en", "url": "https://stackoverflow.com/questions/150825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How can I create this action link? I'm having issues creating an ActionLink using Preview 5. All the docs I can find describe the older generic version. I'm constructing links on a list of jobs on the page /jobs. Each job has a guid, and I'd like to construct a link to /jobs/details/{guid} so I can show details about the job. My jobs controller has an Index action and a Details action. The Details action takes a guid. I've tried this:
<%= Html.ActionLink(job.Name, "Details", job.JobId); %>
However, that gives me the url "/jobs/details". What am I missing here?
Solved, with your help. Route (added before the catch-all route):
routes.Add(new Route("Jobs/Details/{id}", new MvcRouteHandler()) {
    Defaults = new RouteValueDictionary(new { controller = "Jobs", action = "Details", id = new Guid() })
});
Action link:
<%= Html.ActionLink(job.Name, "Details", new { id = job.JobId }) %>
Results in the html anchor: http://localhost:3570/WebsiteAdministration/Details?id=2db8cee5-3c56-4861-aae9-a34546ee2113
So, it's confusing routes. I moved my jobs route definition before the website admin one and it works now. Obviously, my route definitions SUCK. I need to read more examples. A side note: my links weren't showing, and quickwatches weren't working (you can't quickwatch an expression with an anonymous type), which made it much harder to figure out what was going on here. It turned out the action links weren't showing because of a very minor typo:
<% Html.ActionLink(job.Name, "Details", new { id = job.JobId })%>
That's gonna get me again.
A: Give this a shot:
<%= Html.ActionLink(job.Name, "Details", new { guid = job.JobId }); %>
Where "guid" is the actual name of the parameter in your route. This instructs the routing engine that you want to place the value of the job.JobId property into the route definition's guid parameter.
A: Have you defined a route to handle this in your Global.asax.cs file? The default route is {controller}/{action}/{id}.
You are passing "JobID", which the framework won't map to "id" automatically. You either need to change this to be job.id or define a route to handle this case explicitly.
{ "language": "en", "url": "https://stackoverflow.com/questions/150845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to get JRE/JDK with matching source? I'd like to get at least one JRE/JDK level on my Windows machine where I have the JRE/JDK source that matches the exact level of the JRE/JDK. My purpose is to be able to go into the system classes while debugging. Any suggestions about how to do this? Thanks in advance. A: Most of the useful source will be in the src.zip file in your JDK. You can get source up to jdk 6u3 from jdk6.dev.java.net. On Linux you can get OpenJDK source and packages from openjdk.java.net. A: I had this problem for a long time; the source-download site must just not have been maintained for a while there. It seems fixed now, though: http://download.java.net/jdk6/6u10/archive/ (Has links for all the JDK 6 source downloads, not just 6u10.) A: The source code is included in the JDK 1.5+ installer. Just make sure that the option is not unchecked while installing. A: Just install the JDK. It will install a private JRE too and the source will match. If you need a specific JDK, see here: http://java.sun.com/products/archive/ A: If you're using eclipse, you can bind the JDK to its source if it is not done automatically. This is done in Window > Preferences > Java > Installed JREs. You edit one of the listed JRE/JDK and for each jar on 'System libraries' you indicate what's the src (you can set the zip that comes with the JDK for instance). This way you can debug any JDK class.
{ "language": "en", "url": "https://stackoverflow.com/questions/150849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What IDEs and tools are available for C language development? Looking at learning some C since I saw in another SO question that it is good to learn, for the language and for the historical experience. Wondering what IDEs professionals use and what other tools are useful while programming in C? A: I have always been fond of Code::Blocks. It's a wonderful C/C++ IDE, with several helpful addons. As for a compiler, I've always used MinGW, but I hear the DigitalMars C/C++ compiler is good. A: I actually use Vim when editing C code, so I don't really know about C IDEs. I often use a couple of tools to help though:
* Ctags: generates tag files for source code
* Make: build automation
* GDB: the GNU debugger
* GCC: the GNU C compiler
A: You can play with Eclipse; it is not the best one for C, but it works. For a compiler I would use GNU gcc. For tools, look at Cscope and gdb (debugger). If you don't care for extra baggage, go with Microsoft Visual C++ Express Edition, but do keep in mind there is lots of extra stuff in there... A: If you use Windows I suggest using Visual Studio. There's a free Express Edition here, but there is a downside - Visual C++ has a lot of "added functionality" for Win32 and .NET development. These added features might be confusing when trying to focus on C. I learned using Borland's Turbo C. It was a while back, though. A: I use Cygwin as my development environment and Notepad++ as an editor; I prefer sets of simple applications that each do one thing rather than massive complicated IDEs. Visual Studio is particularly problematic in this sense; not only is it very C++-oriented, but it's completely overwhelming to newer programmers due to its sheer mass of features. MSVC also lacks support for most of the C99 standard, which can be very annoying when programming in C. For example, you have to declare all variables at the top of code blocks. A: A favorite of mine is SlickEdit.
A comprehensive IDE, one of the first apps to have C and C++ function hints (think IntelliSense), works with GCC or almost every C/C++ compiler out there, will help you manage a makefile or let you do it all yourself; fast, clean, and all in all slick. Integrates with almost any version control server as well. Fully configurable, has C/C++ refactoring, and will read/import almost any other project type out there. Of course, you have to pay for it, but if you want a good experience, you usually do. Alternatively, there are many, many free code development tools out there like Eclipse, TextPad, Code::Blocks and EditPad, all with various levels of project integration. Most of Microsoft's development apps are available with their Visual Studio Express apps, if that's your cup of tea. Of course, let's not forget the classics: Vi and Emacs. People have developed with these tools for a long, long time. A: If you develop on the Windows platform, the Zeus editor has support for the C language. A: NetBeans provides a fairly slick C/C++ development environment. Excellent for anyone who is already familiar with NB for Java, Ruby, or PHP development. Provides many of the same features as Visual Studio, Borland, or CodeWarrior (are they still around?) but without being tied to the proprietary libraries. It also provides for a portable development environment so you get a consistent workflow and toolset between platforms. Of course, a properly configured Vim with the GNU compiler tools can provide a pretty slick experience. You don't get popups and a GUI, but it can automate the build process and even jump to errors in your code.
{ "language": "en", "url": "https://stackoverflow.com/questions/150876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What can cause mutated Word document attachments? We are sending out Word documents via email (automated system, not by hand). The email is sent to the user, and CC'd to me. We are getting reports that some users are having the attachments come through corrupted, though when we open the copy that is CC'd to me, it opens fine. When the user forwards us the copy they received, then we cannot open it. Below is a hex comparison of the two files. Can anyone identify what is going on here? Message headers are below:
Return-Path: <info@example.co.nz>
Received: from animal.hosts.net.nz (root@localhost) by example.co.nz (8.12.11/8.12.11) with ESMTP id m8T52Mw6021168; Mon, 29 Sep 2008 18:02:22 +1300
X-Clientaddr: 210.48.108.196
Received: from marjory.hosts.net.nz (marjory.hosts.net.nz [210.48.108.196]) by animal.hosts.net.nz (8.12.11/8.12.11) with ESMTP id m8T52EvU028021; Mon, 29 Sep 2008 18:02:19 +1300
Received: from example.example.co.nz ([210.48.67.48]) by marjory.hosts.net.nz with esmtp (Exim 4.63) (envelope-from <info@example.co.nz>) id 1KkAtd-0004Ch-I9; Mon, 29 Sep 2008 18:02:09 +1300
Received: from localhost ([127.0.0.1]) by example.example.co.nz with esmtp (Exim 4.63) (envelope-from <info@example.co.nz>) id 1KkAtV-0001C3-4s; Mon, 29 Sep 2008 18:02:01 +1300
From: "XXX" <info@example.co.nz>
To: "Sue" <sue@example.co.nz>
Reply-To: jayar_navarro@example.com
Subject: XXX: new application received
Date: Mon, 29 Sep 2008 18:02:01 +1300
Content-Type: multipart/mixed; charset="utf-8"; boundary="=_5549133ca51ec83196e2cfd28dad40f7"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline
MIME-Version: 1.0
Message-ID: <E1KkAtV-0001C3-4s@example.example.co.nz>
I think I know what it is, but not why it is happening. The header "X-Mimeole: Produced By Microsoft Exchange V6.5" shows the client is using Exchange. Now, compare these lines.
The original:
Content-Type: multipart/mixed; charset="utf-8"; boundary="=_5549133ca51ec83196e2cfd28dad40f7"
What they get:
Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01C92270.6BBA3EE6"
The missing charset="UTF-8" likely means that the client will fall back to Windows-1252, which I think (can someone confirm?) results in corrupted attachments. Now the question is, why would the charset be stripped? A: Not sure what happens, but have you tried a compressed file? That sometimes solves the problem of corrupted email attachments. A: The first 3 characters are missing in the corrupted one - compare:
// Your correct version
00000BC0 0D 0D 0D 41
// Their corrupted one
00000BC0 D0 D4 1...
Either their mail server, mail program, anti-virus or some such program has removed the first few chars, which seems to be causing the confusion when Word tries to open it. The fact that the file is still garbled when they send it back to you confirms that something is altering the file on their side once received.
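To illustrate the "leading bytes were dropped" theory from the last answer, here is a small sketch (illustrative Python, not part of the original thread; the sample bytes are made up) that locates how many leading bytes a mangled copy is missing by sliding it along the known-good copy:

```python
def find_drop_offset(good: bytes, corrupted: bytes, max_drop: int = 16):
    """Return n if `corrupted` equals `good` with its first n bytes removed,
    else None. A quick way to confirm a gateway ate the start of a file."""
    for n in range(max_drop + 1):
        if good[n:n + len(corrupted)] == corrupted:
            return n
    return None

good = bytes.fromhex("0d0d0d41aabbcc")   # stand-in for the intact document
bad = good[3:]                           # simulate 3 leading bytes being eaten
print(find_drop_offset(good, bad))       # → 3
```

In practice you would run this over the first few hundred bytes of the CC'd copy against the forwarded copy; a non-zero result points at whatever sits between the sender and the user's mailbox.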
{ "language": "en", "url": "https://stackoverflow.com/questions/150881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Socket programming: Do some ISPs impose rate-limiting on FTP uploads? I'm currently trying to debug a customer's issue with an FTP upload feature in one of our products. The feature allows customers to upload files (< 1MB) to a central FTP server for further processing. The FTP client code was written in-house in VB.NET. The customer reports that they receive "Connection forcibly closed by remote host" errors when they try to upload files in the range of 300KB to 500KB. However, we tested this in-house with much larger files (relatively speaking), i.e. 3MB and up, and never received this error. We uploaded to the same FTP server that the client connects to using the same FTP logon credentials, the only difference being that we did it from our office. I know that the TCP protocol has flow-control built-in, so it shouldn't matter how much data is sent in a single Send call, since the protocol will throttle itself accordingly to match the server's internal limits (if I remember correctly...) Therefore, the only thing I can think is that an intermediate host between the client and the router is artificially rate-limiting the client and disconnecting it (we send the file data in a loop in 512-byte chunks). This is the loop that is used to send the data (buffer is a Byte array containing the file data):
For i = 0 To buffer.Length - 1 Step 512
    mDataSocket.Send(buffer, i, 512, SocketFlags.None)
    OnTransferStatus(i, buffer.Length)
Next
Is it possible that the customer's ISP (or their own firewall) is imposing an artificial rate-limit on how much data our client code can send within a given period of time? If so, what is the best way to handle this situation? I guess the obvious solution would be to introduce a delay in our send loop, unless there is a way to do this at the socket level. It seems really odd to me that an ISP would handle a rate-limit violation by killing the client connection.
Why wouldn't they just rely on TCP/IP's internal flow-control/throttling mechanism? A: Do a search for Comcast and BitTorrent. Here's one article. A: Try to isolate the issue: * *Let the customer upload the same file to a different server. Maybe the problem is with the client's ... FTP client. *Get the file from the client and upload it yourself with your client and see if you can repro the issue. In the end, even if a 3MB file works fine, a 500KB file isn't guaranteed to work, because the issue could be state-dependent and happening while ending the file transfer. A: Yes, ISPs can impose limits on packets as they see fit (although it is ethically questionable). My ISP for example has no problem cutting any P2P traffic its hardware manages to sniff out. It's called traffic shaping. However, for FTP traffic this is highly unlikely, but you never know. The thing is, they never drop your sockets with traffic shaping, they only drop packets. The TCP protocol is handled on each peer's side, so you can drop all the packets in between and the socket stays alive. In some instances, if one of the computers crashes, the socket remains alive as long as you don't try to use it. I think your best bet is a bad firewall/proxy configuration on the client side. Better explanations here. Either that, or a faulty or badly configured router or cable on the client installations. A: 500k is awfully small these days, so I'd be a little surprised if they throttle something that small. I know you're already chunking your request, but can you determine if any data is transferred? Does the code always fail at the same loop point? Are you able to look at the FTP server logs? What about an entire stack trace? Have you tried contacting the ISP and asking them what policies they have? That said, assuming that some data makes it through, one thought is that the ISP has traffic shaping and the rules engage after x bytes have been written.
What could be happening is that past x bytes the socket timeout expires before the data is sent, throwing an exception. Keep in mind FTP clients create another connection for data transfer, but if the server detects the control connection is closed, it will typically kill the data transfer connection. So another thing to check is to ensure the control connection is still alive. Lastly, FTP servers usually support resumable transfers, so if all other remedies fail, resuming the failed transfer might be the easiest solution. A: I don't think the ISP would try to kill a 500KB file transfer. I'm no expert on sockets or ISPs... just giving my thoughts on the matter.
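One detail worth flagging in the question's send loop: sending a fixed 512 bytes per call reads past the end of the buffer whenever the file size is not an exact multiple of 512, and the .NET Socket.Send overload that takes an offset and size throws when offset + size runs past the buffer. A safe chunking pattern clamps the final chunk; sketched here in Python for brevity rather than the thread's VB.NET:

```python
def chunks(buffer: bytes, size: int = 512):
    """Yield successive chunks of `buffer`; the last one may be shorter
    than `size`, so a send loop never reads past the end."""
    for i in range(0, len(buffer), size):
        yield buffer[i:i + size]

# A 1300-byte "file" splits into two full chunks and one short tail.
sizes = [len(c) for c in chunks(bytes(1300))]
print(sizes)  # → [512, 512, 276]
```

In the VB.NET loop the equivalent fix is passing Math.Min(512, buffer.Length - i) as the size argument to Send.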
{ "language": "en", "url": "https://stackoverflow.com/questions/150886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Remove Duplicates with Caveats I have a table with rowID, longitude, latitude, businessName, url, caption. This might look like:
rowID | long | lat | businessName | url     | caption
1     | 20   | -20 | Pizza Hut    | yum.com | null
How do I delete all of the duplicates, but only keep the one that has a URL (first priority), or keep the one that has a caption if the other doesn't have a URL (second priority) and delete the rest? A: This solution is brought to you by "stuff I've learned on Stack Overflow" in the last week:
DELETE restaurant
WHERE rowID IN
    (SELECT rowID FROM restaurant
     EXCEPT
     SELECT rowID FROM
        (SELECT rowID,
                Rank() OVER (PARTITION BY BusinessName, lat, long
                             ORDER BY url DESC, caption DESC) AS Rank
         FROM restaurant) rs
     WHERE Rank = 1)
Warning: I have not tested this on a real database. A: Here's my looping technique. This will probably get voted down for not being mainstream - and I'm cool with that.
DECLARE @LoopVar int
DECLARE @long int, @lat int, @businessname varchar(30), @winner int

SET @LoopVar = (SELECT MIN(rowID) FROM Locations)

WHILE @LoopVar IS NOT NULL
BEGIN
    -- initialize the variables.
    SELECT @long = null, @lat = null, @businessname = null, @winner = null

    -- load data from the known good row.
    SELECT @long = long, @lat = lat, @businessname = businessName
    FROM Locations
    WHERE rowID = @LoopVar

    -- find the winning row with that data.
    SELECT TOP 1 @winner = rowID
    FROM Locations
    WHERE @long = long AND @lat = lat AND @businessname = businessName
    ORDER BY
        CASE WHEN url IS NOT NULL THEN 1 ELSE 2 END,
        CASE WHEN caption IS NOT NULL THEN 1 ELSE 2 END,
        rowID

    -- delete any losers.
    DELETE FROM Locations
    WHERE @long = long AND @lat = lat AND @businessname = businessName
      AND @winner != rowID

    -- prep the next loop value.
    SET @LoopVar = (SELECT MIN(rowID) FROM Locations WHERE @LoopVar < rowID)
END
A: Set-based solution:
delete from T as t1
where /* delete if there is a "better" row with same long, lat and businessName */
exists(
    select *
    from T as t2
    where t1.rowID <> t2.rowID
      and t1.long = t2.long
      and t1.lat = t2.lat
      and t1.businessName = t2.businessName
      and case when t1.url is null then 0 else 4 end      /* 4 points for non-null url */
        + case when t1.caption is null then 0 else 2 end  /* 2 points for non-null caption */
        + case when t1.rowID < t2.rowID then 1 else 0 end /* 1 point for having the smaller rowID */
        < case when t2.url is null then 0 else 4 end
        + case when t2.caption is null then 0 else 2 end
        + case when t2.rowID < t1.rowID then 1 else 0 end
)
A:
delete MyTable
from MyTable
left outer join (
    select min(rowID) as rowID, long, lat, businessName
    from MyTable
    where url is not null
    group by long, lat, businessName
) as HasUrl
    on MyTable.long = HasUrl.long
    and MyTable.lat = HasUrl.lat
    and MyTable.businessName = HasUrl.businessName
left outer join (
    select min(rowID) as rowID, long, lat, businessName
    from MyTable
    where caption is not null
    group by long, lat, businessName
) HasCaption
    on MyTable.long = HasCaption.long
    and MyTable.lat = HasCaption.lat
    and MyTable.businessName = HasCaption.businessName
left outer join (
    select min(rowID) as rowID, long, lat, businessName
    from MyTable
    where url is null and caption is null
    group by long, lat, businessName
) HasNone
    on MyTable.long = HasNone.long
    and MyTable.lat = HasNone.lat
    and MyTable.businessName = HasNone.businessName
where MyTable.rowID <> coalesce(HasUrl.rowID, HasCaption.rowID, HasNone.rowID)
A: Similar to another answer, but you want to delete based on row number rather than rank.
Mix with common table expressions as well:
;WITH GroupedRows AS (
    SELECT rowID,
           Row_Number() OVER (PARTITION BY BusinessName, lat, long
                              ORDER BY url DESC, caption DESC) rowNum
    FROM restaurant
)
DELETE r
FROM restaurant r
JOIN GroupedRows gr ON r.rowID = gr.rowID
WHERE gr.rowNum > 1
A: If possible, can you homogenize, then remove duplicates? Step 1:
UPDATE BusinessLocations
SET BusinessLocations.url = LocationsWithUrl.url
FROM BusinessLocations
INNER JOIN (
    SELECT long, lat, businessName, url, caption
    FROM BusinessLocations
    WHERE url IS NOT NULL) LocationsWithUrl
    ON BusinessLocations.long = LocationsWithUrl.long
    AND BusinessLocations.lat = LocationsWithUrl.lat
    AND BusinessLocations.businessName = LocationsWithUrl.businessName

UPDATE BusinessLocations
SET BusinessLocations.caption = LocationsWithCaption.caption
FROM BusinessLocations
INNER JOIN (
    SELECT long, lat, businessName, url, caption
    FROM BusinessLocations
    WHERE caption IS NOT NULL) LocationsWithCaption
    ON BusinessLocations.long = LocationsWithCaption.long
    AND BusinessLocations.lat = LocationsWithCaption.lat
    AND BusinessLocations.businessName = LocationsWithCaption.businessName
Step 2: Remove duplicates.
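Outside SQL, the keep-one priority that the ranking answers encode (url beats caption, caption beats neither, ties broken by lowest rowID) can be stated in a few lines. This Python sketch only illustrates the selection rule; it is not part of the thread:

```python
def pick_survivor(rows):
    """From a group of duplicate rows, keep the one with a url (first
    priority), else the one with a caption, else the lowest rowID.
    False sorts before True, so rows with a value present win under min()."""
    return min(rows, key=lambda r: (r["url"] is None,
                                    r["caption"] is None,
                                    r["rowID"]))

dupes = [
    {"rowID": 1, "url": None,      "caption": None},
    {"rowID": 2, "url": None,      "caption": "storefront"},
    {"rowID": 3, "url": "yum.com", "caption": None},
]
print(pick_survivor(dupes)["rowID"])  # → 3
```

Every row whose rowID is not the survivor's forms the deletion set, which is exactly what the Rank() = 1 and Row_Number() = 1 filters compute.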
{ "language": "en", "url": "https://stackoverflow.com/questions/150891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Windows Forms UserControl overrides not being called I am creating a Windows Forms control derived from UserControl to be embedded in a WPF app. I have generally followed the procedures given in this link. public ref class CTiledImgViewControl : public UserControl { ... virtual void OnPaint( PaintEventArgs^ e ) override; ... }; And in my CPP file: void CTiledImgViewControl::OnPaint( PaintEventArgs^ e ) { UserControl::OnPaint(e); // do something interesting... } Everything compiles and runs, however the OnPaint method is never getting called. Any ideas of things to look for? I've done a lot with C++, but am pretty new to WinForms and WPF, so it could well be something obvious... A: The OnPaint won't normally get called in a UserControl unless you set the appropriate style when it is constructed using the SetStyle method. You need to set the UserPaint style to true for the OnPaint to get called. SetStyle(ControlStyles::UserPaint, true); Update I recently encountered this issue myself and went digging for an answer. I wanted to perform some calculations during a paint (to leverage the unique handling of paint messages) but I wasn't always getting a call to OnPaint. After digging around with Reflector, I discovered that OnPaint is only called if the clipping rectangle of the corresponding WM_PAINT message is not empty. My UserControl instance had a child control that filled its entire client region and therefore, clipped it all. This meant that the clipping rectangle was empty and so no OnPaint call. I worked around this by overriding WndProc and adding a handler for WM_PAINT directly as I couldn't find another way to achieve what I wanted. A: I solved the issue, in case anyone is interested. It was because my WinForms control was embedded in a ViewBox. I changed it to a grid and immediately started getting paint events. I guess when asking questions about WPF, you should always include the XAML in the question!
{ "language": "en", "url": "https://stackoverflow.com/questions/150900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .Net Regex ValidationExpression ASCII Anyone know a good Regex expression to drop in the ValidationExpression to be sure that my users are only entering ASCII characters? <asp:RegularExpressionValidator id="myRegex" runat="server" ControlToValidate="txtName" ValidationExpression="???" ErrorMessage="Non-ASCII Characters" Display="Dynamic" /> A: One thing you may want to watch out for is that the lower part of the ASCII table has a lot of control characters which can cause funky results. Here's the expression I use to only allow "non-funky" characters (printable ASCII plus tab/CR/LF): ^([\x0d\x0a\x20-\x7e\t]*)$ A: If you want to allow the full possible 0x00 - 0xff byte range you can use this regular expression (.NET). ^([\x00-\xff]*)$
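For a quick check of the "printable ASCII plus whitespace" idea outside ASP.NET, here is an illustrative Python sketch (Python's regex engine is not .NET's, but this particular character class behaves the same in both):

```python
import re

# Allow printable ASCII (\x20-\x7e) plus tab, CR and LF; the anchors force
# the whole input to consist of these characters.
ascii_ok = re.compile(r'^[\x0d\x0a\x20-\x7e\t]*$')

print(bool(ascii_ok.match("Hello, world!")))  # → True
print(bool(ascii_ok.match("naïve")))          # → False
```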
{ "language": "en", "url": "https://stackoverflow.com/questions/150901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Hibernate -- load an object based on a member object's field How can an object be loaded via Hibernate based on a field value of a member object? For example, suppose the following classes existed, with a one-to-one relationship between bar and foo: Foo { Long id; } Bar { Long id; Foo aMember; } How could one use Hibernate Criteria to load Bar if you only had the id of Foo? The first thing that leapt into my head was to load the Foo object and set that as a Criterion to load the Bar object, but that seems wasteful. Is there an effective way to do this with Criteria, or is HQL the way this should be handled? A: You can absolutely use Criteria in an efficient manner to accomplish this: session.createCriteria(Bar.class). createAlias("aMember", "a"). add(Restrictions.eq("a.id", fooId)); ought to do the trick. A: You can use Criteria or HQL. HQL example: Query query = session.createQuery("from Bar as bar where bar.aMember.id = :fooId"); query.setParameter("fooId", fooId); List result = query.list();
{ "language": "en", "url": "https://stackoverflow.com/questions/150902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I password protect IIS in a method analogous to Apache's AuthType / AuthUserFile mechanism? I'm used to doing basic password protection for Apache w/ the following method in Apache config files:
AuthType Basic
AuthName "By Invitation Only"
AuthUserFile /path/to/.htpasswd
Require valid-user
However, I've been asked to put some protection on a subdirectory of a site running ColdFusion on top of IIS6, and I'm unfamiliar with how to do this. How is this done? What should I look out for? I just need to password protect an administrative subdirectory, so I don't need a full user login system - just something that limits who can access the section of the site. A: You can go into IIS 6 and the properties for your website's folder you want to protect. Click the Directory Security tab and uncheck Allow Anonymous. Then you need to choose an authentication type. If it's over SSL you can use Basic, otherwise use another type. But since you mention Basic, this may suffice regardless. Keep in mind Basic auth will send the password in plain text. This will ask for a Windows login in order to access that folder. You can create a limited Windows account for this purpose, regardless of whether it's on the domain or local. One more thing: whether you use an existing account or create a new account, make sure the account has at least read permission to the folder and its subfolders. A: I believe you're looking for IISPassword from Parker Software.
{ "language": "en", "url": "https://stackoverflow.com/questions/150923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way for a .NET windows forms application to update itself? I use a home-grown system where the application updates itself from a web service. However, I seem to remember something in the original .NET sales pitch about auto-updating of components being a built-in feature of .NET. What are the best practices for having an application update itself and/or the assemblies it uses? A: You may want to take a look at the Click-Once technology. Some great examples in these references. http://www.codeproject.com/KB/install/QuickClickOnceArticle.aspx http://msdn.microsoft.com/en-us/magazine/cc163973.aspx A: Will ClickOnce do everything you want?
{ "language": "en", "url": "https://stackoverflow.com/questions/150935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Computer Science textbook way to do text/xml/whatever parsing It's been rattling in my brain for a while. I've done some investigation into compilers/Flex/Bison and such, but I never found a good reference that talked in detail about the "parsing stack", or how to go about implementing one. Does anyone know of good references where I could catch up? Edit: I do appreciate all the compiler references, and I'm going to get some of the books listed, but my main focus was on the parsing itself and not what you do with it after. A: This is in response to Dima's answer that you accepted as the correct answer. Although it is not a bad answer to state that parsing is related to automata theory, I feel that there is some misunderstanding here. * *Firstly, finite state automata are only able to recognize regular languages (e.g. regular expressions). In order to recognize context-free languages you need pushdown automata, which are more powerful. See http://en.wikipedia.org/wiki/Automata_theory#Classes_of_automata for more automata and their relation to different classes of languages. *Secondly, parsing is different from recognizing. Recognizing a string only tells you whether that string is in the language generated by your grammar. The purpose of a parser is to produce a concrete syntax tree, which is both harder and generally more useful. There's a wide variety of parsing methods out there, so it's hard to give you one specific reference that will tell you what you need to know... In general, you should understand the difference between top-down parsing and bottom-up parsing.
But here's an overview of a few common techniques employed by parser generators in case you're interested: * *The Wikipedia articles on LR parsing, LL parsing, SLR parsing, LALR parsing and GLR parsing *ANTLR's LL(*) parsing *Monadic parsing in Haskell (for building parsers in functional programming languages) *And the more exotic Parsing Expression Grammars EDIT: I'm sorry for bumping this question again; I just happened across two excellent posts describing the relationship between regular languages and finite automata, and between context-free languages and pushdown automata. Might be interesting for people who find this question. A: The Dragon book! I used it quite recently to write a compiler (in PHP!) for a processing language for template files written in RTF... A: A parser is basically a finite state machine, aka a finite automaton. You should find a book on the theory of computation, which discusses finite automata, and things like regular languages, context-free languages, etc. A: Try Amazon; Compiler Construction is just one good example. A: Check out "Brinch Hansen on Pascal Compilers". It was written in 1985, but I used it last year for a course on compilers (taught by Per Brinch Hansen, of course) and found it very concise and helpful for compiler design.
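Since the question is specifically about the parsing stack: the stack is what upgrades a finite automaton to a pushdown automaton, and you can see it directly in the classic balanced-brackets recognizer. A minimal sketch (illustrative Python, not taken from any of the cited references):

```python
def balanced(s: str) -> bool:
    """Recognize the context-free language of balanced brackets using an
    explicit stack -- the extra memory a plain finite automaton lacks."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)          # push an open bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # close with no matching open
    return not stack                  # leftovers mean unclosed opens

print(balanced("([]{})"))  # → True
print(balanced("([)]"))    # → False
```

An LR parser generalizes this same idea: its stack holds grammar states instead of raw brackets, a "shift" pushes, and a "reduce" pops a rule's right-hand side and pushes its left-hand side.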
{ "language": "en", "url": "https://stackoverflow.com/questions/150937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you specify the port number in an OLE DB connection string to SQL Server 2005? Just found this out, so I am answering my own question :) Use a comma where you would normally use a colon. This can be a problem for named instances, as you seem to need to specify the port even if it is the default port 1433. Example: Provider=SQLOLEDB;Data Source=192.168.200.123,1433; Initial Catalog=Northwind; User Id=WebUser; Password=windy A: I always check out http://www.connectionstrings.com/. It is a brilliant resource for connection strings. A: Good call BlackWasp, actually that is where I found the answer! (But it was somewhat buried, so I wrote this one which is hopefully clearer)
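Since the host,port convention is easy to get backwards, here is a small sketch of pulling the pieces back out of such a string (illustrative Python; the parsing is naive and only meant to show the comma split, not a general connection-string parser):

```python
def split_data_source(conn: str):
    """Extract host and port from an OLE DB connection string whose
    Data Source uses the 'host,port' form (comma, not colon)."""
    parts = dict(p.split('=', 1) for p in conn.split(';') if '=' in p)
    host, _, port = parts['Data Source'].partition(',')
    return host.strip(), port.strip() or None

conn = "Provider=SQLOLEDB;Data Source=192.168.200.123,1433;Initial Catalog=Northwind"
print(split_data_source(conn))  # → ('192.168.200.123', '1433')
```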
{ "language": "en", "url": "https://stackoverflow.com/questions/150941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Error "set_paths: undefined method uid for nil:NilClass (NoMethodError)" while installing RubyGems on Vista I get the following error when attempting to install RubyGems. I've tried Googling but have had no luck there. Has anybody encountered and resolved this issue before? C:\rubygems-1.3.0> ruby setup.rb . . install -c -m 0644 rubygems/validator.rb C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/validator.rb install -c -m 0644 rubygems/version.rb C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/version.rb install -c -m 0644 rubygems/version_option.rb C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/version_option.rb install -c -m 0644 rubygems.rb C:/Ruby/lib/ruby/site_ruby/1.8/rubygems.rb install -c -m 0644 ubygems.rb C:/Ruby/lib/ruby/site_ruby/1.8/ubygems.rb cp gem C:/Users/brian/AppData/Local/Temp/gem install -c -m 0755 C:/Users/brian/AppData/Local/Temp/gem C:/Ruby/bin/gem rm C:/Users/brian/AppData/Local/Temp/gem install -c -m 0755 C:/Users/brian/AppData/Local/Temp/gem.bat C:/Ruby/bin/gem.bat rm C:/Users/brian/AppData/Local/Temp/gem.bat Removing old RubyGems RDoc and ri Installing rubygems-1.3.0 ri into C:/Ruby/lib/ruby/gems/1.8/doc/rubygems-1.3.0/ri ./lib/rubygems.rb:713:in `set_paths': undefined method `uid' for nil:NilClass (NoMethodError) from ./lib/rubygems.rb:711:in `each' from ./lib/rubygems.rb:711:in `set_paths' from ./lib/rubygems.rb:518:in `path' from ./lib/rubygems/source_index.rb:66:in `installed_spec_directories' from ./lib/rubygems/source_index.rb:56:in `from_installed_gems' from ./lib/rubygems.rb:726:in `source_index' from ./lib/rubygems.rb:138:in `activate' from ./lib/rubygems.rb:49:in `gem' from setup.rb:279:in `run_rdoc' from setup.rb:296 C:\rubygems-1.3.0> I have Ruby 1.8.6 installed on my laptop running Windows Vista. A: I assume you're not trying to install under cygwin; that install is meant for unix-like operating systems. Edit: (Actually, from the log above it looks like there is some Windows-specific stuff being run... 
perhaps you're running into a UAC protection issue?) If you just use the Windows Ruby one-click installer, it includes RubyGems. If you're not getting the RubyGems functionality, you may need to require "rubygems" in your script, or add -rubygems to your RUBYOPT environment variable. (You can also require rubygems from the command line with ruby -rubygems myscript.rb.) Are you trying to install it separately for some other reason? A: I have rubygems 1.2.0 installed on Vista and it works fine. I have not tested rubygems 1.3.0 yet. A: I found the same error with rubygems 1.3 on Vista. I downgraded to 1.2 and it seems to have fixed it. A: I can confirm also, rubygems 1.3.0 on Windows for some strange reason doesn't work at all. Downgrade by re-installing 1.2.0 on top of 1.3.0.
{ "language": "en", "url": "https://stackoverflow.com/questions/150953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Perform regex (replace) in an SQL query What is the best way to replace all '&lt' with &lt; in a given database column? Basically perform s/&lt[^;]/&lt;/gi Notes: * *must work in MS SQL Server 2000 *Must be repeatable (and not end up with &lt;;;;;;;;;;) A: How about:
UPDATE tableName
SET columnName = REPLACE(columnName, '&lt', '&lt;')
WHERE columnName LIKE '%lt%'
  AND columnName NOT LIKE '%lt;%'
Edit: I just realized this will ignore columns with partially correct &lt; strings. In that case you can ignore the second part of the where clause and call this afterward:
UPDATE tableName
SET columnName = REPLACE(columnName, '&lt;;', '&lt;')
A: Some hacking required, but we can do this with LIKE, PATINDEX, LEFT and RIGHT, and good old string concatenation.
create table test (
    id int identity(1, 1) not null,
    val varchar(25) not null
)
insert into test values ('&lt; <- ok, &lt <- nok')

while 1 = 1
begin
    update test
    set val = left(val, patindex('%&lt[^;]%', val) - 1) + '&lt;'
            + right(val, len(val) - patindex('%&lt[^;]%', val) - 2)
    from test
    where val like '%&lt[^;]%'

    IF @@ROWCOUNT = 0 BREAK
end

select * from test
Better still, this is SQL Server version agnostic and should work just fine. A: I think this can be done much cleaner if you use different STUFF :)
create table test (
    id int identity(1, 1) not null,
    val varchar(25) not null
)
insert into test values ('&lt; <- ok, &lt <- nok')

WHILE 1 = 1
BEGIN
    UPDATE test
    SET val = STUFF(val, PATINDEX('%&lt[^;]%', val) + 3, 0, ';')
    FROM test
    WHERE val LIKE '%&lt[^;]%'

    IF @@ROWCOUNT = 0 BREAK
END

select * from test
A: If MSSQL's regex flavor supports negative lookahead, that would be The Right Way to approach this.
s/&lt(?!;)/&lt;/gi will catch all instances of &lt which are not followed by a ; (even if they're followed by nothing, which [^;] would miss) and does not capture the following non-; character as part of the match, eliminating the issue mentioned in the comments on the original question of that character being lost in the replacement. Unfortunately, I don't use MSSQL, so I have no idea whether it supports negative lookahead or not... A: Very specific to this pattern, but I have done similar to this in the past: REPLACE(REPLACE(columName, '&lt;', '&lt'), '&lt', '&lt;') broader example (encode characters which may be inappropriate in a TITLE attribute) REPLACE(REPLACE(REPLACE(REPLACE( REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE( columName -- Remove existing encoding: , '&amp;', '&') , '&#34;', '"') , '&#39;', '''') -- Reinstate/Encode: , '&', '&amp;') -- Encode: , '"', '&#34;') , '''', '&#39;') , ' ', '%20') , '<', '%3C') , '>', '%3E') , '/', '%2F') , '\', '%5C')
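The repeatability requirement is exactly what the negative lookahead buys: it never matches an already-fixed &lt;, so running the substitution again is a no-op. A quick demonstration (illustrative Python, since T-SQL on SQL Server 2000 has no built-in regex support):

```python
import re

def fix_lt(s: str) -> str:
    """Replace '&lt' only when it is not already followed by ';'.
    Unlike [^;], the lookahead also catches '&lt' at end of string
    and consumes no extra character."""
    return re.sub(r'&lt(?!;)', '&lt;', s)

once = fix_lt("a &lt b &lt; c &lt")
print(once)                   # → a &lt; b &lt; c &lt;
print(fix_lt(once) == once)   # → True (idempotent)
```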
{ "language": "en", "url": "https://stackoverflow.com/questions/150977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: What's the best way for a .NET winforms application to update itself without using ClickOnce? For technical reasons, I can't use ClickOnce to auto-update my .NET application and its assemblies. What is the best way to handle auto-updating in .NET? A: I think the Updater Application Block was something of a precursor to ClickOnce. Might be worth investigating. Looking at its source code might be enough to spark some ideas. A: About 3-4 years ago I published an example that sits outside the app, if an update is detected, the app calls the updater ans shuts down, then the updates are done, and the app restarts. I published the example on the old GotDotNet site...I'll have to try and find it. It worked perfect and took about 1-2 hours to write. A: Indigo Rose has a product called TrueUpdate that also does this for you. I have used them in the past from both managed and unmanaged apps. It is basically a file you put on your server (http, ftp, whatever you like). Then you call a client side EXE to check for updates. The updates file is pulled and has logic to detect what version is on the client (your choice, DLL detection, registry key reads, etc). Then it will find the appropriate updater for it and download the file for execution. It works well through proxies as well. The only thing they don't do is actually build the patches for you. You have to do that manually, or with another product they have. It is a commcerial solution and works quite well though if you need it. A: As a starting point for rolling your own, it's probably worth looking at Alex Feinman's article on MSDN entitled "Creating Self-Updating Applications with the .NET Compact Framework". A: We have a product that's commercial/open source: wyBuild & wyUpdate. It has patching ability and is dead simple to use. Edit: I'm getting voted down into the negative numbers, but my post wasn't just blatant selling. Our updater, wyUpdate, is open source, written in C# and is licensed under the BSD license. 
I thought it might help anyone trying to build an updater from scratch using the .NET framework. But, vote me down if you must. A: Write your own. I have heard that they are somewhat difficult to write the first time, but after that it gets simple. Since I haven't written one yet (although it's on my list), I can give you some of the things that I have thought of. Maintain accurate DLL versions, as this is important for self-updating. And make sure that the updater can update itself. A: In your Program.cs file do something like this: static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Update(); Application.Run(new Form1()); } private static void Update() { // Point these at your own folder layout string mainFolder = Application.StartupPath; string updateFolder = System.IO.Path.Combine(mainFolder, "update"); string backupFolder = System.IO.Path.Combine(mainFolder, "backup"); System.IO.Directory.CreateDirectory(backupFolder); foreach (string file in System.IO.Directory.GetFiles(updateFolder)) { string newFile = file.Replace( updateFolder, mainFolder); if (System.IO.File.Exists(newFile)) { // File.Replace expects a backup *file* path, not a folder System.IO.File.Replace(file, newFile, System.IO.Path.Combine(backupFolder, System.IO.Path.GetFileName(newFile))); } else { System.IO.File.Move(file, newFile); } } } Additionally, it can be made recursive to pick up directory structure if necessary. This will allow you to update any .dll in your project; everything, in fact, outside of the main .exe. Then somewhere else within your application you can deal with getting the files from your server (or wherever) that need to be updated, put them in the updateFolder and restart the application. A: On a project a long time ago, using .NET Compact Framework 1.0, I wrote an auto-updating application. We used SqlCE's CAB deployment feature to get the files onto the device (you would use Sync Framework now), and we had a separate exe that did the unpacking of the CAB, and updating the files. An update would go like this: the user would be prompted to update, click a button and drop out of the UI application. The updater exe would take over, get the cab file from the server, back up the current DLLs and unpack the cab file with wceload. 
The UI would then be restarted, and if it failed, the update would be rolled back. This is still an interesting scenario on compact devices, but there are better tools now than just SqlCE. I would certainly look at the Updater Application Block and the Sync Framework to implement this if ClickOnce is not an option. But I'm guessing you'll still need a separate executable, because the DLLs you want to overwrite are probably file-locked while in use by an exe, as one of the previous answers already said. A: I wrote my own autoupdater. It shares a config file with the application, containing URLs to download the latest versions from and to check whether an update is needed. This way you run the updater, which either updates the app or not, then runs the application, which as part of normal operation checks for an updated updater and patches that.
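The "separate updater exe" flow that several answers describe can be sketched as follows. This is a minimal illustration, not any of the products mentioned above; the install/update/backup folder layout is an assumption, and the caller is expected to relaunch the main app afterwards (e.g. with System.Diagnostics.Process.Start).

```csharp
using System;
using System.IO;

// Minimal sketch of an external updater: copy every file from the update
// folder over the install folder, keeping a backup of anything overwritten
// so the update can be rolled back. Folder names are illustrative.
static class UpdaterSketch
{
    public static void ApplyUpdate(string installDir, string updateDir, string backupDir)
    {
        Directory.CreateDirectory(backupDir);
        foreach (string source in Directory.GetFiles(updateDir))
        {
            string name = Path.GetFileName(source);
            string target = Path.Combine(installDir, name);
            if (File.Exists(target))
                File.Copy(target, Path.Combine(backupDir, name), true); // keep a rollback copy
            File.Copy(source, target, true);
        }
    }
}
```

Because the updater is a separate process, it is free to overwrite the main exe and DLLs that would be file-locked if the application tried to replace them itself.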
{ "language": "en", "url": "https://stackoverflow.com/questions/150994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: In my ActionScript3 class, can I have a property with a getter and setter? In my ActionScript3 class, can I have a property with a getter and setter? A: Yes you can create getter and setter functions inside an AS3 class. Example: private var _foo:String = ""; public function get foo():String { return _foo; } public function set foo(value:String):void { _foo = value; } more information about getters and setters can be found here A: Ok, well you can just use the basic getter/setter syntax for any property of your AS3 class. For example package { public class PropEG { private var _prop:String; public function get prop():String { return _prop; } public function set prop(value:String):void { _prop = value; } } } A: A getter is a function with a return value depending on what we return. A setter always has one parameter, since we give a variable a new value through the parameter. We first create an instance of the class containing the getter and setter; in our case it is "a". If we want to change the variable, we call the setter using dot syntax and fill its parameter with the = operator. To retrieve the value of a variable we use the getter in a similar way, as shown in the example (a.myVar). Unlike a regular function call, we omit the parentheses. Do not forget to add the return type, otherwise there will be an error. 
package { import flash.display.Sprite; import flash.text.TextField; public class App extends Sprite { private var tsecField:TextField; private var tField:TextField; public function App() { myTest(); } private function myTest():void { var a:Testvar = new Testvar(); tField = new TextField(); tField.autoSize = "left"; tField.background = true; tField.border = true; a.mynewVar = "This is the new var."; tField.text = "Test is: "+a.myVar; addChild(tField); } } } import flash.display.Sprite; import flash.text.TextField; class Testvar extends Sprite { public var test:String; public function Testvar() { } public function set mynewVar(newTest:String):void { test = newTest; } public function get myVar():String { return test; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/150998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Finalizers and Dispose I've got a class named BackgroundWorker that has a thread constantly running. To turn this thread off, an instance variable named stop needs to be set to true. To make sure the thread is freed when the class is done being used, I've added IDisposable and a finalizer that invokes Dispose(). Assuming that stop = true does indeed cause this thread to exit, is this snippet correct? It's fine to invoke Dispose from a finalizer, right? Finalizers should always call Dispose if the object implements IDisposable, right? /// <summary> /// Force the background thread to exit. /// </summary> public void Dispose() { lock (this.locker) { this.stop = true; } } ~BackgroundWorker() { this.Dispose(); } A: Out of interest, any reason this couldn't use the regular BackgroundWorker, which has full support for cancellation? Re the lock - a volatile bool field might be less troublesome. However, in this case your finalizer isn't doing anything interesting, especially given the "if(disposing)" - i.e. it only runs the interesting code during Dispose(). Personally I'd be tempted to stick with just IDisposable, and not provide a finalizer: you should be cleaning it up with Dispose(). A: Your code is fine, although locking in a finalizer is somewhat "scary" and I would avoid it - if you get a deadlock... I am not 100% certain what would happen but it would not be good. However, if you are safe this should not be a problem. Mostly. The internals of garbage collection are painful and I hope you never have to see them ;) As Marc Gravell points out, a volatile bool would allow you to get rid of the lock, which would mitigate this issue. Implement this change if you can. nedruod's code puts the assignment inside the if (disposing) check, which is completely wrong - the thread is an unmanaged resource and must be stopped even if not explicitly disposing. Your code is fine, I am just pointing out that you should not take the advice given in that code snippet. 
Yes, you almost always should call Dispose() from the finalizer if implementing the IDisposable pattern. The full IDisposable pattern is a bit bigger than what you have but you do not always need it - it merely provides two extra possibilities: * *detecting whether Dispose() was called or the finalizer is executing (you are not allowed to touch any managed resources in the finalizer, outside of the object being finalized); *enabling subclasses to override the Dispose() method. A: First off, a severe warning. Don't use a finalizer like you are. You are setting yourself up for some very bad effects if you take locks within a finalizer. Short story is don't do it. Now to the original question. public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } /// <summary> /// Force the background thread to exit. /// </summary> protected virtual void Dispose(bool disposing) { if (disposing) { lock (this.locker) { this.stop = true; } } } ~BackgroundWorker() { Dispose(false); } The only reason to have a finalizer at all is to allow sub-classes to extend and release unmanaged resources. If you don't have subclasses then seal your class and drop the finalizer completely. A: Is the "stop" instance variable a property? If not, there's no particular point in setting it during the finalizer - nothing is referencing the object anymore, so nothing can query the member. If you're actually releasing a resource, then having Dispose() and the finalizer perform the same work (first testing whether the work still needs to be done) is a good pattern. A: You need the full disposable pattern but the stop has to be something the thread can access. If it is a member variable of the class being disposed, that's no good because it can't reference a disposed class. Consider having an event that the thread owns and signaling that on dispose instead. 
A: The object that implements the finalizer needs a reference to a flag--stored in another object--which the thread will be able to see; the thread must not have any strong reference, direct or indirect, to the object that implements the finalizer. The finalizer should set the flag using something like a CompareExchange, and the thread should use a similar means to test it. Note that if the finalizer of one object accesses another object, the other object may have been finalized but it will still exist. It's fine for a finalizer to reference other objects if it does so in a way that won't be bothered by their finalization. If all you're doing is setting a flag, you're fine.
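Several answers above recommend a volatile bool over the lock, and dropping the finalizer entirely in favor of Dispose(). A minimal sketch of that variant (the class and member names here are illustrative, not the asker's actual code):

```csharp
using System;
using System.Threading;

// Sketch of the volatile-flag variant suggested above: Dispose() flips the
// flag and joins the worker thread; no lock and no finalizer are needed,
// because the thread is always stopped via an explicit Dispose() call.
sealed class PollingWorker : IDisposable
{
    private volatile bool stop;      // writes become visible to the worker thread without a lock
    private readonly Thread thread;

    public PollingWorker(Action workItem)
    {
        thread = new Thread(() =>
        {
            while (!stop)
                workItem();          // run until Dispose() signals the flag
        });
        thread.IsBackground = true;  // don't keep the process alive if Dispose is never called
        thread.Start();
    }

    public void Dispose()
    {
        stop = true;                 // signal the loop to exit...
        thread.Join();               // ...and wait for it to finish
    }
}
```

Sealing the class and omitting the finalizer follows the advice above: a finalizer only earns its keep when subclasses may hold unmanaged resources.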
{ "language": "en", "url": "https://stackoverflow.com/questions/151000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I create an Excel (.XLS and .XLSX) file in C# without installing Microsoft Office? How can I create an Excel spreadsheet with C# without requiring Excel to be installed on the machine that's running the code? A: The commercial solution, SpreadsheetGear for .NET will do it. You can see live ASP.NET (C# and VB) samples here and download an evaluation version here. Disclaimer: I own SpreadsheetGear LLC A: The Java open source solution is Apache POI. Maybe there is a way to set up interop here, but I don't know enough about Java to answer that. When I explored this problem I ended up using the Interop assemblies. A: An extremely lightweight option may be to use HTML tables. Just create head, body, and table tags in a file, and save it as a file with an .xls extension. There are Microsoft-specific attributes that you can use to style the output, including formulas. I realize that you may not be coding this in a web application, but here is an example of the composition of an Excel file via an HTML table. This technique could be used if you were coding a console app, desktop app, or service. A: Have you ever tried SYLK? We used to generate Excel sheets in classic ASP as SYLK, and right now we're searching for an Excel generator too. The advantage of SYLK is that you can format the cells. A: A few options I have used: If XLSX is a must: ExcelPackage is a good start but died off when the developer quit working on it. ExML picked up from there and added a few features. ExML isn't a bad option, I'm still using it in a couple of production websites. For all of my new projects, though, I'm using NPOI, the .NET port of Apache POI. NPOI 2.0 (Alpha) also supports XLSX. A: If you are happy with the xlsx format, try my library, EPPlus. It started with the source from ExcelPackage, but has since become a total rewrite. It supports ranges, cell styling, charts, shapes, pictures, named ranges, AutoFilter, and a lot of other stuff. 
You have two options: * *EPPlus 4, licensed under LGPL (original branch, developed until 2020) *EPPlus 5, licensed under Polyform Noncommercial 1.0.0 (since 2020). From the EPPlus 5 readme.md: With the new license EPPlus is still free to use in some cases, but will require a commercial license to be used in a commercial business. EPPlus website: https://www.epplussoftware.com/ A: If you're creating Excel 2007/2010 files give this open source project a try: https://github.com/closedxml/closedxml It provides an object oriented way to manipulate the files (similar to VBA) without dealing with the hassles of XML Documents. It can be used by any .NET language like C# and Visual Basic (VB). ClosedXML allows you to create Excel 2007/2010 files without the Excel application. The typical example is creating Excel reports on a web server: var workbook = new XLWorkbook(); var worksheet = workbook.Worksheets.Add("Sample Sheet"); worksheet.Cell("A1").Value = "Hello World!"; workbook.SaveAs("HelloWorld.xlsx"); A: You can just write it out to XML using the Excel XML format and name it with .XLS extension and it will open with excel. You can control all the formatting (bold, widths, etc) in your XML file heading. There is an example XML from Wikipedia. A: You actually might want to check out the interop classes available in C# (e.g. Microsoft.Office.Interop.Excel. You say no OLE (which this isn't), but the interop classes are very easy to use. Check out the C# Documentation here (Interop for Excel starts on page 1072 of the C# PDF). You might be impressed if you haven't tried them. Please be warned of Microsoft's stance on this: Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment. A: You can use ExcelXmlWriter. 
It works fine. A: I also vote for GemBox.Spreadsheet. Very fast and easy to use, with tons of examples on their site. Took my reporting tasks to a whole new level of execution speed. A: One really easy option which is often overlooked is to create a .rdlc report using Microsoft Reporting and export it to Excel format. You can design it in Visual Studio and generate the file using: localReport.Render("EXCELOPENXML", null, ((name, ext, encoding, mimeType, willSeek) => stream = new FileStream(name, FileMode.CreateNew)), out warnings); You can also export it to .doc or .pdf, using "WORDOPENXML" and "PDF" respectively, and it's supported on many different platforms such as ASP.NET and SSRS. It's much easier to make changes in a visual designer where you can see the results, and trust me, once you start grouping data, formatting group headers, adding new sections, you don't want to mess with dozens of XML nodes. A: You can try my SwiftExcel library. This library writes directly to the file, so it is very efficient. For example you can write 100k rows in a few seconds with almost no memory overhead. Here is a simple example of usage: using (var ew = new ExcelWriter("C:\\temp\\test.xlsx")) { for (var row = 1; row <= 10; row++) { for (var col = 1; col <= 5; col++) { ew.Write($"row:{row}-col:{col}", col, row); } } } A: Here's a completely free C# library, which lets you export from a DataSet, DataTable or List<> into a genuine Excel 2007 .xlsx file, using the OpenXML libraries: http://mikesknowledgebase.com/pages/CSharp/ExportToExcel.htm Full source code is provided - free of charge - along with instructions, and a demo application. After adding this class to your application, you can export your DataSet to Excel in just one line of code: CreateExcelFile.CreateExcelDocument(myDataSet, "C:\\Sample.xlsx"); It doesn't get much simpler than that... And it doesn't even require Excel to be present on your server. 
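The "HTML table saved as .xls" trick mentioned in an earlier answer really is just string output, and can be sketched in a few lines. This is an illustration only: the helper name and markup are made up, and Excel will show a file-format warning when opening such a file, since it is HTML rather than a real .xls workbook.

```csharp
using System.Text;

// Sketch of the lightweight "HTML table with an .xls extension" approach:
// build an HTML table as a string, then write it to a file ending in .xls.
// Excel opens it (after a format warning) and renders the table as a sheet.
static class HtmlExcelSketch
{
    public static string BuildTable(string[] headers, string[][] rows)
    {
        var sb = new StringBuilder("<html><body><table border=\"1\">");
        sb.Append("<tr>");
        foreach (var h in headers)
            sb.Append("<th>").Append(h).Append("</th>"); // header row
        sb.Append("</tr>");
        foreach (var row in rows)
        {
            sb.Append("<tr>");
            foreach (var cell in row)
                sb.Append("<td>").Append(cell).Append("</td>"); // data cells
            sb.Append("</tr>");
        }
        return sb.Append("</table></body></html>").ToString();
    }
}
```

Usage would be something like File.WriteAllText("report.xls", HtmlExcelSketch.BuildTable(headers, rows)); for real .xlsx output, the OpenXML-based libraries discussed in this thread are the better fit.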
A: http://www.codeproject.com/KB/cs/Excel_and_C_.aspx <= why not just use the built-in power of Windows? Just install Office on the server; any application that you install can be automated. It's so much easier to just use the native methods. If it's installed you can use it - this is one of the most awesome and underused features in Windows. It was dubbed COM back in the good old days, and it saves you tons of time and pain. Or, even easier, just use the reference library MS supplies - http://csharp.net-informations.com/excel/csharp-create-excel.htm A: How to create an Excel (.xlsx) file using C# on OneDrive without installing Microsoft Office The Microsoft Graph API provides File and Excel APIs for creating and modifying Excel files stored in OneDrive for both enterprise and consumer accounts. The Microsoft.Graph NuGet package provides many interfaces for working with the File and Excel APIs. var excelWorkbook = new DriveItem { Name = "myExcelFile.xlsx", File = new Microsoft.Graph.File() }; // Create an empty file in the user's OneDrive. var excelWorkbookDriveItem = await graphClient.Me.Drive.Root.Children.Request().AddAsync(excelWorkbook); // Add the contents of a template Excel file. DriveItem excelDriveItem; using (Stream ms = ResourceHelper.GetResourceAsStream(ResourceHelper.ExcelTestResource)) { //Upload content to the file. ExcelTestResource is an empty template Excel file. //https://graph.microsoft.io/en-us/docs/api-reference/v1.0/api/item_uploadcontent excelDriveItem = await graphClient.Me.Drive.Items[excelWorkbookDriveItem.Id].Content.Request().PutAsync<DriveItem>(ms); } At this point, you now have an Excel file created in the user (enterprise or consumer) or group's OneDrive. You can now use the Excel APIs to make changes to the Excel file without using Excel and without needing to understand the Excel XML format. A: I reworked the code; now you can create an .xls file, which you can later convert to the Excel 2003 Open XML format. 
private static void exportToExcel(DataSet source, string fileName) { // Documentacion en: // https://en.wikipedia.org/wiki/Microsoft_Office_XML_formats // https://answers.microsoft.com/en-us/msoffice/forum/all/how-to-save-office-ms-xml-as-xlsx-file/4a77dae5-6855-457d-8359-e7b537beb1db // https://riptutorial.com/es/openxml const string startExcelXML = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n"+ "<?mso-application progid=\"Excel.Sheet\"?>\r\n" + "<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\"\r\n" + "xmlns:o=\"urn:schemas-microsoft-com:office:office\"\r\n " + "xmlns:x=\"urn:schemas-microsoft-com:office:excel\"\r\n " + "xmlns:ss=\"urn:schemas-microsoft-com:office:spreadsheet\"\r\n " + "xmlns:html=\"http://www.w3.org/TR/REC-html40\">\r\n " + "xmlns:html=\"https://www.w3.org/TR/html401/\">\r\n " + "<DocumentProperties xmlns=\"urn:schemas-microsoft-com:office:office\">\r\n " + " <Version>16.00</Version>\r\n " + "</DocumentProperties>\r\n " + " <OfficeDocumentSettings xmlns=\"urn:schemas-microsoft-com:office:office\">\r\n " + " <AllowPNG/>\r\n " + " </OfficeDocumentSettings>\r\n " + " <ExcelWorkbook xmlns=\"urn:schemas-microsoft-com:office:excel\">\r\n " + " <WindowHeight>9750</WindowHeight>\r\n " + " <WindowWidth>24000</WindowWidth>\r\n " + " <WindowTopX>0</WindowTopX>\r\n " + " <WindowTopY>0</WindowTopY>\r\n " + " <RefModeR1C1/>\r\n " + " <ProtectStructure>False</ProtectStructure>\r\n " + " <ProtectWindows>False</ProtectWindows>\r\n " + " </ExcelWorkbook>\r\n " + "<Styles>\r\n " + "<Style ss:ID=\"Default\" ss:Name=\"Normal\">\r\n " + "<Alignment ss:Vertical=\"Bottom\"/>\r\n <Borders/>" + "\r\n <Font/>\r\n <Interior/>\r\n <NumberFormat/>" + "\r\n <Protection/>\r\n </Style>\r\n " + "<Style ss:ID=\"BoldColumn\">\r\n <Font " + "x:Family=\"Swiss\" ss:Bold=\"1\"/>\r\n </Style>\r\n " + "<Style ss:ID=\"StringLiteral\">\r\n <NumberFormat" + " ss:Format=\"@\"/>\r\n </Style>\r\n <Style " + "ss:ID=\"Decimal\">\r\n <NumberFormat " + 
"ss:Format=\"0.0000\"/>\r\n </Style>\r\n " + "<Style ss:ID=\"Integer\">\r\n <NumberFormat/>" + "ss:Format=\"0\"/>\r\n </Style>\r\n <Style " + "ss:ID=\"DateLiteral\">\r\n <NumberFormat " + "ss:Format=\"dd/mm/yyyy;@\"/>\r\n </Style>\r\n " + "</Styles>\r\n "; System.IO.StreamWriter excelDoc = null; excelDoc = new System.IO.StreamWriter(fileName,false); int sheetCount = 1; excelDoc.Write(startExcelXML); foreach (DataTable table in source.Tables) { int rowCount = 0; excelDoc.Write("<Worksheet ss:Name=\"" + table.TableName + "\">"); excelDoc.Write("<Table>"); excelDoc.Write("<Row>"); for (int x = 0; x < table.Columns.Count; x++) { excelDoc.Write("<Cell ss:StyleID=\"BoldColumn\"><Data ss:Type=\"String\">"); excelDoc.Write(table.Columns[x].ColumnName); excelDoc.Write("</Data></Cell>"); } excelDoc.Write("</Row>"); foreach (DataRow x in table.Rows) { rowCount++; //if the number of rows is > 64000 create a new page to continue output if (rowCount == 1048576) { rowCount = 0; sheetCount++; excelDoc.Write("</Table>"); excelDoc.Write(" </Worksheet>"); excelDoc.Write("<Worksheet ss:Name=\"" + table.TableName + "\">"); excelDoc.Write("<Table>"); } excelDoc.Write("<Row>"); //ID=" + rowCount + " for (int y = 0; y < table.Columns.Count; y++) { System.Type rowType; rowType = x[y].GetType(); switch (rowType.ToString()) { case "System.String": string XMLstring = x[y].ToString(); XMLstring = XMLstring.Trim(); XMLstring = XMLstring.Replace("&", "&"); XMLstring = XMLstring.Replace(">", ">"); XMLstring = XMLstring.Replace("<", "<"); excelDoc.Write("<Cell ss:StyleID=\"StringLiteral\">" + "<Data ss:Type=\"String\">"); excelDoc.Write(XMLstring); excelDoc.Write("</Data></Cell>"); break; case "System.DateTime": //Excel has a specific Date Format of YYYY-MM-DD followed by //the letter 'T' then hh:mm:sss.lll Example 2005-01-31T24:01:21.000 //The Following Code puts the date stored in XMLDate //to the format above DateTime XMLDate = (DateTime)x[y]; string XMLDatetoString = ""; //Excel Converted Date 
XMLDatetoString = XMLDate.Year.ToString() + "-" + (XMLDate.Month < 10 ? "0" + XMLDate.Month.ToString() : XMLDate.Month.ToString()) + "-" + (XMLDate.Day < 10 ? "0" + XMLDate.Day.ToString() : XMLDate.Day.ToString()) + "T" + (XMLDate.Hour < 10 ? "0" + XMLDate.Hour.ToString() : XMLDate.Hour.ToString()) + ":" + (XMLDate.Minute < 10 ? "0" + XMLDate.Minute.ToString() : XMLDate.Minute.ToString()) + ":" + (XMLDate.Second < 10 ? "0" + XMLDate.Second.ToString() : XMLDate.Second.ToString()) + ".000"; excelDoc.Write("<Cell ss:StyleID=\"DateLiteral\">" + "<Data ss:Type=\"DateTime\">"); excelDoc.Write(XMLDatetoString); excelDoc.Write("</Data></Cell>"); break; case "System.Boolean": excelDoc.Write("<Cell ss:StyleID=\"StringLiteral\">" + "<Data ss:Type=\"String\">"); excelDoc.Write(x[y].ToString()); excelDoc.Write("</Data></Cell>"); break; case "System.Int16": case "System.Int32": case "System.Int64": case "System.Byte": excelDoc.Write("<Cell ss:StyleID=\"Integer\">" + "<Data ss:Type=\"Number\">"); excelDoc.Write(x[y].ToString()); excelDoc.Write("</Data></Cell>"); break; case "System.Decimal": case "System.Double": excelDoc.Write("<Cell ss:StyleID=\"Decimal\">" + "<Data ss:Type=\"Number\">"); excelDoc.Write(x[y].ToString()); excelDoc.Write("</Data></Cell>"); break; case "System.DBNull": excelDoc.Write("<Cell ss:StyleID=\"StringLiteral\">" + "<Data ss:Type=\"String\">"); excelDoc.Write(""); excelDoc.Write("</Data></Cell>"); break; default: throw (new Exception(rowType.ToString() + " not handled.")); } } excelDoc.Write("</Row>"); } excelDoc.Write("</Table>"); excelDoc.Write("</Worksheet>"); sheetCount++; } const string endExcelOptions1 = "\r\n<WorksheetOptions xmlns=\"urn:schemas-microsoft-com:office:excel\">\r\n" + "<Selected/>\r\n" + "<ProtectObjects>False</ProtectObjects>\r\n" + "<ProtectScenarios>False</ProtectScenarios>\r\n" + "</WorksheetOptions>\r\n"; excelDoc.Write(endExcelOptions1); excelDoc.Write("</Workbook>"); excelDoc.Close(); } A: You could consider creating your files 
using the XML Spreadsheet 2003 format. This is a simple XML format using a well-documented schema. A: You may want to take a look at GemBox.Spreadsheet. They have a free version with all features but limited to 150 rows per sheet and 5 sheets per workbook, if that falls within your needs. I haven't had need to use it myself yet, but it does look interesting. A: Syncfusion Essential XlsIO can do this. It has no dependency on Microsoft Office and also has specific support for different platforms. * *ASP.NET *ASP.NET MVC *UWP *Xamarin *WPF and Windows Forms *Windows Service and batch based operations Code sample: //Creates a new instance for ExcelEngine. ExcelEngine excelEngine = new ExcelEngine(); //Loads or opens an existing workbook through Open method of IWorkbooks IWorkbook workbook = excelEngine.Excel.Workbooks.Open(fileName); //To-Do some manipulation //To-Do some manipulation //Set the version of the workbook. workbook.Version = ExcelVersion.Excel2013; //Save the workbook in file system as xlsx format workbook.SaveAs(outputFileName); The whole suite of controls is available for free through the community license program if you qualify (less than 1 million USD in revenue). Note: I work for Syncfusion. A: And what about using Open XML SDK 2.0 for Microsoft Office? A few benefits: * *Doesn't require Office installed *Made by Microsoft = decent MSDN documentation *Just one .Net dll to use in project *SDK comes with many tools like diff, validator, etc Links: * *Github *Main MSDN Landing *"How Do I..." starter page *blogs.MSDN brian_jones announcing SDK *blogs.MSDN brian_jones describing SDK handling large files without crashing (unlike DOM method) A: You can install OpenXml nuget package on Visual Studio. 
Here is a bit of code to export a data table to an excel file: Imports DocumentFormat.OpenXml Imports DocumentFormat.OpenXml.Packaging Imports DocumentFormat.OpenXml.Spreadsheet Public Class ExportExcelClass Public Sub New() End Sub Public Sub ExportDataTable(ByVal table As DataTable, ByVal exportFile As String) ' Create a spreadsheet document by supplying the filepath. ' By default, AutoSave = true, Editable = true, and Type = xlsx. Dim spreadsheetDocument As SpreadsheetDocument = spreadsheetDocument.Create(exportFile, SpreadsheetDocumentType.Workbook) ' Add a WorkbookPart to the document. Dim workbook As WorkbookPart = spreadsheetDocument.AddWorkbookPart workbook.Workbook = New Workbook ' Add a WorksheetPart to the WorkbookPart. Dim Worksheet As WorksheetPart = workbook.AddNewPart(Of WorksheetPart)() Worksheet.Worksheet = New Worksheet(New SheetData()) ' Add Sheets to the Workbook. Dim sheets As Sheets = spreadsheetDocument.WorkbookPart.Workbook.AppendChild(Of Sheets)(New Sheets()) Dim data As SheetData = Worksheet.Worksheet.GetFirstChild(Of SheetData)() Dim Header As Row = New Row() Header.RowIndex = CType(1, UInt32) For Each column As DataColumn In table.Columns Dim headerCell As Cell = createTextCell(table.Columns.IndexOf(column) + 1, 1, column.ColumnName) Header.AppendChild(headerCell) Next data.AppendChild(Header) Dim contentRow As DataRow For i As Integer = 0 To table.Rows.Count - 1 contentRow = table.Rows(i) data.AppendChild(createContentRow(contentRow, i + 2)) Next End Sub Private Function createTextCell(ByVal columnIndex As Integer, ByVal rowIndex As Integer, ByVal cellValue As Object) As Cell Dim cell As Cell = New Cell() cell.DataType = CellValues.InlineString cell.CellReference = getColumnName(columnIndex) + rowIndex.ToString Dim inlineString As InlineString = New InlineString() Dim t As Text = New Text() t.Text = cellValue.ToString() inlineString.AppendChild(t) cell.AppendChild(inlineString) Return cell End Function Private Function 
createContentRow(ByVal dataRow As DataRow, ByVal rowIndex As Integer) As Row Dim row As Row = New Row With { .rowIndex = CType(rowIndex, UInt32) } For i As Integer = 0 To dataRow.Table.Columns.Count - 1 Dim dataCell As Cell = createTextCell(i + 1, rowIndex, dataRow(i)) row.AppendChild(dataCell) Next Return row End Function Private Function getColumnName(ByVal columnIndex As Integer) As String Dim dividend As Integer = columnIndex Dim columnName As String = String.Empty Dim modifier As Integer While dividend > 0 modifier = (dividend - 1) Mod 26 columnName = Convert.ToChar(65 + modifier).ToString() & columnName dividend = CInt(((dividend - modifier) / 26)) End While Return columnName End Function End Class A: Here is the simplest way to create an Excel file. Excel files with extension .xlsx are compressed folders containing .XML files - but complicated. First create the folder structure as follows - public class CreateFileOrFolder { static void Main() { // // https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/file-system/how-to-create-a-file-or-folder // // https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/file-system/how-to-write-to-a-text-file // // .NET Framework 4.7.2 // // Specify a name for your top-level folder. string folderName = @"C:\Users\david\Desktop\Book3"; // To create a string that specifies the path to a subfolder under your // top-level folder, add a name for the subfolder to folderName. 
string pathString = System.IO.Path.Combine(folderName, "_rels"); System.IO.Directory.CreateDirectory(pathString); pathString = System.IO.Path.Combine(folderName, "docProps"); System.IO.Directory.CreateDirectory(pathString); pathString = System.IO.Path.Combine(folderName, "xl"); System.IO.Directory.CreateDirectory(pathString); string subPathString = System.IO.Path.Combine(pathString, "_rels"); System.IO.Directory.CreateDirectory(subPathString); subPathString = System.IO.Path.Combine(pathString, "theme"); System.IO.Directory.CreateDirectory(subPathString); subPathString = System.IO.Path.Combine(pathString, "worksheets"); System.IO.Directory.CreateDirectory(subPathString); // Keep the console window open in debug mode. System.Console.WriteLine("Press any key to exit."); System.Console.ReadKey(); } } Next, create text files to hold the XML needed to describe the Excel spreadsheet. namespace MakeFiles3 { class Program { static void Main(string[] args) { // // https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/file-system/how-to-write-to-a-text-file // // .NET Framework 4.7.2 // string fileName = @"C:\Users\david\Desktop\Book3\_rels\.rels"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\docProps\app.xml"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\docProps\core.xml"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\xl\_rels\workbook.xml.rels"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\xl\theme\theme1.xml"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\xl\worksheets\sheet1.xml"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\xl\styles.xml"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\xl\workbook.xml"; fnWriteFile(fileName); fileName = @"C:\Users\david\Desktop\Book3\[Content_Types].xml"; fnWriteFile(fileName); // Keep the console window open in debug mode. 
System.Console.WriteLine("Press any key to exit."); System.Console.ReadKey(); bool fnWriteFile(string strFilePath) { if (!System.IO.File.Exists(strFilePath)) { using (System.IO.FileStream fs = System.IO.File.Create(strFilePath)) { return true; } } else { System.Console.WriteLine("File \"{0}\" already exists.", strFilePath); return false; } } } } } Next populate the text files with XML. The XML required is fairly verbose so you may need to use this github repository. https://github.com/DaveTallett26/MakeFiles4/blob/master/MakeFiles4/Program.cs // // https://learn.microsoft.com/en-us/dotnet/standard/io/how-to-write-text-to-a-file // .NET Framework 4.7.2 // using System.IO; namespace MakeFiles4 { class Program { static void Main(string[] args) { string xContents = @"a"; string xFilename = @"a"; xFilename = @"C:\Users\david\Desktop\Book3\[Content_Types].xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><Types xmlns=""http://schemas.openxmlformats.org/package/2006/content-types""><Default Extension=""rels"" ContentType=""application/vnd.openxmlformats-package.relationships+xml""/><Default Extension=""xml"" ContentType=""application/xml""/><Override PartName=""/xl/workbook.xml"" ContentType=""application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml""/><Override PartName=""/xl/worksheets/sheet1.xml"" ContentType=""application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml""/><Override PartName=""/xl/theme/theme1.xml"" ContentType=""application/vnd.openxmlformats-officedocument.theme+xml""/><Override PartName=""/xl/styles.xml"" ContentType=""application/vnd.openxmlformats-officedocument.spreadsheetml.styles+xml""/><Override PartName=""/docProps/core.xml"" ContentType=""application/vnd.openxmlformats-package.core-properties+xml""/><Override PartName=""/docProps/app.xml"" ContentType=""application/vnd.openxmlformats-officedocument.extended-properties+xml""/></Types>"; StartExstream(xContents, xFilename); 
xFilename = @"C:\Users\david\Desktop\Book3\_rels\.rels"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><Relationships xmlns=""http://schemas.openxmlformats.org/package/2006/relationships""><Relationship Id=""rId3"" Type=""http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties"" Target=""docProps/app.xml""/><Relationship Id=""rId2"" Type=""http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties"" Target=""docProps/core.xml""/><Relationship Id=""rId1"" Type=""http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument"" Target=""xl/workbook.xml""/></Relationships>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\docProps\app.xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><Properties xmlns=""http://schemas.openxmlformats.org/officeDocument/2006/extended-properties"" xmlns:vt=""http://schemas.openxmlformats.org/officeDocument/2006/docPropsVTypes""><Application>Microsoft Excel</Application><DocSecurity>0</DocSecurity><ScaleCrop>false</ScaleCrop><HeadingPairs><vt:vector size=""2"" baseType=""variant""><vt:variant><vt:lpstr>Worksheets</vt:lpstr></vt:variant><vt:variant><vt:i4>1</vt:i4></vt:variant></vt:vector></HeadingPairs><TitlesOfParts><vt:vector size=""1"" baseType=""lpstr""><vt:lpstr>Sheet1</vt:lpstr></vt:vector></TitlesOfParts><Company></Company><LinksUpToDate>false</LinksUpToDate><SharedDoc>false</SharedDoc><HyperlinksChanged>false</HyperlinksChanged><AppVersion>16.0300</AppVersion></Properties>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\docProps\core.xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><cp:coreProperties xmlns:cp=""http://schemas.openxmlformats.org/package/2006/metadata/core-properties"" xmlns:dc=""http://purl.org/dc/elements/1.1/"" xmlns:dcterms=""http://purl.org/dc/terms/"" 
xmlns:dcmitype=""http://purl.org/dc/dcmitype/"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""><dc:creator>David Tallett</dc:creator><cp:lastModifiedBy>David Tallett</cp:lastModifiedBy><dcterms:created xsi:type=""dcterms:W3CDTF"">2021-10-26T15:47:15Z</dcterms:created><dcterms:modified xsi:type=""dcterms:W3CDTF"">2021-10-26T15:47:35Z</dcterms:modified></cp:coreProperties>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\xl\styles.xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><styleSheet xmlns=""http://schemas.openxmlformats.org/spreadsheetml/2006/main"" xmlns:mc=""http://schemas.openxmlformats.org/markup-compatibility/2006"" mc:Ignorable=""x14ac x16r2 xr"" xmlns:x14ac=""http://schemas.microsoft.com/office/spreadsheetml/2009/9/ac"" xmlns:x16r2=""http://schemas.microsoft.com/office/spreadsheetml/2015/02/main"" xmlns:xr=""http://schemas.microsoft.com/office/spreadsheetml/2014/revision""><fonts count=""1"" x14ac:knownFonts=""1""><font><sz val=""11""/><color theme=""1""/><name val=""Calibri""/><family val=""2""/><scheme val=""minor""/></font></fonts><fills count=""2""><fill><patternFill patternType=""none""/></fill><fill><patternFill patternType=""gray125""/></fill></fills><borders count=""1""><border><left/><right/><top/><bottom/><diagonal/></border></borders><cellStyleXfs count=""1""><xf numFmtId=""0"" fontId=""0"" fillId=""0"" borderId=""0""/></cellStyleXfs><cellXfs count=""1""><xf numFmtId=""0"" fontId=""0"" fillId=""0"" borderId=""0"" xfId=""0""/></cellXfs><cellStyles count=""1""><cellStyle name=""Normal"" xfId=""0"" builtinId=""0""/></cellStyles><dxfs count=""0""/><tableStyles count=""0"" defaultTableStyle=""TableStyleMedium2"" defaultPivotStyle=""PivotStyleLight16""/><extLst><ext uri=""{EB79DEF2-80B8-43e5-95BD-54CBDDF9020C}"" xmlns:x14=""http://schemas.microsoft.com/office/spreadsheetml/2009/9/main""><x14:slicerStyles defaultSlicerStyle=""SlicerStyleLight1""/></ext><ext 
uri=""{9260A510-F301-46a8-8635-F512D64BE5F5}"" xmlns:x15=""http://schemas.microsoft.com/office/spreadsheetml/2010/11/main""><x15:timelineStyles defaultTimelineStyle=""TimeSlicerStyleLight1""/></ext></extLst></styleSheet>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\xl\workbook.xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><workbook xmlns=""http://schemas.openxmlformats.org/spreadsheetml/2006/main"" xmlns:r=""http://schemas.openxmlformats.org/officeDocument/2006/relationships"" xmlns:mc=""http://schemas.openxmlformats.org/markup-compatibility/2006"" mc:Ignorable=""x15 xr xr6 xr10 xr2"" xmlns:x15=""http://schemas.microsoft.com/office/spreadsheetml/2010/11/main"" xmlns:xr=""http://schemas.microsoft.com/office/spreadsheetml/2014/revision"" xmlns:xr6=""http://schemas.microsoft.com/office/spreadsheetml/2016/revision6"" xmlns:xr10=""http://schemas.microsoft.com/office/spreadsheetml/2016/revision10"" xmlns:xr2=""http://schemas.microsoft.com/office/spreadsheetml/2015/revision2""><fileVersion appName=""xl"" lastEdited=""7"" lowestEdited=""7"" rupBuild=""24430""/><workbookPr defaultThemeVersion=""166925""/><mc:AlternateContent xmlns:mc=""http://schemas.openxmlformats.org/markup-compatibility/2006""><mc:Choice Requires=""x15""><x15ac:absPath url=""C:\Users\david\Desktop\"" xmlns:x15ac=""http://schemas.microsoft.com/office/spreadsheetml/2010/11/ac""/></mc:Choice></mc:AlternateContent><xr:revisionPtr revIDLastSave=""0"" documentId=""8_{C633700D-2D40-49EE-8C5E-2561E28A6758}"" xr6:coauthVersionLast=""47"" xr6:coauthVersionMax=""47"" xr10:uidLastSave=""{00000000-0000-0000-0000-000000000000}""/><bookViews><workbookView xWindow=""-120"" yWindow=""-120"" windowWidth=""29040"" windowHeight=""15840"" xr2:uid=""{934C5B62-1DC1-4322-BAE8-00D615BD2FB3}""/></bookViews><sheets><sheet name=""Sheet1"" sheetId=""1"" r:id=""rId1""/></sheets><calcPr calcId=""191029""/><extLst><ext uri=""{140A7094-0E35-4892-8432-C4D2E57EDEB5}"" 
xmlns:x15=""http://schemas.microsoft.com/office/spreadsheetml/2010/11/main""><x15:workbookPr chartTrackingRefBase=""1""/></ext><ext uri=""{B58B0392-4F1F-4190-BB64-5DF3571DCE5F}"" xmlns:xcalcf=""http://schemas.microsoft.com/office/spreadsheetml/2018/calcfeatures""><xcalcf:calcFeatures><xcalcf:feature name=""microsoft.com:RD""/><xcalcf:feature name=""microsoft.com:Single""/><xcalcf:feature name=""microsoft.com:FV""/><xcalcf:feature name=""microsoft.com:CNMTM""/><xcalcf:feature name=""microsoft.com:LET_WF""/></xcalcf:calcFeatures></ext></extLst></workbook>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\xl\_rels\workbook.xml.rels"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><Relationships xmlns=""http://schemas.openxmlformats.org/package/2006/relationships""><Relationship Id=""rId3"" Type=""http://schemas.openxmlformats.org/officeDocument/2006/relationships/styles"" Target=""styles.xml""/><Relationship Id=""rId2"" Type=""http://schemas.openxmlformats.org/officeDocument/2006/relationships/theme"" Target=""theme/theme1.xml""/><Relationship Id=""rId1"" Type=""http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet"" Target=""worksheets/sheet1.xml""/></Relationships>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\xl\theme\theme1.xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><a:theme xmlns:a=""http://schemas.openxmlformats.org/drawingml/2006/main"" name=""Office Theme""><a:themeElements><a:clrScheme name=""Office""><a:dk1><a:sysClr val=""windowText"" lastClr=""000000""/></a:dk1><a:lt1><a:sysClr val=""window"" lastClr=""FFFFFF""/></a:lt1><a:dk2><a:srgbClr val=""44546A""/></a:dk2><a:lt2><a:srgbClr val=""E7E6E6""/></a:lt2><a:accent1><a:srgbClr val=""4472C4""/></a:accent1><a:accent2><a:srgbClr val=""ED7D31""/></a:accent2><a:accent3><a:srgbClr val=""A5A5A5""/></a:accent3><a:accent4><a:srgbClr 
val=""FFC000""/></a:accent4><a:accent5><a:srgbClr val=""5B9BD5""/></a:accent5><a:accent6><a:srgbClr val=""70AD47""/></a:accent6><a:hlink><a:srgbClr val=""0563C1""/></a:hlink><a:folHlink><a:srgbClr val=""954F72""/></a:folHlink></a:clrScheme><a:fontScheme name=""Office""><a:majorFont><a:latin typeface=""Calibri Light"" panose=""020F0302020204030204""/><a:ea typeface=""""/><a:cs typeface=""""/><a:font script=""Jpan"" typeface=""游ゴシック Light""/><a:font script=""Hang"" typeface=""맑은 κ³ λ”•""/><a:font script=""Hans"" typeface=""η­‰ηΊΏ Light""/><a:font script=""Hant"" typeface=""ζ–°η΄°ζ˜Žι«”""/><a:font script=""Arab"" typeface=""Times New Roman""/><a:font script=""Hebr"" typeface=""Times New Roman""/><a:font script=""Thai"" typeface=""Tahoma""/><a:font script=""Ethi"" typeface=""Nyala""/><a:font script=""Beng"" typeface=""Vrinda""/><a:font script=""Gujr"" typeface=""Shruti""/><a:font script=""Khmr"" typeface=""MoolBoran""/><a:font script=""Knda"" typeface=""Tunga""/><a:font script=""Guru"" typeface=""Raavi""/><a:font script=""Cans"" typeface=""Euphemia""/><a:font script=""Cher"" typeface=""Plantagenet Cherokee""/><a:font script=""Yiii"" typeface=""Microsoft Yi Baiti""/><a:font script=""Tibt"" typeface=""Microsoft Himalaya""/><a:font script=""Thaa"" typeface=""MV Boli""/><a:font script=""Deva"" typeface=""Mangal""/><a:font script=""Telu"" typeface=""Gautami""/><a:font script=""Taml"" typeface=""Latha""/><a:font script=""Syrc"" typeface=""Estrangelo Edessa""/><a:font script=""Orya"" typeface=""Kalinga""/><a:font script=""Mlym"" typeface=""Kartika""/><a:font script=""Laoo"" typeface=""DokChampa""/><a:font script=""Sinh"" typeface=""Iskoola Pota""/><a:font script=""Mong"" typeface=""Mongolian Baiti""/><a:font script=""Viet"" typeface=""Times New Roman""/><a:font script=""Uigh"" typeface=""Microsoft Uighur""/><a:font script=""Geor"" typeface=""Sylfaen""/><a:font script=""Armn"" typeface=""Arial""/><a:font script=""Bugi"" typeface=""Leelawadee UI""/><a:font script=""Bopo"" 
typeface=""Microsoft JhengHei""/><a:font script=""Java"" typeface=""Javanese Text""/><a:font script=""Lisu"" typeface=""Segoe UI""/><a:font script=""Mymr"" typeface=""Myanmar Text""/><a:font script=""Nkoo"" typeface=""Ebrima""/><a:font script=""Olck"" typeface=""Nirmala UI""/><a:font script=""Osma"" typeface=""Ebrima""/><a:font script=""Phag"" typeface=""Phagspa""/><a:font script=""Syrn"" typeface=""Estrangelo Edessa""/><a:font script=""Syrj"" typeface=""Estrangelo Edessa""/><a:font script=""Syre"" typeface=""Estrangelo Edessa""/><a:font script=""Sora"" typeface=""Nirmala UI""/><a:font script=""Tale"" typeface=""Microsoft Tai Le""/><a:font script=""Talu"" typeface=""Microsoft New Tai Lue""/><a:font script=""Tfng"" typeface=""Ebrima""/></a:majorFont><a:minorFont><a:latin typeface=""Calibri"" panose=""020F0502020204030204""/><a:ea typeface=""""/><a:cs typeface=""""/><a:font script=""Jpan"" typeface=""游ゴシック""/><a:font script=""Hang"" typeface=""맑은 κ³ λ”•""/><a:font script=""Hans"" typeface=""η­‰ηΊΏ""/><a:font script=""Hant"" typeface=""ζ–°η΄°ζ˜Žι«”""/><a:font script=""Arab"" typeface=""Arial""/><a:font script=""Hebr"" typeface=""Arial""/><a:font script=""Thai"" typeface=""Tahoma""/><a:font script=""Ethi"" typeface=""Nyala""/><a:font script=""Beng"" typeface=""Vrinda""/><a:font script=""Gujr"" typeface=""Shruti""/><a:font script=""Khmr"" typeface=""DaunPenh""/><a:font script=""Knda"" typeface=""Tunga""/><a:font script=""Guru"" typeface=""Raavi""/><a:font script=""Cans"" typeface=""Euphemia""/><a:font script=""Cher"" typeface=""Plantagenet Cherokee""/><a:font script=""Yiii"" typeface=""Microsoft Yi Baiti""/><a:font script=""Tibt"" typeface=""Microsoft Himalaya""/><a:font script=""Thaa"" typeface=""MV Boli""/><a:font script=""Deva"" typeface=""Mangal""/><a:font script=""Telu"" typeface=""Gautami""/><a:font script=""Taml"" typeface=""Latha""/><a:font script=""Syrc"" typeface=""Estrangelo Edessa""/><a:font script=""Orya"" typeface=""Kalinga""/><a:font script=""Mlym"" 
typeface=""Kartika""/><a:font script=""Laoo"" typeface=""DokChampa""/><a:font script=""Sinh"" typeface=""Iskoola Pota""/><a:font script=""Mong"" typeface=""Mongolian Baiti""/><a:font script=""Viet"" typeface=""Arial""/><a:font script=""Uigh"" typeface=""Microsoft Uighur""/><a:font script=""Geor"" typeface=""Sylfaen""/><a:font script=""Armn"" typeface=""Arial""/><a:font script=""Bugi"" typeface=""Leelawadee UI""/><a:font script=""Bopo"" typeface=""Microsoft JhengHei""/><a:font script=""Java"" typeface=""Javanese Text""/><a:font script=""Lisu"" typeface=""Segoe UI""/><a:font script=""Mymr"" typeface=""Myanmar Text""/><a:font script=""Nkoo"" typeface=""Ebrima""/><a:font script=""Olck"" typeface=""Nirmala UI""/><a:font script=""Osma"" typeface=""Ebrima""/><a:font script=""Phag"" typeface=""Phagspa""/><a:font script=""Syrn"" typeface=""Estrangelo Edessa""/><a:font script=""Syrj"" typeface=""Estrangelo Edessa""/><a:font script=""Syre"" typeface=""Estrangelo Edessa""/><a:font script=""Sora"" typeface=""Nirmala UI""/><a:font script=""Tale"" typeface=""Microsoft Tai Le""/><a:font script=""Talu"" typeface=""Microsoft New Tai Lue""/><a:font script=""Tfng"" typeface=""Ebrima""/></a:minorFont></a:fontScheme><a:fmtScheme name=""Office""><a:fillStyleLst><a:solidFill><a:schemeClr val=""phClr""/></a:solidFill><a:gradFill rotWithShape=""1""><a:gsLst><a:gs pos=""0""><a:schemeClr val=""phClr""><a:lumMod val=""110000""/><a:satMod val=""105000""/><a:tint val=""67000""/></a:schemeClr></a:gs><a:gs pos=""50000""><a:schemeClr val=""phClr""><a:lumMod val=""105000""/><a:satMod val=""103000""/><a:tint val=""73000""/></a:schemeClr></a:gs><a:gs pos=""100000""><a:schemeClr val=""phClr""><a:lumMod val=""105000""/><a:satMod val=""109000""/><a:tint val=""81000""/></a:schemeClr></a:gs></a:gsLst><a:lin ang=""5400000"" scaled=""0""/></a:gradFill><a:gradFill rotWithShape=""1""><a:gsLst><a:gs pos=""0""><a:schemeClr val=""phClr""><a:satMod val=""103000""/><a:lumMod val=""102000""/><a:tint 
val=""94000""/></a:schemeClr></a:gs><a:gs pos=""50000""><a:schemeClr val=""phClr""><a:satMod val=""110000""/><a:lumMod val=""100000""/><a:shade val=""100000""/></a:schemeClr></a:gs><a:gs pos=""100000""><a:schemeClr val=""phClr""><a:lumMod val=""99000""/><a:satMod val=""120000""/><a:shade val=""78000""/></a:schemeClr></a:gs></a:gsLst><a:lin ang=""5400000"" scaled=""0""/></a:gradFill></a:fillStyleLst><a:lnStyleLst><a:ln w=""6350"" cap=""flat"" cmpd=""sng"" algn=""ctr""><a:solidFill><a:schemeClr val=""phClr""/></a:solidFill><a:prstDash val=""solid""/><a:miter lim=""800000""/></a:ln><a:ln w=""12700"" cap=""flat"" cmpd=""sng"" algn=""ctr""><a:solidFill><a:schemeClr val=""phClr""/></a:solidFill><a:prstDash val=""solid""/><a:miter lim=""800000""/></a:ln><a:ln w=""19050"" cap=""flat"" cmpd=""sng"" algn=""ctr""><a:solidFill><a:schemeClr val=""phClr""/></a:solidFill><a:prstDash val=""solid""/><a:miter lim=""800000""/></a:ln></a:lnStyleLst><a:effectStyleLst><a:effectStyle><a:effectLst/></a:effectStyle><a:effectStyle><a:effectLst/></a:effectStyle><a:effectStyle><a:effectLst><a:outerShdw blurRad=""57150"" dist=""19050"" dir=""5400000"" algn=""ctr"" rotWithShape=""0""><a:srgbClr val=""000000""><a:alpha val=""63000""/></a:srgbClr></a:outerShdw></a:effectLst></a:effectStyle></a:effectStyleLst><a:bgFillStyleLst><a:solidFill><a:schemeClr val=""phClr""/></a:solidFill><a:solidFill><a:schemeClr val=""phClr""><a:tint val=""95000""/><a:satMod val=""170000""/></a:schemeClr></a:solidFill><a:gradFill rotWithShape=""1""><a:gsLst><a:gs pos=""0""><a:schemeClr val=""phClr""><a:tint val=""93000""/><a:satMod val=""150000""/><a:shade val=""98000""/><a:lumMod val=""102000""/></a:schemeClr></a:gs><a:gs pos=""50000""><a:schemeClr val=""phClr""><a:tint val=""98000""/><a:satMod val=""130000""/><a:shade val=""90000""/><a:lumMod val=""103000""/></a:schemeClr></a:gs><a:gs pos=""100000""><a:schemeClr val=""phClr""><a:shade val=""63000""/><a:satMod val=""120000""/></a:schemeClr></a:gs></a:gsLst><a:lin 
ang=""5400000"" scaled=""0""/></a:gradFill></a:bgFillStyleLst></a:fmtScheme></a:themeElements><a:objectDefaults/><a:extraClrSchemeLst/><a:extLst><a:ext uri=""{05A4C25C-085E-4340-85A3-A5531E510DB2}""><thm15:themeFamily xmlns:thm15=""http://schemas.microsoft.com/office/thememl/2012/main"" name=""Office Theme"" id=""{62F939B6-93AF-4DB8-9C6B-D6C7DFDC589F}"" vid=""{4A3C46E8-61CC-4603-A589-7422A47A8E4A}""/></a:ext></a:extLst></a:theme>"; StartExstream(xContents, xFilename); xFilename = @"C:\Users\david\Desktop\Book3\xl\worksheets\sheet1.xml"; xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><worksheet xmlns=""http://schemas.openxmlformats.org/spreadsheetml/2006/main"" xmlns:r=""http://schemas.openxmlformats.org/officeDocument/2006/relationships"" xmlns:mc=""http://schemas.openxmlformats.org/markup-compatibility/2006"" mc:Ignorable=""x14ac xr xr2 xr3"" xmlns:x14ac=""http://schemas.microsoft.com/office/spreadsheetml/2009/9/ac"" xmlns:xr=""http://schemas.microsoft.com/office/spreadsheetml/2014/revision"" xmlns:xr2=""http://schemas.microsoft.com/office/spreadsheetml/2015/revision2"" xmlns:xr3=""http://schemas.microsoft.com/office/spreadsheetml/2016/revision3"" xr:uid=""{54E3D330-4E78-4755-89E0-1AADACAC4953}""><dimension ref=""A1:A3""/><sheetViews><sheetView tabSelected=""1"" workbookViewId=""0""><selection activeCell=""A4"" sqref=""A4""/></sheetView></sheetViews><sheetFormatPr defaultRowHeight=""15"" x14ac:dyDescent=""0.25""/><sheetData><row r=""1"" spans=""1:1"" x14ac:dyDescent=""0.25""><c r=""A1""><v>1</v></c></row><row r=""2"" spans=""1:1"" x14ac:dyDescent=""0.25""><c r=""A2""><v>2</v></c></row><row r=""3"" spans=""1:1"" x14ac:dyDescent=""0.25""><c r=""A3""><v>3</v></c></row></sheetData><pageMargins left=""0.7"" right=""0.7"" top=""0.75"" bottom=""0.75"" header=""0.3"" footer=""0.3""/></worksheet>"; StartExstream(xContents, xFilename); // Keep the console window open in debug mode. 
            System.Console.WriteLine("Press any key to exit.");
            System.Console.ReadKey();

            bool StartExstream(string strLine, string strFileName)
            {
                // Write the string to a file.
                using (StreamWriter outputFile = new StreamWriter(strFileName))
                {
                    outputFile.WriteLine(strLine);
                    return true;
                }
            }
        }
    }
}

Finally, zip the folder structure containing the XML:

namespace ZipFolder
// .NET Framework 4.7.2
// https://stackoverflow.com/questions/15241889/i-didnt-find-zipfile-class-in-the-system-io-compression-namespace?answertab=votes#tab-top
{
    class Program
    {
        static void Main(string[] args)
        {
            string xlPath = @"C:\Users\david\Desktop\Book3.xlsx";
            string folderPath = @"C:\Users\david\Desktop\Book3";
            System.IO.Compression.ZipFile.CreateFromDirectory(folderPath, xlPath);

            // Keep the console window open in debug mode.
            System.Console.WriteLine("Press any key to exit.");
            System.Console.ReadKey();
        }
    }
}

This produces an Excel file named Book3.xlsx that is valid and opens cleanly in Excel 365 on Windows 11. The result is a very simple spreadsheet, but you may need to reverse engineer a more complex version. Here is the code to unzip a .xlsx file:

namespace UnZipXL
// .NET Framework 4.7.2
// https://stackoverflow.com/questions/15241889/i-didnt-find-zipfile-class-in-the-system-io-compression-namespace?answertab=votes#tab-top
{
    class Program
    {
        static void Main(string[] args)
        {
            string XLPath = @"C:\Users\david\Desktop\Book2.xlsx";
            string extractPath = @"C:\Users\david\Desktop\extract";
            System.IO.Compression.ZipFile.ExtractToDirectory(XLPath, extractPath);

            // Keep the console window open in debug mode.
            System.Console.WriteLine("Press any key to exit.");
            System.Console.ReadKey();
        }
    }
}

Update: here is a code fragment to update the Excel file. Again, it is very simple.
//
// https://learn.microsoft.com/en-us/dotnet/standard/io/how-to-write-text-to-a-file
// .NET Framework 4.7.2
//
using System.IO;

namespace UpdateWorksheet5
{
    class Program
    {
        static void Main(string[] args)
        {
            string xContents = @"a";
            string xFilename = @"a";

            xFilename = @"C:\Users\david\Desktop\Book3\xl\worksheets\sheet1.xml";
            xContents = @"<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><worksheet xmlns=""http://schemas.openxmlformats.org/spreadsheetml/2006/main"" xmlns:r=""http://schemas.openxmlformats.org/officeDocument/2006/relationships"" xmlns:mc=""http://schemas.openxmlformats.org/markup-compatibility/2006"" mc:Ignorable=""x14ac xr xr2 xr3"" xmlns:x14ac=""http://schemas.microsoft.com/office/spreadsheetml/2009/9/ac"" xmlns:xr=""http://schemas.microsoft.com/office/spreadsheetml/2014/revision"" xmlns:xr2=""http://schemas.microsoft.com/office/spreadsheetml/2015/revision2"" xmlns:xr3=""http://schemas.microsoft.com/office/spreadsheetml/2016/revision3"" xr:uid=""{54E3D330-4E78-4755-89E0-1AADACAC4953}""><dimension ref=""A1:A3""/><sheetViews><sheetView tabSelected=""1"" workbookViewId=""0""><selection activeCell=""A4"" sqref=""A4""/></sheetView></sheetViews><sheetFormatPr defaultRowHeight=""15"" x14ac:dyDescent=""0.25""/><sheetData><row r=""1"" spans=""1:1"" x14ac:dyDescent=""0.25""><c r=""A1""><v>1</v></c></row><row r=""2"" spans=""1:1"" x14ac:dyDescent=""0.25""><c r=""A2""><v>2</v></c></row><row r=""3"" spans=""1:1"" x14ac:dyDescent=""0.25""><c r=""A3""><v>3</v></c></row></sheetData><pageMargins left=""0.7"" right=""0.7"" top=""0.75"" bottom=""0.75"" header=""0.3"" footer=""0.3""/></worksheet>";
            xContents = xContents.Remove(941, 1).Insert(941, "0"); // character to replace is at 942 => index 941
            StartExstream(xContents, xFilename);

            // Keep the console window open in debug mode.
            System.Console.WriteLine("Press any key to exit.");
            System.Console.ReadKey();

            bool StartExstream(string strLine, string strFileName)
            {
                // Write the string to a file.
                using (StreamWriter outputFile = new StreamWriter(strFileName))
                {
                    outputFile.WriteLine(strLine);
                    return true;
                }
            }
        }
    }
}

Update 2: this code works almost unchanged, except for the folder paths, on a Mac. Using Microsoft Excel Online, .NET Core 3.1, Visual Studio 2019 for Mac, macOS Monterey 12.1.

A: I found another library that does it without many dependencies: MiniExcel. You can create a DataSet with DataTables as spreadsheets (the name of the table being the name of the spreadsheet) and save it to an .xlsx file by using

var stream = File.Create(filePath);
stream.SaveAs(dataSet);

or

MiniExcel.SaveAs(filePath, dataSet);

It also offers reading of Excel files and supports reading and writing of CSV files.

A: The various Office 2003 XML libraries available work pretty well for smaller Excel files. However, I find the sheer size of a large workbook saved in the XML format to be a problem. For example, a workbook I work with that would be 40MB in the new (and admittedly more tightly packed) XLSX format becomes a 360MB XML file. As far as my research has taken me, there are two commercial packages that allow output to the older binary file formats. They are:

*Gembox
*ComponentOne Excel

Neither are cheap (500USD and 800USD respectively, I think), but both work independent of Excel itself. What I would be curious about is the Excel output module for the likes of OpenOffice.org. I wonder if it could be ported from Java to .NET.

A: I have written simple code to export a DataSet to Excel without using Excel objects, by using System.IO.StreamWriter. Below is the code, which will read all tables from the DataSet and write them to sheets one by one. I took help from this article.
public static void exportToExcel(DataSet source, string fileName)
{
    const string endExcelXML = "</Workbook>";
    const string startExcelXML = "<?xml version=\"1.0\"?>\r\n<Workbook " +
        "xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\"\r\n" +
        " xmlns:o=\"urn:schemas-microsoft-com:office:office\"\r\n " +
        "xmlns:x=\"urn:schemas-microsoft-com:office:" +
        "excel\"\r\n xmlns:ss=\"urn:schemas-microsoft-com:" +
        "office:spreadsheet\">\r\n <Styles>\r\n " +
        "<Style ss:ID=\"Default\" ss:Name=\"Normal\">\r\n " +
        "<Alignment ss:Vertical=\"Bottom\"/>\r\n <Borders/>" +
        "\r\n <Font/>\r\n <Interior/>\r\n <NumberFormat/>" +
        "\r\n <Protection/>\r\n </Style>\r\n " +
        "<Style ss:ID=\"BoldColumn\">\r\n <Font " +
        "x:Family=\"Swiss\" ss:Bold=\"1\"/>\r\n </Style>\r\n " +
        "<Style ss:ID=\"StringLiteral\">\r\n <NumberFormat" +
        " ss:Format=\"@\"/>\r\n </Style>\r\n <Style " +
        "ss:ID=\"Decimal\">\r\n <NumberFormat " +
        "ss:Format=\"0.0000\"/>\r\n </Style>\r\n " +
        "<Style ss:ID=\"Integer\">\r\n <NumberFormat " +
        "ss:Format=\"0\"/>\r\n </Style>\r\n <Style " +
        "ss:ID=\"DateLiteral\">\r\n <NumberFormat " +
        "ss:Format=\"mm/dd/yyyy;@\"/>\r\n </Style>\r\n " +
        "</Styles>\r\n ";
    System.IO.StreamWriter excelDoc = null;
    excelDoc = new System.IO.StreamWriter(fileName);
    int sheetCount = 1;
    excelDoc.Write(startExcelXML);
    foreach (DataTable table in source.Tables)
    {
        int rowCount = 0;
        excelDoc.Write("<Worksheet ss:Name=\"" + table.TableName + "\">");
        excelDoc.Write("<Table>");
        excelDoc.Write("<Row>");
        for (int x = 0; x < table.Columns.Count; x++)
        {
            excelDoc.Write("<Cell ss:StyleID=\"BoldColumn\"><Data ss:Type=\"String\">");
            excelDoc.Write(table.Columns[x].ColumnName);
            excelDoc.Write("</Data></Cell>");
        }
        excelDoc.Write("</Row>");
        foreach (DataRow x in table.Rows)
        {
            rowCount++;
            // If the number of rows is > 64000, create a new sheet to continue output.
            if (rowCount == 64000)
            {
                rowCount = 0;
                sheetCount++;
                excelDoc.Write("</Table>");
                excelDoc.Write(" </Worksheet>");
                excelDoc.Write("<Worksheet ss:Name=\"" + table.TableName + "\">");
                excelDoc.Write("<Table>");
            }
            excelDoc.Write("<Row>"); //ID=" + rowCount + "
            for (int y = 0; y < table.Columns.Count; y++)
            {
                System.Type rowType;
                rowType = x[y].GetType();
                switch (rowType.ToString())
                {
                    case "System.String":
                        string XMLstring = x[y].ToString();
                        XMLstring = XMLstring.Trim();
                        XMLstring = XMLstring.Replace("&", "&amp;");
                        XMLstring = XMLstring.Replace(">", "&gt;");
                        XMLstring = XMLstring.Replace("<", "&lt;");
                        excelDoc.Write("<Cell ss:StyleID=\"StringLiteral\">" +
                                       "<Data ss:Type=\"String\">");
                        excelDoc.Write(XMLstring);
                        excelDoc.Write("</Data></Cell>");
                        break;
                    case "System.DateTime":
                        // Excel has a specific date format of YYYY-MM-DD followed by
                        // the letter 'T' then hh:mm:sss.lll, e.g. 2005-01-31T24:01:21.000.
                        // The following code puts the date stored in XMLDate
                        // into the format above.
                        DateTime XMLDate = (DateTime)x[y];
                        string XMLDatetoString = ""; // Excel-converted date
                        XMLDatetoString = XMLDate.Year.ToString() +
                            "-" +
                            (XMLDate.Month < 10 ? "0" + XMLDate.Month.ToString() : XMLDate.Month.ToString()) +
                            "-" +
                            (XMLDate.Day < 10 ? "0" + XMLDate.Day.ToString() : XMLDate.Day.ToString()) +
                            "T" +
                            (XMLDate.Hour < 10 ? "0" + XMLDate.Hour.ToString() : XMLDate.Hour.ToString()) +
                            ":" +
                            (XMLDate.Minute < 10 ? "0" + XMLDate.Minute.ToString() : XMLDate.Minute.ToString()) +
                            ":" +
                            (XMLDate.Second < 10 ? "0" + XMLDate.Second.ToString() : XMLDate.Second.ToString()) +
                            ".000";
                        excelDoc.Write("<Cell ss:StyleID=\"DateLiteral\">" +
                                       "<Data ss:Type=\"DateTime\">");
                        excelDoc.Write(XMLDatetoString);
                        excelDoc.Write("</Data></Cell>");
                        break;
                    case "System.Boolean":
                        excelDoc.Write("<Cell ss:StyleID=\"StringLiteral\">" +
                                       "<Data ss:Type=\"String\">");
                        excelDoc.Write(x[y].ToString());
                        excelDoc.Write("</Data></Cell>");
                        break;
                    case "System.Int16":
                    case "System.Int32":
                    case "System.Int64":
                    case "System.Byte":
                        excelDoc.Write("<Cell ss:StyleID=\"Integer\">" +
                                       "<Data ss:Type=\"Number\">");
                        excelDoc.Write(x[y].ToString());
                        excelDoc.Write("</Data></Cell>");
                        break;
                    case "System.Decimal":
                    case "System.Double":
                        excelDoc.Write("<Cell ss:StyleID=\"Decimal\">" +
                                       "<Data ss:Type=\"Number\">");
                        excelDoc.Write(x[y].ToString());
                        excelDoc.Write("</Data></Cell>");
                        break;
                    case "System.DBNull":
                        excelDoc.Write("<Cell ss:StyleID=\"StringLiteral\">" +
                                       "<Data ss:Type=\"String\">");
                        excelDoc.Write("");
                        excelDoc.Write("</Data></Cell>");
                        break;
                    default:
                        throw (new Exception(rowType.ToString() + " not handled."));
                }
            }
            excelDoc.Write("</Row>");
        }
        excelDoc.Write("</Table>");
        excelDoc.Write(" </Worksheet>");
        sheetCount++;
    }
    excelDoc.Write(endExcelXML);
    excelDoc.Close();
}

A: I've just recently used FlexCel.NET and found it to be an excellent library! I don't say that about too many software products. No point in giving the whole sales pitch here; you can read all the features on their website. It is a commercial product, but you get the full source if you buy it, so I suppose you could compile it into your assembly if you really wanted to. Otherwise it's just one extra assembly to xcopy - no configuration or installation or anything like that. I don't think you'll find any way to do this without third-party libraries, as the .NET Framework obviously does not have built-in support for it, and OLE Automation is just a whole world of pain.
A: OpenXML is also a good alternative that helps avoid installing MS Excel on the server. The Open XML SDK 2.0 provided by Microsoft simplifies the task of manipulating Open XML packages and the underlying Open XML schema elements within a package. The Open XML Application Programming Interface (API) encapsulates many common tasks that developers perform on Open XML packages. Check this out: OpenXML: Alternative that helps avoid installing MS Excel on Server

A: I've used the following open source projects with success:

*ExcelPackage for OOXML formats (Office 2007)
*NPOI for the .XLS format (Office 2003). NPOI 2.0 (beta) also supports XLSX.

Take a look at my blog posts:

Creating Excel spreadsheets .XLS and .XLSX in C#
NPOI with Excel Table and dynamic Chart

A: Well, you can also use a third-party library like Aspose. This library has the benefit that it does not require Excel to be installed on your machine, which would be ideal in your case.

A: I agree about generating XML Spreadsheets; here's an example of how to do it for C# 3 (everyone just blogs about it in VB 9 :P) http://www.aaron-powell.com/linq-to-xml-to-excel

A: Just want to add another reference to a third-party solution that directly addresses your issue: http://www.officewriter.com (Disclaimer: I work for SoftArtisans, the company that makes OfficeWriter)

A:

public class GridViewExportUtil
{
    public static void Export(string fileName, GridView gv)
    {
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.AddHeader(
            "content-disposition", string.Format("attachment; filename={0}", fileName));
        HttpContext.Current.Response.ContentType = "application/ms-excel";
        using (StringWriter sw = new StringWriter())
        {
            using (HtmlTextWriter htw = new HtmlTextWriter(sw))
            {
                // Create a form to contain the grid
                Table table = new Table();

                // add the header row to the table
                if (gv.HeaderRow != null)
                {
                    GridViewExportUtil.PrepareControlForExport(gv.HeaderRow);
                    table.Rows.Add(gv.HeaderRow);
                }

                // add each of the data rows to the table
                foreach (GridViewRow row in gv.Rows)
                {
                    GridViewExportUtil.PrepareControlForExport(row);
                    table.Rows.Add(row);
                }

                // add the footer row to the table
                if (gv.FooterRow != null)
                {
                    GridViewExportUtil.PrepareControlForExport(gv.FooterRow);
                    table.Rows.Add(gv.FooterRow);
                }

                // render the table into the htmlwriter
                table.RenderControl(htw);

                // render the htmlwriter into the response
                HttpContext.Current.Response.Write(sw.ToString());
                HttpContext.Current.Response.End();
            }
        }
    }

    /// <summary>
    /// Replace any of the contained controls with literals
    /// </summary>
    /// <param name="control"></param>
    private static void PrepareControlForExport(Control control)
    {
        for (int i = 0; i < control.Controls.Count; i++)
        {
            Control current = control.Controls[i];
            if (current is LinkButton)
            {
                control.Controls.Remove(current);
                control.Controls.AddAt(i, new LiteralControl((current as LinkButton).Text));
            }
            else if (current is ImageButton)
            {
                control.Controls.Remove(current);
                control.Controls.AddAt(i, new LiteralControl((current as ImageButton).AlternateText));
            }
            else if (current is HyperLink)
            {
                control.Controls.Remove(current);
                control.Controls.AddAt(i, new LiteralControl((current as HyperLink).Text));
            }
            else if (current is DropDownList)
            {
                control.Controls.Remove(current);
                control.Controls.AddAt(i, new LiteralControl((current as DropDownList).SelectedItem.Text));
            }
            else if (current is CheckBox)
            {
                control.Controls.Remove(current);
                control.Controls.AddAt(i, new LiteralControl((current as CheckBox).Checked ? "True" : "False"));
            }
            if (current.HasControls())
            {
                GridViewExportUtil.PrepareControlForExport(current);
            }
        }
    }
}

Hi, this solution exports your GridView to an Excel file; it might help you out.

A: Some third-party component vendors like Infragistics or Syncfusion provide very good Excel export capabilities that do not require Microsoft Excel to be installed.
Since these vendors also provide advanced UI grid components, these components are particularly handy if you want the style and layout of an Excel export to mimic the current state of a grid in the user interface of your application.

If your export is intended to be executed server side, with emphasis on the data to be exported and with no link to the UI, then I would go for one of the free open source options (e.g. ExcelLibrary).

I have previously been involved with projects that attempted to use server side automation on the Microsoft Office suite. Based on this experience I would strongly recommend against that approach.

A: IKVM + POI

Or, you could use the Interop ...

A: Here's a way to do it with LINQ to XML, complete with sample code: Quickly Import and Export Excel Data with LINQ to XML

It's a little complex, since you have to import namespaces and so forth, but it does let you avoid any external dependencies. (Also, of course, it's VB .NET, not C#, but you can always isolate the VB .NET stuff in its own project to use XML Literals, and do everything else in C#.)

A: You can create nicely formatted Excel files using this library: http://officehelper.codeplex.com/documentation

See the sample below:

using (ExcelHelper helper = new ExcelHelper(TEMPLATE_FILE_NAME, GENERATED_FILE_NAME))
{
    helper.Direction = ExcelHelper.DirectionType.TOP_TO_DOWN;
    helper.CurrentSheetName = "Sheet1";
    helper.CurrentPosition = new CellRef("C3");

    // the template xlsx should contain the named range "header"; use the command "insert"/"name".
    helper.InsertRange("header");

    // the template xlsx should contain the named range "sample1";
    // inside this range you should have cells with these values:
    // <name>, <value> and <comment>, which will be replaced by the values from getSample()
    CellRangeTemplate sample1 =
        helper.CreateCellRangeTemplate("sample1", new List<string> {"name", "value", "comment"});
    helper.InsertRange(sample1, getSample());

    // you could use other named ranges here to insert new cells, calling InsertRange as many times as you want;
    // the ranges will be copied one after another;
    // you can even change the direction or the current cell/sheet before you insert.
    // typically you put all your "template ranges" (the names) on the same sheet and then you just delete it
    helper.DeleteSheet("Sheet3");
}

where the sample looks like this:

private IEnumerable<List<object>> getSample()
{
    var random = new Random();
    for (int loop = 0; loop < 3000; loop++)
    {
        yield return new List<object> {"test", DateTime.Now.AddDays(random.NextDouble()*100 - 50), loop};
    }
}

A: The simplest and fastest way to create an Excel file from C# is to use the Open XML Productivity Tool. The Open XML Productivity Tool comes with the Open XML SDK installation. The tool reverse engineers any Excel file into C# code. The C# code can then be used to re-generate that file.

An overview of the process involved is:
* Install the Open XML SDK with the tool.
* Create an Excel file using the latest Excel client with the desired look. Name it DesiredLook.xlsx.
* With the tool, open DesiredLook.xlsx and click the Reflect Code button near the top.
* The C# code for your file will be generated in the right pane of the tool. Add this to your C# solution and generate files with that desired look.

As a bonus, this method works for any Word and PowerPoint files. As the C# developer, you will then make changes to the code to fit your needs.

I have developed a simple WPF app on github which will run on Windows for this purpose.
There is a placeholder class called GeneratedClass where you can paste the generated code. If you go back one version of the file, it will generate an excel file like this:

A: You can use OLEDB to create and manipulate Excel files. Check this: Reading and Writing Excel using OLEDB.

Typical example:

using (OleDbConnection conn = new OleDbConnection(
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\temp\\test.xls;Extended Properties='Excel 8.0;HDR=Yes'"))
{
    conn.Open();
    OleDbCommand cmd = new OleDbCommand(
        "CREATE TABLE [Sheet1] ([Column1] string, [Column2] string)", conn);
    cmd.ExecuteNonQuery();
}

EDIT - Some more links:
* Hey, Scripting Guy! How Can I Read from Excel Without Using Excel?
* How To Use ADO.NET to Retrieve and Modify Records in an Excel Workbook With Visual Basic .NET
* Reading and Writing Excel Spreadsheets Using ADO.NET C# DbProviderFactory

A: You can use a library called ExcelLibrary. It's a free, open source library posted on Google Code: ExcelLibrary

This looks to be a port of the PHP ExcelWriter that you mentioned above. It will not write to the new .xlsx format yet, but they are working on adding that functionality in.

It's very simple, small and easy to use. Plus it has a DataSetHelper that lets you use DataSets and DataTables to easily work with Excel data.

ExcelLibrary seems to still only work for the older Excel format (.xls files), but may be adding support in the future for newer 2007/2010 formats.

You can also use EPPlus, which works only for Excel 2007/2010 format files (.xlsx files). There's also NPOI which works with both.

There are a few known bugs with each library as noted in the comments. In all, EPPlus seems to be the best choice as time goes on. It seems to be more actively updated and documented as well.
Also, as noted by @АртёмЦарионов below, EPPlus has support for Pivot Tables, and ExcelLibrary may have some support (Pivot table issue in ExcelLibrary).

Here are a couple links for quick reference:
ExcelLibrary - GNU Lesser GPL
EPPlus - GNU (LGPL) - No longer maintained
EPPlus 5 - Polyform Noncommercial - Starting May 2020
NPOI - Apache License

Here is some example code for ExcelLibrary, taking data from a database and creating a workbook from it. Note that the ExcelLibrary code is the single line at the bottom:

//Create the data set and table
DataSet ds = new DataSet("New_DataSet");
DataTable dt = new DataTable("New_DataTable");

//Set the locale for each
ds.Locale = System.Threading.Thread.CurrentThread.CurrentCulture;
dt.Locale = System.Threading.Thread.CurrentThread.CurrentCulture;

//Open a DB connection (in this example with OleDB)
OleDbConnection con = new OleDbConnection(dbConnectionString);
con.Open();

//Create a query and fill the data table with the data from the DB
string sql = "SELECT Whatever FROM MyDBTable;";
OleDbCommand cmd = new OleDbCommand(sql, con);
OleDbDataAdapter adptr = new OleDbDataAdapter();
adptr.SelectCommand = cmd;
adptr.Fill(dt);
con.Close();

//Add the table to the data set
ds.Tables.Add(dt);

//Here's the easy part. Create the Excel worksheet from the data set
ExcelLibrary.DataSetHelper.CreateWorkbook("MyExcelFile.xls", ds);

Creating the Excel file is as easy as that. You can also manually create Excel files, but the above functionality is what really impressed me.

A: If you build a DataTable or DataGridView in code, you can save all its data using this simple method. The method is not recommended, but it works, even if MS Excel is not installed on your computer.
try
{
    SaveFileDialog saveFileDialog1 = new SaveFileDialog();
    saveFileDialog1.Filter = "Excel Documents (*.xls)|*.xls";
    saveFileDialog1.FileName = "Employee Details.xls";
    if (saveFileDialog1.ShowDialog() == DialogResult.OK)
    {
        string fname = saveFileDialog1.FileName;
        StreamWriter wr = new StreamWriter(fname);

        // write the column headers
        for (int i = 0; i < DataTable.Columns.Count; i++)
        {
            wr.Write(DataTable.Columns[i].ToString().ToUpper() + "\t");
        }
        wr.WriteLine();

        // write rows to excel file
        for (int i = 0; i < DataTable.Rows.Count; i++)
        {
            for (int j = 0; j < DataTable.Columns.Count; j++)
            {
                if (DataTable.Rows[i][j] != null)
                {
                    wr.Write(Convert.ToString(DataTable.Rows[i][j]) + "\t");
                }
                else
                {
                    wr.Write("\t");
                }
            }
            // go to next line
            wr.WriteLine();
        }
        // close file
        wr.Close();
    }
}
catch (Exception)
{
    MessageBox.Show("Error Create Excel Sheet!");
}

A: Some time ago, I created a DLL on top of NPOI. It's very simple to use:

IList<DummyPerson> dummyPeople = new List<DummyPerson>();
//Add data to dummyPeople...
IExportEngine engine = new ExcelExportEngine();
engine.AddData(dummyPeople);
MemoryStream memory = engine.Export();

You can read more about it here. By the way, it is 100% open source. Feel free to use, edit and share ;)

A: To save an xls in xlsx format, we just need to call the SaveAs method from the Microsoft.Office.Interop.Excel library. This method takes around 16 parameters, and one of them is the file format.

Microsoft document: Here
SaveAs Method Arguments

The call we need to make looks like:

wb.SaveAs(filename, 51, System.Reflection.Missing.Value, System.Reflection.Missing.Value, false, false, 1, 1, true, System.Reflection.Missing.Value, System.Reflection.Missing.Value, System.Reflection.Missing.Value)

Here, 51 is the enumeration value for XLSX. For SaveAs in different file formats you can refer to xlFileFormat.

A: I wonder why nobody suggested PowerShell with the free ImportExcel Module; it creates XML-Excel files (xlsx) with ease.
Especially easy when creating Excel-sheets coming from databases like SQL Server...

A: In my projects, I use several .NET libraries to work with Excel files (both .xls and .xlsx). To export data, I frequently use rdlc. To modify Excel files I use the following (sample code that sets cell A15 to blank):

ClosedXML

//Closed XML
var workbook = new XLWorkbook(sUrlFile); // load the existing excel file
var worksheet = workbook.Worksheets.Worksheet(1);
worksheet.Cell("A15").SetValue("");
workbook.Save();

IronXL

string sUrlFile = "G:\\ReportAmortizedDetail.xls";
WorkBook workbook = WorkBook.Load(sUrlFile);
WorkSheet sheet = workbook.WorkSheets.First();
//Select cells easily in Excel notation and return the calculated value
sheet["A15"].First().Value = "";
sheet["A15"].First().FormatString = "";
workbook.Save();
workbook.Close();
workbook = null;

SpireXLS (when I tried it, the trial version of the library printed an additional sheet stating that the trial is in use)

string sUrlFile = "G:\\ReportAmortizedDetail.xls";
Workbook workbook = new Workbook();
workbook.LoadFromFile(sUrlFile);
//Get the 1st sheet
Worksheet sheet = workbook.Worksheets[0];
//Specify the cell range
CellRange range = sheet.Range["A15"];
//Find all matched text in the range
CellRange[] cells = range.FindAllString("hi", false, false);
//Replace text
foreach (CellRange cell in range)
{
    cell.Text = "";
}
//Save
workbook.Save();

Jet Oledb

//ExcelTool Class
public static int ExcelUpdateSheets(string path, string sWorksheetName, string sCellLocation, string sValue)
{
    int iResult = -99;
    String sConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path +
        ";Extended Properties='Excel 8.0;HDR=NO'";
    OleDbConnection objConn = new OleDbConnection(sConnectionString);
    objConn.Open();
    OleDbCommand objCmdSelect = new OleDbCommand("UPDATE [" + sWorksheetName + "$" + sCellLocation +
        "] SET F1=" + UtilityClass.ValueSQL(sValue), objConn);
    objCmdSelect.ExecuteNonQuery();
    objConn.Close();
    return iResult;
}

Usage:

ExcelTool.ExcelUpdateSheets(sUrlFile, "ReportAmortizedDetail", "A15:A15", "");

Aspose

var workbook = new Aspose.Cells.Workbook(sUrlFile);
// access first (default) worksheet
var sheet = workbook.Worksheets[0];
// access CellsCollection of first worksheet
var cells = sheet.Cells;
// write an empty string to cell A15
cells["A15"].Value = "";
// save spreadsheet to disc
workbook.Save(sUrlFile);
workbook.Dispose();
workbook = null;

A: I am using the following code to create an Excel 2007 file. It creates the file and writes to it, but when I open the file, Excel gives me an error saying the file might be corrupted or that the file extension is not compatible. If I use .xls for the file, it works fine:

for (int i = 0; i < TotalFile; i++)
{
    Contact.Clear();
    if (innerloop == SplitSize)
    {
        for (int j = 0; j < SplitSize; j++)
        {
            string strContact = DSt.Tables[0].Rows[i * SplitSize + j][0].ToString();
            Contact.Add(strContact);
        }
        string strExcel = strFileName + "_" + i.ToString() + ".xlsx";
        File.WriteAllLines(strExcel, Contact.ToArray());
    }
}

(Note that File.WriteAllLines writes plain text, not a real .xlsx package, which is why Excel rejects the .xlsx extension.)

Also refer to this link: http://dotnet-magic.blogspot.in/2011/10/createformat-excel-file-from-cnet.html

A: Check this out. No need for third party libraries; you can simply export DataTable data to an Excel file using this:

DataTable dt = ...; // your code for getting data into the DataTable
Response.ClearContent();
Response.AddHeader("content-disposition",
    string.Format("attachment;filename={0}.xls", DateTime.Now.ToString("yyyy-MM-dd")));
Response.ContentType = "application/vnd.ms-excel";
string tab = "";
foreach (DataColumn dataColumn in dt.Columns)
{
    Response.Write(tab + dataColumn.ColumnName);
    tab = "\t";
}
Response.Write("\n");
int i;
foreach (DataRow dataRow in dt.Rows)
{
    tab = "";
    for (i = 0; i < dt.Columns.Count; i++)
    {
        Response.Write(tab + dataRow[i].ToString());
        tab = "\t";
    }
    Response.Write("\n");
}
Response.End();
{ "language": "en", "url": "https://stackoverflow.com/questions/151005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2124" }
Q: Is it OK to use static variables to cache information in ASP.net? At the moment I am working on a project admin application in C# 3.5 on ASP.net. In order to reduce hits to the database, I'm caching a lot of information using static variables. For example, a list of users is kept in memory in a static class. The class reads in all the information from the database on startup, and will update the database whenever changes are made, but it never needs to read from the database.

The class pings other webservers (if they exist) with updated information at the same time as a write to the database. The pinging mechanism is a Windows service to which the cache object registers using a random available port. It is used for other things as well.

The amount of data isn't all that great. At the moment I'm using it just to cache the users (password hashes, permissions, name, email etc.). It just saves a pile of calls being made to the database.

I was wondering if there are any pitfalls to this method and/or if there are better ways to cache the data?

A: As long as you can expect that the cache will never grow to a size greater than the amount of available memory, it's fine. Also, be sure that there will only be one instance of this application per database, or the caches in the different instances of the app could "fall out of sync."

Where I work, we have a homegrown O/RM, and we do something similar to what you're doing with certain tables which are not expected to grow or change much. So, what you're doing is not unprecedented, and in fact in our system, is tried and true.

A: Another pitfall you must consider is thread safety. All of your application requests are running in the same AppDomain but may come in on different threads. Accessing a static variable must account for it being accessed from multiple threads. That's probably a bit more overhead than you are looking for; the Cache object is better for this purpose.
A: A pitfall: A static field is scoped per app domain, and increased load will make the server generate more app domains in the pool. This is not necessarily a problem if you only read from the statics, but you will get duplicate data in memory, and you will get a hit every time an app domain is created or recycled. Better to use the Cache object - it's intended for things like this. Edit: Turns out I was wrong about AppDomains (as pointed out in comments) - more instances of the Application will be generated under load, but they will all run in the same AppDomain. (But you should still use the Cache object!) A: Hmmm... The "classic" method would be the application cache, but provided you never update the static variables, or understand the locking issues if you do, and you understand that they can disappear at anytime with an appdomain restart then I don't really see the harm in using a static. A: I suggest you look into ways of having a distributed cache for your app. You can take a look at NCache or indeXus.Net The reason I suggested that is because you rolled your own ad-hoc way of updating information that you're caching. Static variables/references are fine but they don't update/refresh (so you'll have to handle aging on your own) and you seem to have a distributed setup.
{ "language": "en", "url": "https://stackoverflow.com/questions/151021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How can I upgrade the *console* version of vim on OS X? I'm sure this is a newbie question, but every time I've compiled/downloaded a new version of vim for OS X, running vim on the command line opens up the gvim app. I just want to upgrade the console version (so I can, for example, have python compiled in to use omnicomplete).

A: If I understood the question correctly, here is another solution: check out http://www.andrewvos.com/2011/07/23/upgrading-vim-on-os-x-with-homebrew/

Really simple, fast, painless. It uses homebrew-alt and you also need to have mercurial installed (it will prompt you if not).

A: You can also use MacPorts to handle the installation for you. Once you've installed it, run the /opt/local/bin/vim binary. I place this in my PATH before the system binary dirs (although be aware that this may cause problems for cmdline tools that rely on the versions of tools shipped with OS X).

A: This may sound stupid, but are you copying the vim binary to /usr/bin? By default, the "vim" path is /usr/bin/vim. If you compile from source, you'll likely need to either copy the vim binary to /usr/bin/vim (thus overwriting the original vim), or launch the compiled version via an absolute path (e.g. ~/vim-checkout/build/vim). That's just a guess, however; I can't see it being anything more than that.

A: With Homebrew:

brew install macvim
ln -s /usr/local/bin/mvim /usr/local/bin/vim

A: You can also symlink your new binary to /usr/local/bin/
{ "language": "en", "url": "https://stackoverflow.com/questions/151024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I unlock a SQLite database? When I enter this query:

sqlite> DELETE FROM mails WHERE (id = 71);

SQLite returns this error:

SQL error: database is locked

How do I unlock the database so this query will work?

A: This error can be thrown if the file is in a remote folder, like a shared folder. I changed the database to a local directory and it worked perfectly.

A: Some functions, like INDEX'ing, can take a very long time - and they lock the whole database while they run. In instances like that, it might not even use the journal file!

So the best/only way to check if your database is locked because a process is ACTIVELY writing to it (and thus you should leave it the hell alone until it has completed its operation) is to md5 (or md5sum on some systems) the file twice. If you get a different checksum, the database is being written, and you really really REALLY don't want to kill -9 that process, because you can easily end up with a corrupt table/database if you do.

I'll reiterate, because it's important - the solution is NOT to find the locking program and kill it - it's to find out if the database has a write lock for a good reason, and go from there. Sometimes the correct solution is just a coffee break.

The only way to create this locked-but-not-being-written-to situation is if your program runs BEGIN EXCLUSIVE, because it wanted to do some table alterations or something, then for whatever reason never sends an END afterwards, and the process never terminates. All three conditions being met is highly unlikely in any properly-written code, and as such 99 times out of 100 when someone wants to kill -9 their locking process, the locking process is actually locking your database for a good reason. Programmers don't typically add the BEGIN EXCLUSIVE condition unless they really need to, because it prevents concurrency and increases user complaints. SQLite itself only adds it when it really needs to (like when indexing).
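The double-checksum test described above can be sketched with Python's standard library (this is my own illustration, not part of the original answer; the path and the one-second delay are hypothetical choices):

```python
import hashlib
import time

def is_being_written(path, delay=1.0):
    """Return True if the file's contents changed between two reads,
    i.e. some process is actively writing to it (the md5-twice test)."""
    def digest():
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    first = digest()
    time.sleep(delay)  # give an active writer time to change the file
    return digest() != first
```

If this returns True, the safest move per the answer above is to wait, not to kill the writing process.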
Finally, the 'locked' status does not exist INSIDE the file as several answers have stated - it resides in the Operating System's kernel. The process which ran BEGIN EXCLUSIVE has requested from the OS that a lock be placed on the file. Even if your exclusive process has crashed, your OS will be able to figure out if it should maintain the file lock or not!! It is not possible to end up with a database which is locked but no process is actively locking it!!

When it comes to seeing which process is locking the file, it's typically better to use lsof rather than fuser (this is a good demonstration of why: https://unix.stackexchange.com/questions/94316/fuser-vs-lsof-to-check-files-in-use). Alternatively, if you have DTrace (OSX) you can use iosnoop on the file.

A: I added "Pooling=true" to the connection string and it worked.

A: The SQLite wiki DatabaseIsLocked page offers an explanation of this error message. It states, in part, that the source of contention is internal (to the process emitting the error). What this page doesn't explain is how SQLite decides that something in your process holds a lock and what conditions could lead to a false positive.

This error code occurs when you try to do two incompatible things with a database at the same time from the same database connection.

Changes related to file locking were introduced in v3 and may be useful for future readers; they can be found here: File Locking And Concurrency In SQLite Version 3

A: I found the documentation of the various states of locking in SQLite to be very helpful. Michael, if you can perform reads but can't perform writes to the database, that means that a process has gotten a RESERVED lock on your database but hasn't executed the write yet. If you're using SQLite3, there's a new lock called PENDING where no more processes are allowed to connect but existing connections can still perform reads, so if this is the issue you should look at that instead.
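The RESERVED-lock behaviour described above, and the busy-timeout remedy mentioned in later answers, can both be reproduced with Python's bundled sqlite3 module. A sketch (my own, not from the original answers; the file and table names are made up) showing a stale write transaction on one connection blocking another connection's DELETE, and the rollback that releases it:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mails.db")

writer = sqlite3.connect(path, timeout=0.1)  # very short busy timeout so the failure is quick
writer.execute("CREATE TABLE mails (id INTEGER PRIMARY KEY)")
writer.commit()

# A second connection starts a write transaction and never commits it...
holder = sqlite3.connect(path)
holder.execute("BEGIN IMMEDIATE")  # takes the RESERVED lock on the database

# ...so the first connection's write now fails with "database is locked".
try:
    writer.execute("DELETE FROM mails WHERE id = 71")
    locked = False
except sqlite3.OperationalError:
    locked = True

# Releasing the stale transaction unlocks the database again,
holder.rollback()
# and a generous busy timeout (sqlite3_busy_timeout in the C API) makes
# transient contention retry instead of failing immediately.
writer.execute("PRAGMA busy_timeout = 30000")  # milliseconds
writer.execute("DELETE FROM mails WHERE id = 71")
writer.commit()
```

The point is that the lock is held by a live connection's open transaction, exactly as the answer above describes; once that transaction ends, the "locked" database is writable again.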
A: If you want to remove a "database is locked" error then follow these steps:
* Copy your database file to some other location.
* Replace the database with the copied database.
This will dereference all processes which were accessing your database file.

A: Deleting the -journal file sounds like a terrible idea. It's there to allow sqlite to roll back the database to a consistent state after a crash. If you delete it while the database is in an inconsistent state, then you're left with a corrupted database. Citing a page from the sqlite site:

If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. [...]

We suspect that a common failure mode for SQLite recovery happens like this: A power failure occurs. After power is restored, a well-meaning user or system administrator begins looking around on the disk for damage. They see their database file named "important.data". This file is perhaps familiar to them. But after the crash, there is also a hot journal named "important.data-journal". The user then deletes the hot journal, thinking that they are helping to cleanup the system. We know of no way to prevent this other than user education.

The rollback is supposed to happen automatically the next time the database is opened, but it will fail if the process can't lock the database. As others have said, one possible reason for this is that another process currently has it open. Another possibility is a stale NFS lock, if the database is on an NFS volume. In that case, a workaround is to replace the database file with a fresh copy that isn't locked on the NFS server (mv database.db original.db; cp original.db database.db).
Note that the sqlite FAQ recommends caution regarding concurrent access to databases on NFS volumes, because of buggy implementations of NFS file locking. I can't explain why deleting a -journal file would let you lock a database that you couldn't before. Is that reproducible?

By the way, the presence of a -journal file doesn't necessarily mean that there was a crash or that there are changes to be rolled back. Sqlite has a few different journal modes, and in PERSIST or TRUNCATE modes it leaves the -journal file in place always, and changes the contents to indicate whether or not there are partial transactions to roll back.

A: In Windows you can try the program http://www.nirsoft.net/utils/opened_files_view.html to find out which process is handling the db file. Try closing that program to unlock the database.

In Linux and macOS you can do something similar; for example, if your locked file is development.db:

$ fuser development.db

This command will show what process is locking the file:

> development.db: 5430

Just kill the process...

kill -9 5430

...and your database will be unlocked.

A: I had this problem within an app which accessed SQLite from 2 connections - one was read-only and the second was for writing and reading. It looked like the read-only connection was blocking writes from the second connection. It turns out that it is required to finalize or, at least, reset prepared statements IMMEDIATELY after use. While a prepared statement remains open, it causes the database to be blocked for writing.

DON'T FORGET TO CALL:

sqlite_reset(xxx);

or

sqlite_finalize(xxx);

A: I just had something similar happen to me - my web application was able to read from the database, but could not perform any inserts or updates. A reboot of Apache solved the issue at least temporarily. It'd be nice, however, to be able to track down the root cause.

A: It seems to be an internal database problem... For me it manifested after trying to browse the database with "SQLite manager"...
So, if you can't find another process connecting to the database and you just can't fix it, try this radical solution:
* Export your tables (you can use "SQLite Manager" on Firefox)
* If the migration altered your database schema, delete the last failed migration
* Rename your "database.sqlite" file
* Execute "rake db:migrate" to make a new working database
* Give the database the right permissions for importing the tables
* Import your backed up tables
* Write the new migration
* Execute it with "rake db:migrate"

A: The lsof command on my Linux environment helped me figure out that a process was hanging, keeping the file open. I killed the process and the problem was solved.

A: This link solves the problem: When Sqlite gives : Database locked error. It solved my problem and may be useful to you. You can also use begin transaction and end transaction to avoid locking the database in the future.

A: In my experience, this error is caused by opening multiple connections, e.g.:
* 1 or more sqlitebrowser (GUI)
* 1 or more electron threads
* a rails thread
I am not sure about the details of how SQLite3 handles multiple threads/requests, but when I close the sqlitebrowser and the electron thread, rails runs well and doesn't block any more.

A: The SQLite db files are just files, so the first step would be to make sure the file isn't read-only. The other thing to do is to make sure that you don't have some sort of GUI SQLite DB viewer with the DB open. You could have the DB open in another shell, or your code may have the DB open. Typically you would see this if a different thread, or an application such as SQLite Database Browser, has the DB open for writing.

A: My lock was caused by the system crashing and not by a hanging process. To resolve this, I simply renamed the file then copied it back to its original name and location.
Using a Linux shell that would be:

mv mydata.db temp.db
cp temp.db mydata.db

A: If a process has a lock on an SQLite DB and crashes, the DB stays locked permanently. That's the problem. It's not that some other process has a lock.

A: I had this problem just now, using an SQLite database on a remote server, stored on an NFS mount. SQLite was unable to obtain a lock after the remote shell session I used had crashed while the database was open. The recipes for recovery suggested above did not work for me (including the idea to first move and then copy the database back). But after copying it to a non-NFS system, the database became usable and no data appears to have been lost.

A: I caused my sqlite db to become locked by crashing an app during a write. Here is how I fixed it:

echo ".dump" | sqlite old.db | sqlite new.db

Taken from: http://random.kakaopor.hu/how-to-repair-an-sqlite-database

A: I ran into this same problem on Mac OS X 10.5.7 running Python scripts from a terminal session. Even though I had stopped the scripts and the terminal window was sitting at the command prompt, it would give this error the next time it ran. The solution was to close the terminal window and then open it up again. It doesn't make sense to me, but it worked.

A: Before going down the reboot option, it is worthwhile to see if you can find the user of the sqlite database. On Linux, one can employ fuser to this end:

$ fuser database.db
$ fuser database.db-journal

In my case I got the following response:

philip 3556 4700 0 10:24 pts/3 00:00:01 /usr/bin/python manage.py shell

This showed that I had another Python program with pid 3556 (manage.py) using the database.

A: I just had the same error. After 5 minutes of googling I found that I hadn't closed one shell which was still using the db. Just close it and try again ;)

A: I had the same problem. Apparently the rollback function seems to overwrite the db file with the journal, which is the same as the db file but without the most recent change.
I've implemented this in my code below and it's been working fine since then, whereas before my code would just get stuck in the loop as the database stayed locked. Hope this helps.

My python code:

##############
#### Defs ####
##############
def conn_exec( connection , cursor , cmd_str ):
    done = False
    try_count = 0.0
    while not done:
        try:
            cursor.execute( cmd_str )
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception, error:
            if try_count % 60.0 == 0.0: # print error every minute
                print "\t" , "Error executing command" , cmd_str
                print "Message:" , error
            if try_count % 120.0 == 0.0: # if waited for 2 minutes, roll back
                print "Forcing Unlock"
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

def conn_comit( connection ):
    done = False
    try_count = 0.0
    while not done:
        try:
            connection.commit()
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception, error:
            if try_count % 60.0 == 0.0: # print error every minute
                print "\t" , "Error committing changes"
                print "Message:" , error
            if try_count % 120.0 == 0.0: # if waited for 2 minutes, roll back
                print "Forcing Unlock"
                connection.rollback()
            time.sleep(0.05)
            try_count += 0.05

##################
#### Run Code ####
##################
connection = sqlite.connect( db_path )
cursor = connection.cursor()

# Create tables if database does not exist
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS fix (path TEXT PRIMARY KEY);''')
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS tx (path TEXT PRIMARY KEY);''')
conn_exec( connection , cursor , '''CREATE TABLE IF NOT EXISTS completed (fix DATE, tx DATE);''')
conn_comit( connection )

A: One common reason for getting this exception is when you are trying to do a write operation while still holding resources for a read operation.
For example, if you SELECT from a table, and then try to UPDATE something you've selected without closing your ResultSet first.

A: I was having "database is locked" errors in a multi-threaded application as well, which appears to be the SQLITE_BUSY result code, and I solved it by setting sqlite3_busy_timeout to something suitably long like 30000. (On a side note, how odd that on a 7-year-old question nobody found this out already! SQLite really is a peculiar and amazing project...)

A: An old question, with a lot of answers. Here are the steps I've recently followed after reading the answers above; in my case the problem was due to cifs resource sharing. This case has not been reported previously, so I hope it helps someone.

* Check that no connections are left open in your Java code.
* Check that no other processes are using your SQLite db file, with lsof.
* Check that the user owning your running JVM process has r/w permissions over the file.
* Try to force the lock mode on the connection opening with:

final SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(false);
config.setLockingMode(LockingMode.NORMAL);
connection = DriverManager.getConnection(url, config.toProperties());

* If you're using your SQLite db file over an NFS shared folder, check this point of the SQLite FAQ, and review your mounting configuration options to make sure you're avoiding locks, as described here:

//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0

A: I got this error in a scenario a little different from the ones described here. The SQLite database rested on an NFS filesystem shared by 3 servers. On 2 of the servers I was able to run queries on the database successfully; on the third one, though, I was getting the "database is locked" message. The thing with this 3rd machine was that it had no space left on /var.
Every time I tried to run a query in ANY SQLite database located in this filesystem, I got the "database is locked" message, and also this error in the logs:

Aug 8 10:33:38 server01 kernel: lockd: cannot monitor 172.22.84.87

And this one also:

Aug 8 10:33:38 server01 rpc.statd[7430]: Failed to insert: writing /var/lib/nfs/statd/sm/other.server.name.com: No space left on device
Aug 8 10:33:38 server01 rpc.statd[7430]: STAT_FAIL to server01 for SM_MON of 172.22.84.87

After the space situation was handled, everything got back to normal.

A: If you're trying to unlock the Chrome database to view it with SQLite, then just shut down Chrome.

Windows
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Data
or
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Chrome Web Data

Mac
~/Library/Application Support/Google/Chrome/Default/Web Data

A: From your previous comments you said a -journal file was present. This could mean that you have opened an (EXCLUSIVE?) transaction and have not yet committed the data. Did your program or some other process leave the -journal behind? Restarting the sqlite process will look at the journal file, clean up any uncommitted actions, and remove the -journal file.

A: As Seun Osewa has said, sometimes a zombie process will sit in the terminal with a lock acquired, even if you don't think it possible. Your script runs, crashes, and you go back to the prompt, but there's a zombie process spawned somewhere by a library call, and that process has the lock. Closing the terminal you were in (on OS X) might work. Rebooting will work. You could look for "python" processes (for example) that are not doing anything, and kill them.

A: You can try this: .timeout 100 to set the timeout. I don't know what happens on the command line, but in C# .NET when I do this: "UPDATE table-name SET column-name = value;" I get "database is locked", but this "UPDATE table-name SET column-name = value" goes fine.
It looks like when you add the ;, SQLite will look for a further command.

A: I got this error when using Delphi with the LiteDAC components. It turned out it only happened while running my app from the Delphi IDE if the Connected property was set True for the SQLite connection component (in this case TLiteConnection).

A: For some reason the database got locked. Here is how I fixed it.

* *I downloaded the sqlite file to my system (FTP)
*Deleted the online sqlite file
*Uploaded the file back to the hosting provider

It works fine now.

A: In my case, I also got this error. I had already checked for other processes that might be the cause of the locked database (such as SQLite Manager, or other programs that connect to my database). But there was no other program connecting to it; it was just another active SQLConnection in the same application that stayed connected. Try checking whether a previous SQLConnection might still be connected (disconnect it first) before you establish a new SQLConnection and a new command.

A: This is because some other query is running on that database. SQLite is a database where queries execute synchronously. So if someone else is using that database and you perform a query or transaction, it will give this error. Stop the process that is using that particular database and then execute your query.

A: I was receiving SQLite locks as well in a C# .NET 4.6.1 app I built, when it was trying to write data, but not when running the app in Visual Studio on my dev machine. Instead, it was only happening when the app was installed and running on a remote Windows 10 machine. Initially I thought it was file system permissions; however, it turned out that the System.Data.SQLite package drivers (v1.0.109.2) I had installed in the project using NuGet were causing the problem. I removed the NuGet package and manually referenced an older version of the drivers in the project, and once the app was reinstalled on the remote machine the locking issues magically disappeared.
I can only think there was a bug with the latest drivers or the NuGet package.

A: I had the tool "DB Browser for SQLite" running and was also working in there. Obviously that tool also puts locks on things. After clicking on "Write Changes" or "Revert Changes", the lock was gone and the other process (a React Native script) did not give that error anymore.

A: I encountered this error while looking at stored passwords in Google Chrome.

# ~/.config/google-chrome/Default
$ sqlite3 Login\ Data
SQLite version 3.35.5 2021-04-19 18:32:05
sqlite> .tables
Error: database is locked

If you don't particularly care about the parent process, or if you don't want to stop the current Chrome process which is using the database, simply copy the file somewhere else.

$ cp Login\ Data ~/tmp/ld.sql
$ sqlite3 ~/tmp/ld.sql .tables
field_info               meta                   sync_model_metadata
insecure_credentials     stats
logins                   sync_entities_metadata

Doing so will allow you to read the contents of the database without disturbing or stopping the main Chrome process.
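The busy-timeout and lock-contention behavior described in the answers above can be reproduced in a tiny standalone demo. This is a sketch using Python 3's built-in sqlite3 module, not code from any answer above, and the database/table names are made up: one connection holds a write lock, a second connection hits "database is locked", and the error clears once the first connection commits. The timeout parameter plays the role of sqlite3_busy_timeout.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None means autocommit unless we BEGIN explicitly;
# timeout is the busy timeout in seconds (how long to wait on a lock).
writer = sqlite3.connect(db_path, timeout=0.1, isolation_level=None)
reader = sqlite3.connect(db_path, timeout=0.1, isolation_level=None)

writer.execute("CREATE TABLE fix (path TEXT PRIMARY KEY)")

# Take a write lock and hold it by not committing yet.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO fix VALUES ('a')")

# The second connection now fails with "database is locked".
try:
    reader.execute("INSERT INTO fix VALUES ('b')")
    locked = False
except sqlite3.OperationalError as err:
    locked = "locked" in str(err)

writer.execute("COMMIT")  # release the lock...
reader.execute("INSERT INTO fix VALUES ('b')")  # ...and the retry succeeds

rows = reader.execute("SELECT COUNT(*) FROM fix").fetchone()[0]
print(locked, rows)  # True 2
```

Raising the timeout (e.g. to 30 seconds, as one answer suggests) makes the second connection wait for the lock instead of erroring out immediately.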
{ "language": "en", "url": "https://stackoverflow.com/questions/151026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "321" }
Q: How can I call controller/view helper methods from the console in Ruby on Rails? When I load script/console, sometimes I want to play with the output of a controller or a view helper method. Are there ways to:

* *simulate a request?
*call methods from a controller instance on said request?
*test helper methods, either via said controller instance or another way?

A: In Ruby on Rails 3, try this:

session = ActionDispatch::Integration::Session.new(Rails.application)
session.get(url)
body = session.response.body

The body will contain the HTML of the URL. How to route and render (dispatch) from a model in Ruby on Rails 3

A: The earlier answers are calling helpers, but the following will help for calling controller methods. I have used this on Ruby on Rails 2.3.2. First add the following code to your .irbrc file (which can be in your home directory):

class Object
  def request(options = {})
    url = app.url_for(options)
    app.get(url)
    puts app.html_document.root.to_s
  end
end

Then in the Ruby on Rails console you can type something like...

request(:controller => :show, :action => :show_frontpage)

...and the HTML will be dumped to the console.

A: Here's one way to do this through the console:

>> foo = ActionView::Base.new
=> #<ActionView::Base:0x2aaab0ac2af8 @assigns_added=nil, @assigns={}, @helpers=#<ActionView::Base::ProxyModule:0x2aaab0ac2a58>, @controller=nil, @view_paths=[]>
>> foo.extend YourHelperModule
=> #<ActionView::Base:0x2aaab0ac2af8 @assigns_added=nil, @assigns={}, @helpers=#<ActionView::Base::ProxyModule:0x2aaab0ac2a58>, @controller=nil, @view_paths=[]>
>> foo.your_helper_method(args)
=> "<html>created by your helper</html>"

Creating a new instance of ActionView::Base gives you access to the normal view methods that your helper likely uses. Then extending YourHelperModule mixes its methods into your object, letting you view their return values.

A: For controllers, you can instantiate a controller object in the Ruby on Rails console.
For example,

class CustomPagesController < ApplicationController
  def index
    @customs = CustomPage.all
  end

  def get_number
    puts "Got the Number"
  end

  protected

  def get_private_number
    puts 'Got private Number'
  end
end

custom = CustomPagesController.new

2.1.5 :011 > custom = CustomPagesController.new
 => #<CustomPagesController:0xb594f77c @_action_has_layout=true, @_routes=nil, @_headers={"Content-Type"=>"text/html"}, @_status=200, @_request=nil, @_response=nil>
2.1.5 :014 > custom.get_number
Got the Number
 => nil

# For calling private or protected methods,
2.1.5 :048 > custom.send(:get_private_number)
Got private Number
 => nil

A: To call helpers, use the helper object:

$ ./script/console
>> helper.number_to_currency('123.45')
=> "R$ 123,45"

If you want to use a helper that's not included by default (say, because you removed helper :all from ApplicationController), just include the helper.

>> include BogusHelper
>> helper.bogus
=> "bogus output"

As for dealing with controllers, I quote Nick's answer:

> app.get '/posts/1'
> response = app.response
# you now have a rails response object much like the integration tests
> response.body    # gets you the HTML
> response.cookies # hash of the cookies
# etc, etc

A: Inside any controller action or view, you can invoke the console by calling the console method. For example, in a controller:

class PostsController < ApplicationController
  def new
    console
    @post = Post.new
  end
end

Or in a view:

<% console %>
<h2>New Post</h2>

This will render a console inside your view. You don't need to care about the location of the console call; it won't be rendered on the spot of its invocation but next to your HTML content.
See: http://guides.rubyonrails.org/debugging_rails_applications.html

A: One possible approach for helper method testing in the Ruby on Rails console is:

Struct.new(:t).extend(YourHelper).your_method(*arg)

And for reload do:

reload!; Struct.new(:t).extend(YourHelper).your_method(*arg)

A: If you have added your own helper and you want its methods to be available in console, do:

* *In the console execute include YourHelperName
*Your helper methods are now available in the console; use them by calling method_name(args) in the console.

Example: say you have MyHelper (with a method my_method) in 'app/helpers/my_helper.rb', then in the console do:

* *include MyHelper
*my_method

A: If the method is the POST method then:

app.post 'controller/action?parameter1=value1&parameter2=value2'

(Here parameters will be as per your applicability.)

Else if it is the GET method then:

app.get 'controller/action'

A: Here is how to make an authenticated POST request, using Refinery as an example:

# Start Rails console
rails console
# Get the login form
app.get '/community_members/sign_in'
# View the session
app.session.to_hash
# Copy the CSRF token "_csrf_token" and place it in the login request.
# Log in from the console to create a session
app.post '/community_members/login', {"authenticity_token"=>"gT7G17RNFaWUDLC6PJGapwHk/OEyYfI1V8yrlg0lHpM=", "refinery_user[login]"=>'chloe', 'refinery_user[password]'=>'test'}
# View the session to verify CSRF token is the same
app.session.to_hash
# Copy the CSRF token "_csrf_token" and place it in the request.
It's best to edit this in Notepad++.

app.post '/refinery/blog/posts', {"authenticity_token"=>"gT7G17RNFaWUDLC6PJGapwHk/OEyYfI1V8yrlg0lHpM=", "switch_locale"=>"en", "post"=>{"title"=>"Test", "homepage"=>"0", "featured"=>"0", "magazine"=>"0", "refinery_category_ids"=>["1282"], "body"=>"Tests do a body good.", "custom_teaser"=>"", "draft"=>"0", "tag_list"=>"", "published_at(1i)"=>"2014", "published_at(2i)"=>"5", "published_at(3i)"=>"27", "published_at(4i)"=>"21", "published_at(5i)"=>"20", "custom_url"=>"", "source_url_title"=>"", "source_url"=>"", "user_id"=>"56", "browser_title"=>"", "meta_description"=>""}, "continue_editing"=>"false", "locale"=>:en}

You might find these useful too if you get an error:

app.cookies.to_hash
app.flash.to_hash
app.response # long, raw, HTML

A: An easy way to call a controller action from a script/console and view/manipulate the response object is:

> app.get '/posts/1'
> response = app.response
# You now have a Ruby on Rails response object much like the integration tests
> response.body    # Gets you the HTML
> response.cookies # Hash of the cookies
# etc., etc.

The app object is an instance of ActionController::Integration::Session. This works for me using Ruby on Rails 2.1 and 2.3, and I did not try earlier versions.

A: Another way to do this is to use the Ruby on Rails debugger. There's a Ruby on Rails guide about debugging at http://guides.rubyonrails.org/debugging_rails_applications.html

Basically, start the server with the -u option:

./script/server -u

And then insert a breakpoint into your script where you would like to have access to the controllers, helpers, etc.

class EventsController < ApplicationController
  def index
    debugger
  end
end

And when you make a request and hit that part of the code, the server console will return a prompt where you can then make requests, view objects, etc. from a command prompt. When finished, just type 'cont' to continue execution.
There are also options for extended debugging, but this should at least get you started.

A: You can access your methods in the Ruby on Rails console like the following:

controller.method_name
helper.method_name

A: If you need to test from the console (tested on Ruby on Rails 3.1 and 4.1):

Call controller actions:

app.get '/'
app.response
app.response.headers  # => { "Content-Type"=>"text/html", ... }
app.response.body     # => "<!DOCTYPE html>\n<html>\n\n<head>\n..."

ApplicationController methods:

foo = ActionController::Base::ApplicationController.new
foo.public_methods(true||false).sort
foo.some_method

Route helpers:

app.myresource_path  # => "/myresource"
app.myresource_url   # => "http://www.example.com/myresource"

View helpers:

foo = ActionView::Base.new
foo.javascript_include_tag 'myscript' #=> "<script src=\"/javascripts/myscript.js\"></script>"

helper.link_to "foo", "bar" #=> "<a href=\"bar\">foo</a>"

ActionController::Base.helpers.image_tag('logo.png') #=> "<img alt=\"Logo\" src=\"/images/logo.png\" />"

Render:

views = Rails::Application::Configuration.new(Rails.root).paths["app/views"]
views_helper = ActionView::Base.new views
views_helper.render 'myview/mytemplate'
views_helper.render file: 'myview/_mypartial', locals: {my_var: "display:block;"}
views_helper.assets_prefix #=> '/assets'

ActiveSupport methods:

require 'active_support/all'
1.week.ago
=> 2013-08-31 10:07:26 -0300
a = {'a'=>123}
a.symbolize_keys
=> {:a=>123}

Lib modules:

> require 'my_utils'
=> true
> include MyUtils
=> Object
> MyUtils.say "hi"
evaluate: hi
=> true
{ "language": "en", "url": "https://stackoverflow.com/questions/151030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "469" }
Q: How do you solicit testers for an open source project? In order to improve my open source project, I need testers. I have created my project independently, so up to now I have been the sole coder and tester. I have tested the thing to death, but as we all know it is dangerous as a developer to test your own code. I'm looking for ideas on how I can get some other eyes to exercise it. To clarify, I have released it on sourceforge and posted it on freshmeat, dzone, reddit, etc. A: Are you looking for "testers" or "users"? There's a world of difference. A tester uses his time and energy to find your bugs. How many people are willing to do that? At a rough guess, I'd say zero. A user uses your software to solve his problems. He reports bugs to you because he thinks that you might fix them for him. So you've got to find people with a problem, and convince them that your software will fix it. One thing you'll need is lots of documentation. A 1-minute screencast, in-depth API, and everything in between. You need to persuade someone that, "If I use tox, I will totally rock!" That's your tester. A: Release an early version, announce it on freshmeat, and wait for the world to beat a path to your door? A: Go to where the testers are. Find sites where testers go. http://www.stickyminds.com, local QA groups (like mine) http://redearthqa.blogspot.com/ or local recruiters that have QA people looking for experience.
{ "language": "en", "url": "https://stackoverflow.com/questions/151033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does the Eclipse editor have an equivalent of Emacs's "align-regex"? I've been using Eclipse pretty regularly for several years now, but I admit to not having explored all the esoterica it has to offer, particularly in the areas of what formatting features the editors offer. The main thing I miss from (X)emacs is the "align-regex" command, which let me take several lines into a region and then format them so that some common pattern in all lines was aligned. The simplest example of this is a series of variable assignments:

var str = new String('aString');
var index = 0;
var longCamelCaseObjectName = new LongNameObject();

After doing align-regex on "=", that would become:

var str                     = new String('aString');
var index                   = 0;
var longCamelCaseObjectName = new LongNameObject();

Now, you may have your own thoughts on stylistic (ab)use of white space and alignment, etc., but that's just an example (I'm actually trying to align a different kind of mess entirely). Can anyone tell me off-hand if there's an easy key-combo shortcut for this in Eclipse? Or even a moderately tricky one?

A: Version 2.8.7 of the Emacs+ plugin now supplies a fairly complete align-regexp implementation in Eclipse.

A: If you wish to do something more complex than just aligning fields (or anything else the Eclipse code formatter offers you), you are pretty much left with having to write your own plugin.

A: You can set the formatter to do this: Preferences -> Java -> Code Style -> Formatter. Click 'Edit' on the profile (you may need to make a new one, since you can't edit the default). In the indentation section select 'Align fields with columns'. Then, in your code, CTRL+SHIFT+F will run that formatter. That will of course run all the other rules, so you may want to peruse the various options here.

A: columns4eclipse is a nice option. It is an Eclipse plugin that allows you to do the alignments you mention in your question. I use it with Eclipse 4.3 and 4.5; it works well.
I had made a GIF video showing its use, but my answer got deleted by a mod, so I'll let you try it and see for yourself.

A: This plug-in does exactly what you want: OCDFormat. It works in all text files, not only Java.
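For readers outside Eclipse or Emacs, the core of align-regex is small enough to sketch as a standalone function. The following is a hypothetical Python implementation of the idea (not part of any plugin mentioned above): pad each line so that the first match of a pattern lands in the same column.

```python
import re

def align_on(lines, pattern):
    """Pad each line so the first match of `pattern` starts in the same column.

    Lines without a match are left untouched. Raises ValueError if no
    line matches at all (max() of an empty sequence).
    """
    matches = [re.search(pattern, line) for line in lines]
    target = max(m.start() for m in matches if m)
    out = []
    for line, m in zip(lines, matches):
        if m is None:
            out.append(line)
        else:
            s = m.start()
            out.append(line[:s] + " " * (target - s) + line[s:])
    return out

# The example from the question:
code = [
    "var str = new String('aString');",
    "var index = 0;",
    "var longCamelCaseObjectName = new LongNameObject();",
]
aligned = align_on(code, r"=")
for line in aligned:
    print(line)
```

Running it reproduces the aligned output shown in the question, with every "=" pushed out to the column of the longest prefix.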
{ "language": "en", "url": "https://stackoverflow.com/questions/151034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How can I detect the last iteration in a loop over std::map? I'm trying to figure out the best way to determine whether I'm in the last iteration of a loop over a map in order to do something like the following:

for (iter = someMap.begin(); iter != someMap.end(); ++iter) {
    bool last_iteration;
    // do something for all iterations
    if (!last_iteration) {
        // do something for all but the last iteration
    }
}

There seem to be several ways of doing this: random access iterators, the distance function, etc. What's the canonical method? Edit: no random access iterators for maps!

A: Modified Mark Ransom's so it actually works as intended.

finalIter = someMap.end();
--finalIter;
if (iter != finalIter)

A: Surprised no one mentioned it yet, but of course Boost has something ;) Boost.Next (and the equivalent Boost.Prior). Your example would look like:

for (iter = someMap.begin(); iter != someMap.end(); ++iter) {
    // do something for all iterations
    if (boost::next(iter) != someMap.end()) {
        // do something for all but the last iteration
    }
}

A: The following code would be optimized by the compiler, making it the best solution for this task by performance as well as by OOP rules:

if (&*it == &*someMap.rbegin()) {
    // the last iteration
}

This is the best code by OOP rules because std::map has got a special member function rbegin, unlike code such as:

final_iter = someMap.end();
--final_iter;

A: Why work to find the end just so you can avoid handing something to it? Simply exclude it:

for (iter = someMap.begin(); iter != --someMap.end(); ++iter) {
    // apply to all from begin to second-last element
}

KISS (KEEP IT SIMPLY SIMPLE)

A: Canonical? I can't claim that, but I'd suggest

final_iter = someMap.end();
--final_iter;
if (iter != final_iter)
    ...

Edited to correct as suggested by KTC. (Thanks! Sometimes you go too quick and mess up on the simplest things...)
A: Since C++11, you can also use std::next():

for (auto iter = someMap.begin(); iter != someMap.end(); ++iter) {
    // do something for all iterations
    if (std::next(iter) != someMap.end()) {
        // do something for all but the last iteration
    }
}

Although the question was asked a while ago, I thought it would be worth sharing.

A: This seems like the simplest:

bool last_iteration = iter == (--someMap.end());

A: If you just want to use a ForwardIterator, this should work:

for (i = c.begin(); i != c.end(); ) {
    iterator cur = i++;
    // do something, using cur
    if (i != c.end()) {
        // do something using cur for all but the last iteration
    }
}

A:

#include <boost/lambda/lambda.hpp>
#include <boost/lambda/bind.hpp>
#include <algorithm>

using namespace boost::lambda;

// call the function Foo on each element but the last...
if (!someMap.empty()) {
    std::for_each(someMap.begin(), --someMap.end(), bind(&Foo, _1));
}

Using std::for_each will ensure that the loop is tight and accurate... Note the introduction of the function Foo(), which takes a single argument (the type should match what is contained in someMap). This approach has the added benefit of being one line. Of course, if Foo is really small, you can use a lambda function and get rid of the call to &Foo.

A: How about this? No one has mentioned it, but...

for (iter = someMap.begin(); iter != someMap.end(); ++iter) {
    // do something for all iterations
    if (iter != --someMap.end()) {
        // do something for all but the last iteration
    }
}

This seems simple, mm...

A: For someone who likes C++11 range-based loops:

for (const auto& pair : someMap) {
    if (&pair != &*someMap.rbegin())
        ...
}

Notice that only the reference type works here, not auto pair.

A: A simple, yet effective, approach:

size_t items_remaining = someMap.size();
for (iter = someMap.begin(); iter != someMap.end(); iter++) {
    bool last_iteration = items_remaining-- == 1;
}

A: You can just pull an element out of the map prior to iteration, then perform your "last iteration" work out of the loop, and then put the element back into the map. This is horribly bad for asynchronous code, but considering how bad the rest of C++ is for concurrency, I don't think it'll be an issue. :-)

A: Full program:

#include <iostream>
#include <list>

void process(int ii) {
    std::cout << " " << ii;
}

int main(void) {
    std::list<int> ll;
    ll.push_back(1);
    ll.push_back(2);
    ll.push_back(3);
    ll.push_back(4);
    ll.push_back(5);
    ll.push_back(6);

    std::list<int>::iterator iter = ll.begin();
    if (iter != ll.end()) {
        std::list<int>::iterator lastIter = iter;
        ++iter;
        while (iter != ll.end()) {
            process(*lastIter);
            lastIter = iter;
            ++iter;
        }
        // todo: think if you need to process *lastIter
        std::cout << " | last:";
        process(*lastIter);
    }
    std::cout << std::endl;
    return 0;
}

This program yields:

 1 2 3 4 5 | last: 6

A: Here's my optimized take:

iter = someMap.begin();
do {
    // Note that curr = iter++ may involve up to three copy operations
    curr = iter;

    // Do stuff with curr

    if (++iter == someMap.end()) {
        // Oh, this was the last iteration
        break;
    }

    // Do more stuff with curr
} while (true);
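The "copy the iterator, advance, test for end" idea in the ForwardIterator answer above is language-agnostic. Purely as an illustration of the logic (in Python rather than C++, since Python iterators also cannot look ahead), here is the same pattern expressed as a generator that flags the last element:

```python
def with_last_flag(iterable):
    """Yield (item, is_last) pairs by holding one item back and peeking ahead."""
    it = iter(iterable)
    try:
        prev = next(it)
    except StopIteration:
        return  # empty input: nothing to yield
    for cur in it:
        yield prev, False  # there is at least one more item after prev
        prev = cur
    yield prev, True  # loop exhausted: prev was the last item

# Works on any iterable, including a mapping's items:
pairs = list(with_last_flag({"a": 1, "b": 2, "c": 3}.items()))
print(pairs)
```

The flag lets the caller "do something for all but the last iteration" without knowing the container's size in advance, exactly the situation in the question.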
{ "language": "en", "url": "https://stackoverflow.com/questions/151046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: When should I use GC.SuppressFinalize()? In .NET, under which circumstances should I use GC.SuppressFinalize()? What advantage(s) does using this method give me?

A: SuppressFinalize tells the system that whatever work would have been done in the finalizer has already been done, so the finalizer doesn't need to be called. From the .NET docs:

Objects that implement the IDisposable interface can call this method from the IDisposable.Dispose method to prevent the garbage collector from calling Object.Finalize on an object that does not require it.

In general, most any Dispose() method should be able to call GC.SuppressFinalize(), because it should clean up everything that would be cleaned up in the finalizer. SuppressFinalize just provides an optimization that allows the system to not bother queuing the object to the finalizer thread. A properly written Dispose()/finalizer should work properly with or without a call to GC.SuppressFinalize().

A:

Dispose(true);
GC.SuppressFinalize(this);

If an object has a finalizer, .NET puts a reference in the finalization queue. Since we have called Dispose(true), which cleans up the object, we don't need the finalization queue to do this job. So calling GC.SuppressFinalize(this) removes the reference from the finalization queue.

A: SuppressFinalize should only be called by a class that has a finalizer. It's informing the garbage collector (GC) that this object was cleaned up fully. The recommended IDisposable pattern when you have a finalizer is:

public class MyClass : IDisposable
{
    private bool disposed = false;

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // called via myClass.Dispose().
                // OK to use any private object references
            }
            // Release unmanaged resources.
            // Set large fields to null.
            disposed = true;
        }
    }

    public void Dispose() // Implement IDisposable
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~MyClass() // the finalizer
    {
        Dispose(false);
    }
}

Normally, the CLR keeps tabs on objects with a finalizer when they are created (making them more expensive to create). SuppressFinalize tells the GC that the object was cleaned up properly and doesn't need to go onto the finalizer queue. It looks like a C++ destructor, but doesn't act anything like one.

The SuppressFinalize optimization is not trivial, as your objects can live a long time waiting on the finalizer queue. Don't be tempted to call SuppressFinalize on other objects, mind you. That's a serious defect waiting to happen.

Design guidelines inform us that a finalizer isn't necessary if your object implements IDisposable, but if you have a finalizer you should implement IDisposable to allow deterministic cleanup of your class.

Most of the time you should be able to get away with IDisposable to clean up resources. You should only need a finalizer when your object holds onto unmanaged resources and you need to guarantee those resources are cleaned up.

Note: Sometimes coders will add a finalizer to debug builds of their own IDisposable classes in order to test that code has disposed their IDisposable object properly.

public void Dispose() // Implement IDisposable
{
    Dispose(true);
#if DEBUG
    GC.SuppressFinalize(this);
#endif
}

#if DEBUG
~MyClass() // the finalizer
{
    Dispose(false);
}
#endif

A: If a class, or anything derived from it, might hold the last live reference to an object with a finalizer, then either GC.SuppressFinalize(this) or GC.KeepAlive(this) should be called on the object after any operation that might be adversely affected by that finalizer, thus ensuring that the finalizer won't run until after that operation is complete.
The cost of GC.KeepAlive() and GC.SuppressFinalize(this) is essentially the same in any class that doesn't have a finalizer, and classes that do have finalizers should generally call GC.SuppressFinalize(this), so using the latter function as the last step of Dispose() may not always be necessary, but it won't be wrong.

A: That method must be called in the Dispose method of objects that implement IDisposable; this way the GC won't call the finalizer another time if someone calls the Dispose method. See: GC.SuppressFinalize(Object) Method - Microsoft Docs
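As a side note, the dispose-then-suppress pattern described above is not unique to .NET. Python's weakref.finalize offers a close analogue: register a finalizer as a safety net at construction time, then detach it once deterministic cleanup has run, which is the counterpart of calling GC.SuppressFinalize from Dispose. A minimal sketch (the Resource class here is hypothetical, not from any answer above):

```python
import weakref

class Resource:
    def __init__(self):
        self.closed = False
        # Safety net, like a C# finalizer. Note the callback must not
        # reference self, or the finalizer would keep the object alive.
        self._finalizer = weakref.finalize(self, print, "finalizer ran")

    def close(self):
        """Deterministic cleanup, like Dispose()."""
        self.closed = True
        # Cleanup is done, so the safety net is no longer needed:
        # this detach() is the analogue of GC.SuppressFinalize(this).
        self._finalizer.detach()

r = Resource()
r.close()
print(r.closed, r._finalizer.alive)  # True False
```

After close(), the finalizer is dead and will never run, just as a suppressed .NET finalizer is never queued; if close() were skipped, the registered callback would still fire when the object is collected.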
{ "language": "en", "url": "https://stackoverflow.com/questions/151051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "369" }
Q: How do I implement Section-specific navigation in Ruby on Rails? I have a Ruby/Rails app that has two or three main "sections". When a user visits that section, I wish to display some sub-navigation. All three sections use the same layout, so I can't "hard code" the navigation into the layout. I can think of a few different methods to do this. I guess in order to help people vote I'll put them as answers. Any other ideas? Or what do you vote for?

A: You can easily do this using partials, assuming each section has its own controller. Let's say you have three sections called Posts, Users and Admin, each with its own controller: PostsController, UsersController and AdminController. In each corresponding views directory, you declare a _subnav.html.erb partial:

/app/views/users/_subnav.html.erb
/app/views/posts/_subnav.html.erb
/app/views/admin/_subnav.html.erb

In each of these subnav partials you declare the options specific to that section, so /users/_subnav.html.erb might contain:

<ul id="subnav">
  <li><%= link_to 'All Users', users_path %></li>
  <li><%= link_to 'New User', new_user_path %></li>
</ul>

Whilst /posts/_subnav.html.erb might contain:

<ul id="subnav">
  <li><%= link_to 'All Posts', posts_path %></li>
  <li><%= link_to 'New Post', new_post_path %></li>
</ul>

Finally, once you've done this, you just need to include the subnav partial in the layout:

<div id="header">...</div>
<%= render :partial => "subnav" %>
<div id="content"><%= yield %></div>
<div id="footer">...</div>

A: *Partial render. This is very similar to the helper method except perhaps the layout would have some if statements, or pass that off to a helper...

A: As for the content of your submenus, you can go at it in a declarative manner in each controller.

class PostsController < ApplicationController
  #...
  protected

  helper_method :menu_items

  def menu_items
    [
      ['Submenu 1', url_for(me)],
      ['Submenu 2', url_for(you)]
    ]
  end
end

Now whenever you call menu_items from a view, you'll have the right list to iterate over for the specific controller. This strikes me as a cleaner solution than putting this logic inside view templates. Note that you may also want to declare a default (empty?) menu_items inside ApplicationController as well.

A: Warning: Advanced Tricks ahead!

Render them all. Hide the ones that you don't need using CSS/JavaScript, which can be trivially initialized in any number of ways. (JavaScript can read the URL used, query parameters, something in a cookie, etc., etc.)

This has the advantage of potentially playing much better with your cache (why cache three views and then have to expire them all simultaneously when you can cache one?), and can be used to present a better user experience. For example, let's pretend you have a common tab bar interface with sub navigation. If you render the content of all three tabs (i.e. it's written in the HTML) and hide two of them, switching between two tabs is trivial JavaScript and doesn't even hit your server. Big win! No latency for the user. No server load for you.

Want another big win? You can use a variation on this technique to cheat on pages which might be 99% common across users but still contain user state. For example, you might have a front page of a site which is relatively common across all users but says "Hiya Bob" when they're logged in. Put the non-common part ("Hiya, Bob") in a cookie. Have that part of the page be read in via JavaScript reading the cookie. Cache the entire page for all users regardless of login status in page caching. This is literally capable of slicing 70% of the accesses off from the entire Rails stack on some sites.
Who cares if Rails can scale or not when your site is really Nginx serving static assets, with new HTML pages occasionally getting delivered by some Ruby running on every thousandth access or so ;)

A: You could use something like the navigation plugin at http://rpheath.com/posts/309-rails-plugin-navigation-helper It doesn't do sub-section navigation out of the box, but with a little tweaking you could probably set it up to do something similar.

A: I suggest you use partials. There are a few ways you can go about it. When I create partials that are a bit picky in that they need specific variables, I also create a helper method for it.

module RenderHelper
  # options: a nested array of menu names and their corresponding urls
  def render_submenu(menu_items=[[]])
    render :partial => 'shared/submenu', :locals => {:menu_items => menu_items}
  end
end

Now the partial has a local variable named menu_items over which you can iterate to create your submenu. Note that I suggest a nested array instead of a hash because a hash's order is unpredictable. Note that the logic deciding what items should be displayed in the menu could also be inside render_submenu if that makes more sense to you.

A: I asked pretty much the same question myself: Need advice: Structure of Rails views for submenus? The best solution was probably to use partials.

A: There is another possible way to do this: Nested Layouts. I don't remember where I found this code, so apologies to the original author.
create a file called nested_layouts.rb in your lib folder and include the following code:

module NestedLayouts
  def render(options = nil, &block)
    if options
      if options[:layout].is_a?(Array)
        layouts = options.delete(:layout)
        options[:layout] = layouts.pop
        inner_layout = layouts.shift
        options[:text] = layouts.inject(render_to_string(options.merge({:layout => inner_layout}))) do |output, layout|
          render_to_string(options.merge({:text => output, :layout => layout}))
        end
      end
    end
    super
  end
end

then, create your various layouts in the layouts folder (for example 'admin.rhtml' and 'application.rhtml'). Now in your controllers add this just inside the class:

include NestedLayouts

And finally at the end of your actions do this:

def show
  ...
  render :layout => ['admin', 'application']
end

The order of the layouts in the array is important. The admin layout will be rendered inside the application layout wherever the 'yield' is. This method can work really well depending on the design of the site and how the various elements are organized. For instance, one of the included layouts could just contain a series of divs that contain the content that needs to be shown for a particular action, and the CSS on a higher layout could control where they are positioned.
A: There are a few approaches to this problem. You might want to use different layouts for each section. You might want to use a partial included by all views in a given directory. You might want to use content_for that is filled by either a view or a partial, and called in the global layout, if you have one. Personally I believe that you should avoid more abstraction in this case.
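The inject trick in the NestedLayouts answer is easier to see stripped of the Rails machinery. Below is a minimal Python sketch (the {yield} placeholder and layout strings are illustrative, standing in for Rails' yield): each layout wraps the output of the previous one, innermost first.

```python
from functools import reduce

def render(content, layout):
    # A layout is a template whose {yield} marker is replaced by the content,
    # mimicking what render_to_string does with :text and :layout above.
    return layout.replace("{yield}", content)

admin = "<div class='admin'>{yield}</div>"
application = "<html><body>{yield}</body></html>"

# Innermost layout first, matching the inject call in NestedLayouts#render.
page = reduce(render, [admin, application], "<p>dashboard</p>")
print(page)
```

With ['admin', 'application'], the admin markup ends up inside the application layout, which is exactly the ordering rule the answer warns about.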
{ "language": "en", "url": "https://stackoverflow.com/questions/151066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: "name" web pdf for better default save filename in Acrobat? My app generates PDFs for user consumption. The "Content-Disposition" http header is set as mentioned here. This is set to "inline; filename=foo.pdf", which should be enough for Acrobat to give "foo.pdf" as the filename when saving the pdf. However, upon clicking the "Save" button in the browser-embedded Acrobat, the default name to save is not that filename but instead the URL with slashes changed to underscores. Huge and ugly. Is there a way to affect this default filename in Adobe? There IS a query string in the URLs, and this is non-negotiable. This may be significant, but adding a "&foo=/title.pdf" to the end of the URL doesn't affect the default filename. Update 2: I've tried both content-disposition inline; filename=foo.pdf Content-Type application/pdf; filename=foo.pdf and content-disposition inline; filename=foo.pdf Content-Type application/pdf; name=foo.pdf (as verified through Firebug) Sadly, neither worked. A sample url is /bar/sessions/958d8a22-0/views/1493881172/export?format=application/pdf&no-attachment=true which translates to a default Acrobat save as filename of http___localhost_bar_sessions_958d8a22-0_views_1493881172_export_format=application_pdf&no-attachment=true.pdf Update 3: Julian Reschke brings actual insight and rigor to this case. Please upvote his answer. This seems to be broken in FF (https://bugzilla.mozilla.org/show_bug.cgi?id=433613) and IE but work in Opera, Safari, and Chrome. http://greenbytes.de/tech/tc2231/#inlwithasciifilenamepdf A: Like you, I tried and tried to get this to work. Finally I gave up on this idea, and just opted for a workaround. I'm using ASP.NET MVC Framework, so I modified my routes for that controller/action to make sure that the served up PDF file is the last part of the location portion of the URI (before the query string), and pass everything else in the query string. 
Eg:

Old URI: http://server/app/report/showpdf?param1=foo&param2=bar&filename=myreport.pdf
New URI: http://server/app/report/showpdf/myreport.pdf?param1=foo&param2=bar

The resulting header looks exactly like what you've described (content-type is application/pdf, disposition is inline, filename is uselessly part of the header). Acrobat shows it in the browser window (no save as dialog) and the filename that is auto-populated if a user clicks the Acrobat Save button is the report filename. A few considerations: In order for the filenames to look decent, they shouldn't have any escaped characters (ie, no spaces, etc)... which is a bit limiting. My filenames are auto-generated in this case, and before had spaces in them, which were showing up as '%20's in the resulting save dialog filename. I just replaced the spaces with underscores, and that worked out. This is by no means the best solution, but it does work. It also means that you have to have the filename available to make it part of the original URI, which might mess with your program's workflow. If it's currently being generated or retrieved from a database during the server-side call that generates the PDF, you might need to move the code that generates the filename to javascript as part of a form submission, or if it comes from a database, make a quick ajax call to get the filename when building the URL that results in the inlined PDF. If you're taking the filename from user input on a form, then it should be validated not to contain escaped characters, which will annoy users. Hope that helps.
A: Try placing the file name at the end of the URL, before any other parameters. This worked for me. http://www.setasign.de/support/tips-and-tricks/filename-in-browser-plugin/
A: In ASP.NET 2.0 change the URL from http://www.server.com/DocServe.aspx?DocId=XXXXXXX to http://www.server.com/DocServe.aspx/MySaveAsFileName?DocId=XXXXXXX
This works for Acrobat 8 and the default SaveAs filename is now MySaveAsFileName.pdf. However, you have to restrict the allowed characters in MySaveAsFileName (no periods, etc.).
A: Apache's mod_rewrite can solve this. I have a web service with an endpoint at /foo/getDoc.service. Of course Acrobat will save files as getDoc.pdf. I added the following lines in apache.conf:

LoadModule rewrite_module modules/mod_rewrite.so
RewriteEngine on
RewriteRule ^/foo/getDoc/(.*)$ /foo/getDoc.service [P,NE]

Now when I request /foo/getDoc/filename.pdf?bar&qux, it gets internally rewritten to /foo/getDoc.service?bar&qux, so I'm hitting the correct endpoint of the web service, but Acrobat thinks it will save my file as filename.pdf.
A: If you use asp.net, you can control the pdf filename through the page (url) file name. As other users wrote, Acrobat is a bit s... when it chooses the pdf file name when you press the "save" button: it takes the page name, removes the extension and adds ".pdf". So /foo/bar/GetMyPdf.aspx gives GetMyPdf.pdf. The only solution I found is to manage "dynamic" page names through an asp.net handler:
* create a class that implements IHttpHandler
* map a handler in web.config bound to the class

Mapping1: all pages have a common radix (MyDocument_):
<httpHandlers>
  <add verb="*" path="MyDocument_*.ashx" type="ITextMiscWeb.MyDocumentHandler"/>

Mapping2: completely free file name (need a folder in path):
<add verb="*" path="/CustomName/*.ashx" type="ITextMiscWeb.MyDocumentHandler"/>

Some tips here (the pdf is dynamically created using iTextSharp): http://fhtino.blogspot.com/2006/11/how-to-show-or-download-pdf-file-from.html
A: Set the file name in ContentType as well. This should solve the problem.
context.Response.ContentType = "application/pdf; name=" + fileName; // the usual stuff
context.Response.AddHeader("content-disposition", "inline; filename=" + fileName);

After you set the content-disposition header, also add a content-length header, then use BinaryWrite to stream the PDF.

context.Response.AddHeader("Content-Length", fileBytes.Length.ToString());
context.Response.BinaryWrite(fileBytes);

A: Part of the problem is that the relevant RFC 2183 doesn't really state what to do with a disposition type of "inline" and a filename. Also, as far as I can tell, the only UA that actually uses the filename for type=inline is Firefox (see test case). Finally, it's not obvious that the plugin API actually makes that information available (maybe somebody familiar with the API can elaborate). That being said, I have sent a pointer to this question to an Adobe person; maybe the right people will have a look. Related: see the attempt to clarify Content-Disposition in HTTP in draft-reschke-rfc2183-in-http -- this is early work in progress, feedback appreciated. Update: I have added a test case, which seems to indicate that the Acrobat reader plugin doesn't use the response headers (in Firefox), although the plugin API provides access to them.
A: Instead of attachment you can try inline:

Response.AddHeader("content-disposition", "inline;filename=MyFile.pdf");

I used inline in a previous web application that generated Crystal Reports output into PDF and sent it in the browser to the user.
A: I believe this has already been mentioned in one flavor or another, but I'll try and state it in my own words. Rather than this:

/bar/sessions/958d8a22-0/views/1493881172/export?format=application/pdf&no-attachment=true

I use this:

/bar/sessions/958d8a22-0/views/1493881172/NameThatIWantPDFToBe.pdf?GeneratePDF=1

Rather than having "export" process the request, when a request comes in, I look in the URL for GeneratePDF=1.
If found, I run whatever code was running in "export" rather than allowing my system to attempt to search for and serve a PDF in the location /bar/sessions/958d8a22-0/views/1493881172/NameThatIWantPDFToBe.pdf. If GeneratePDF is not found in the URL, I simply transmit the file requested. (Note that I can't simply redirect to the file requested, or else I'd end up in an endless loop.)
A: File download dialog (PDF) with save and open option. Points to remember:
* Return a stream with the correct array size from the service
* Read the byte array from the stream with the correct byte length, based on the stream length
* Set the correct content type
Here is the code to read the stream and open the file download dialog for a PDF file:

private void DownloadSharePointDocument()
{
    Uri uriAddress = new Uri("http://hyddlf5187:900/SharePointDownloadService/FulfillmentDownload.svc/GetDocumentByID/1/drmfree/");
    HttpWebRequest req = WebRequest.Create(uriAddress) as HttpWebRequest;
    // Get response
    using (HttpWebResponse httpWebResponse = req.GetResponse() as HttpWebResponse)
    {
        Stream stream = httpWebResponse.GetResponseStream();
        int byteCount = Convert.ToInt32(httpWebResponse.ContentLength);
        byte[] Buffer1 = new byte[byteCount];
        using (BinaryReader reader = new BinaryReader(stream))
        {
            Buffer1 = reader.ReadBytes(byteCount);
        }
        Response.Clear();
        Response.ClearHeaders();
        // set the content type to PDF
        Response.ContentType = "application/pdf";
        Response.AddHeader("Content-Disposition", "attachment;filename=Filename.pdf");
        Response.Buffer = true;
        Response.BinaryWrite(Buffer1);
        Response.Flush();
        // Response.End();
    }
}

A: You could always have two links. One that opens the document inside the browser, and another to download it (using an incorrect content type). This is what Gmail does.
A: I was redirected here because I have the same problem. I also tried Troy Howard's workaround, but it doesn't seem to work. The approach I took on this one is to NO LONGER use the response object to write the file on the fly.
Since the PDF already exists on the server, what I did was to redirect my page to point at that PDF file. Works great. http://forums.asp.net/t/143631.aspx I hope my vague explanation gave you an idea.
A: The way I solved this (with PHP) is as follows: Suppose your URL is SomeScript.php?id=ID&data=DATA and the file you want to use is TEST.pdf. Change the URL to SomeScript.php/id/ID/data/DATA/EXT/TEST.pdf. It's important that the last parameter is the file name you want Adobe to use (the 'EXT' can be about anything). Make sure there are no special chars in the above string, BTW. Now, at the top of SomeScript.php, add:

$_REQUEST = MakeFriendlyURI( $_SERVER['PHP_SELF'], $_SERVER['SCRIPT_FILENAME']);

Then add this function to SomeScript.php (or your function library):

function MakeFriendlyURI($URI, $ScriptName) {
    /* Need to remove everything up to the script name */
    $MyName = '/^.*'.preg_quote(basename($ScriptName)."/", '/').'/';
    $Str = preg_replace($MyName, '', $URI);
    $RequestArray = array();
    /* Breaks down like this:
       0      1    2      3    4      5
       PARAM1/VAL1/PARAM2/VAL2/PARAM3/VAL3 */
    $tmp = explode('/', $Str);
    /* Ok so build an associative array with Key->value.
       This way it can be returned back to $_REQUEST or $_GET */
    for ($i = 0; $i < count($tmp); $i = $i + 2) {
        $RequestArray[$tmp[$i]] = $tmp[$i + 1];
    }
    return $RequestArray;
} //EO MakeFriendlyURI

Now $_REQUEST (or $_GET if you prefer) is accessed like normal: $_REQUEST['id'], $_REQUEST['data'], etc. And Adobe will use your desired file name as the default save-as or email info when you send it inline.
A: For anyone still looking at this, I used the solution found here and it worked wonderfully. Thanks Fabrizio!
A: Credits to Vivek. Nginx:

location /file.pdf {
    # more_set_headers "Content-Type: application/pdf; name=save_as_file.pdf";
    add_header Content-Disposition "inline; filename=save_as_file.pdf";
    alias /var/www/file.pdf;
}

Check with curl -I https://example.com/file.pdf
Firefox 62.0b5 (64-bit): OK. Chrome 67.0.3396.99 (64-Bit): OK.
IE 11: No comment.
A: Try this, if your executable is "get.cgi":

http://server.org/get.cgi/filename.pdf?file=filename.pdf

Yes, it's completely insane. There is no file called "filename.pdf" on the server, and there is no directory at all under the executable get.cgi. But it seems to work. The server ignores the filename.pdf and the pdf reader ignores the "get.cgi". Dan
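The path-segment trick in the PHP answer (SomeScript.php/id/ID/data/DATA/...) is easy to prototype outside PHP. Here is a rough Python equivalent of the MakeFriendlyURI helper; the URL and parameter names are just the examples from that answer:

```python
from urllib.parse import urlparse

def parse_path_params(url, script_name):
    # Split /script.php/k1/v1/k2/v2/... into {'k1': 'v1', 'k2': 'v2', ...},
    # like the PHP MakeFriendlyURI function above.
    path = urlparse(url).path
    _, _, rest = path.partition(script_name + "/")
    parts = rest.split("/")
    return dict(zip(parts[::2], parts[1::2]))

params = parse_path_params(
    "http://example.com/SomeScript.php/id/42/data/abc/EXT/TEST.pdf",
    "SomeScript.php")
print(params)  # {'id': '42', 'data': 'abc', 'EXT': 'TEST.pdf'}
```

The last segment, TEST.pdf, survives as the final path component, which is what makes Acrobat pick it up as the save-as name.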
{ "language": "en", "url": "https://stackoverflow.com/questions/151079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: How can I prevent link_to from escaping slashes in URL parameters in Rails? Having this route:

map.foo 'foo/*path', :controller => 'foo', :action => 'index'

I have the following results for the link_to call:

link_to "Foo", :controller => 'foo', :path => 'bar/baz'
# <a href="/foo/bar%2Fbaz">Foo</a>

Calling url_for or foo_url directly, even with :escape => false, gives me the same url:

foo_url(:path => 'bar/baz', :escape => false, :only_path => true)
# /foo/bar%2Fbaz

I want the resulting url to be /foo/bar/baz. Is there a way around this without patching rails?
A: Instead of passing path a string, give it an array.

link_to "Foo", :controller => 'foo', :path => %w(bar baz)
# <a href="/foo/bar/baz">Foo</a>

If you didn't have the route in your routes file, this same link_to would instead create this:

# <a href="/foo?path[]=bar&path[]=baz">Foo</a>

The only place I could find this documented is in this ticket.
A: Any reason why you're needing to generate the URL with that path though? It would be cleaner to just define an extra route to cover that URL.
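The underlying behavior is plain percent-encoding: when the whole path arrives as one string, the slash gets escaped as %2F; passing an array lets each segment be escaped on its own and joined with literal slashes. A small Python sketch of the same distinction (urllib stands in for Rails' escaping here):

```python
from urllib.parse import quote

# A single string segment gets its slash percent-encoded...
single = quote("bar/baz", safe="")
# ...while an array of segments is encoded piecewise and joined with real slashes.
joined = "/".join(quote(seg, safe="") for seg in ["bar", "baz"])
print(single)  # bar%2Fbaz
print(joined)  # bar/baz
```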
{ "language": "en", "url": "https://stackoverflow.com/questions/151083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I find records that are not joined? I have two tables that are joined together. A has many B. Normally you would do:

select * from a, b where b.a_id = a.id

to get all of the records from a that have a record in b. How do I get just the records in a that do not have anything in b?
A: Another approach:

select * from a where not exists (select * from b where b.a_id = a.id)

The "exists" approach is useful if there is some other "where" clause you need to attach to the inner query.
A: SELECT id FROM a EXCEPT SELECT a_id FROM b;
A: You will probably get a lot better performance (than using 'not in') if you use an outer join:

select * from a left outer join b on a.id = b.a_id where b.a_id is null;

A: select * from a left outer join b on a.id = b.a_id where b.a_id is null
A: SELECT <columns> FROM a WHERE id NOT IN (SELECT a_id FROM b)
A: select * from a where id not in (select a_id from b)
Or like some other people on this thread say:

select a.* from a left outer join b on a.id = b.a_id where b.a_id is null

A: The following image will help to understand SQL LEFT JOIN:
A: Another way of writing it:

select a.* from a left outer join b on a.id = b.id where b.id is null

Ouch, beaten by Nathan :)
A: In the case of one join it is pretty fast, but when we are removing records from a database which has about 50 million records and 4 or more joins due to foreign keys, it takes a few minutes to do it. Much faster to use a WHERE NOT IN condition like this:

select a.* from a where a.id NOT IN(SELECT DISTINCT a_id FROM b where a_id IS NOT NULL)
-- And for more joins
AND a.id NOT IN(SELECT DISTINCT a_id FROM c where a_id IS NOT NULL)

I can also recommend this approach for deleting, in case we don't have cascade delete configured. This query takes only a few seconds.
A: The first approach is

select a.* from a where a.id not in (select b.ida from b)

the second approach is

select a.* from a left outer join b on a.id = b.ida where b.ida is null

The first approach is very expensive.
The second approach is better. With PostgreSQL 9.4, I used the "explain query" function: the first query has a cost of cost=0.00..1982043603.32, while the join query has a cost of cost=45946.77..45946.78. For example, I search for all products that are not compatible with any vehicle. I have 100k products and more than 1m compatibilities.

select count(*) from product a left outer join compatible c on a.id=c.idprod where c.idprod is null

The join query took about 5 seconds, while the subquery version still had not finished after 3 minutes.
A: This will protect you from nulls in the IN clause, which can cause unexpected behavior.

select * from a where id not in (select [a id] from b where [a id] is not null)
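The two competing shapes, the outer-join anti-join and NOT IN, and the NULL trap the last answer guards against, can be checked in a few lines with an in-memory SQLite database (table names follow the question; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY);
    CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER);
    INSERT INTO a (id) VALUES (1), (2), (3);
    INSERT INTO b (id, a_id) VALUES (10, 1), (11, NULL);
""")

# Outer-join anti-join: rows of a with no matching row in b.
anti_join = conn.execute("""
    SELECT a.id FROM a
    LEFT OUTER JOIN b ON a.id = b.a_id
    WHERE b.a_id IS NULL
    ORDER BY a.id
""").fetchall()

# NOT IN returns no rows at all once the subquery contains a NULL.
not_in = conn.execute(
    "SELECT id FROM a WHERE id NOT IN (SELECT a_id FROM b)").fetchall()

print(anti_join)  # [(2,), (3,)]
print(not_in)     # []
```

Adding `WHERE a_id IS NOT NULL` to the subquery, as the final answer suggests, makes the NOT IN version return the same rows as the anti-join.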
{ "language": "en", "url": "https://stackoverflow.com/questions/151099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84" }
Q: How can I serve an image to the browser using Struts 2 + Hibernate 3? I am developing a web application using Struts 2.1.2 and Hibernate 3.2.6.GA. I have an entity, User, which I have mapped to a table USERS in the DB using Hibernate. I want to have an image associated with this entity, which I plan to store as a BLOB in the DB. I also want to display the image on a webpage along with other attributes of the User. The solution I could think of was to have a table IMAGES(ID, IMAGE) where IMAGE is a BLOB column. USERS will have an FK column called IMAGEID, which points to the IMAGES table. I will then map a property on User entity, called imageId mapped to this IMAGEID as a Long. When rendering the page with a JSP, I would add images as <img src="images.action?id=1"/> etc, and have an Action which reads the image and streams the content to the browser, with the headers set to cache the image for a long time. Will this work? Is there a better approach for rendering images stored in a DB? Is storing such images in the DB the right approach in the first place? A: Yes your suggested solution will work. Given that you are working in a Java environment storing the images in the database is the best way to go. If you are running in a single server environment with an application server that will let you deploy in an exploded format technically you could store the images on disk but that wouldn't be the best practice. One suggestion would be to use a servlet instead of a JSP. To get good browser behavior you want the browser to think that the file type that it is displaying matches the file type that it is expecting. Despite the existence of mime type headers the file extension is still really important. So you want a link that looks like this: <a href="foo.jsp"><img src="imageservlet/123456789.png"></a> Where 123456789 is the primary key of your image in the database. 
Your servlet mapping would look like this:

<servlet>
  <servlet-name>ImageServlet</servlet-name>
  <servlet-class>com.example.ImageServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>ImageServlet</servlet-name>
  <url-pattern>/imageservlet/*</url-pattern>
</servlet-mapping>

Then in your servlet simply parse the request URL for the image ID rather than using the query string, as the query string will confuse some browsers. Using the query string won't break browsers outright, but you'll get odd behavior with regards to caching and some browsers may report the content as unsafe.
A: If you want to display the user image directly with their properties, perhaps you can consider embedding the image data directly in the HTML. Using a special data: URL scheme you are able to embed any mime data inside a HTML page. The format is as follows:

data:<mimetype>;base64,<data>

<mimetype> needs to be replaced by the mime-type of your data (image/png for instance) and <data> is the base64 encoded string of the actual bytes of the file. See RFC 2557.
Example: <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEYAAAAmCAYAAAB52u3eAAAABmJLR0QA/wD/AP+gvaeTAAAACXBIWXMAAAsSAAALEgHS3X78AAAAB3RJTUUH1AkICzQgAc6C2QAACrxJREFUeNrtmnl0VdUVxn/3vnnIPBKmJECYkoAEERArgxGqCDIUkKVoqRZKUVREnC1SXXWJlKo4VrEi4EAUEARUtM5oVcAwZhZIyEySlzfe4fSPlzwSEzGUuLpa2Gu9te49Z99zzvvu3vt8e58rCSEAUBWNoyV1wZtzVHokR0pGkwEASQhB2bF6sX/vCZxhFjzuwDkJSlWFm4LDVYyf1I9ho3pKRiWg8f23ZfTPTCQ+0XkuGwzPr/qCZ1Z8zsDBXTCWFNYKq81EVIwNm910TgNzwbBufPlxCR9uzxMyQEO9j/MSlLpaLwDyeSjal3MOmBunb/gvABNQoepku12qqrVpUxTtjKdQVY1mivFTY7Wn09x/rKSuQ/MYOwsTUViKOn0RxifvRYqLAuDw/goeuO1dIiJtVJa7+PWUAcy//WJUVWPhdTl43AFqqz0sXT4utDOOndCHS/r/jb88fRUjLk1mRvbLbNw1FyEEdy14h9Kj9TS6/MyZfyEDMhP50+LtGI0GMrOSWLJsbBudSTPSmTfzDVRVJyrG1uGX0SnAiGPlKBfPhIp6pAG9Wrw5nfJSF6/tvB6AsYNWM/OGC/j0g0Ji4uw8u2EGleUuZmS/zCNPTmT7pkN07R5Ot56RfLSzAGe4hR4pQZB3bjmMxWrk1W3XoSgaEy58llVrplBZ3sgHexYgyzI7Nh9qo2M0yvRIieLBFRMoKaxl6ugXfxlgPn6vgEaXnyunDQy1abPvRVQUIPfMgghHK/0BgxKR5aDH9ktP4GjxSQrzasjM6gpAfGIYHneArBHdWb50J8m9orlx0QjWrP6K6Bgbo8amhqxv3zel3HJ9DkIIEpPC8HlVBrYYvz2d/XvLSb+gCwDJvaIJj7R2PjB+v8q82a+zbMUVoTZ966fon+1CIgoMMhgMrZ4pyqtudZ3ULYJuPSM5cqAyuD2e9GKxGLFYjERG2di6cT/rts8h59V9bM05yMubZof+VGZWV5avOjX3/r0nkGUpdN+ezptr94bWUHHChauD1OSMgHl4yXvU1LhbWYu+ZjMgBeO4xwdeP9gsp6xJE9w8ZyMN9X4uvbw3cQlOrvrNQN5ev49b575FcX4Ndz+SDcCosal8sO0IdruZkaNTOLD3BPGJYQBMnD6Q9945wtyp64mJc+DzKsy7/eJW62tP57HnJ3PdxFdZMm8zXo9CWETHLIa8g5Vi8+u5or7OK04ne746JszcJn7V/4lTjQFFBLqNF376Cz8Zwm+4QOiFx0PduXvKxC3XbxQeT0DUVLvbjFlZ7hJeb0CcidSd9IiqCtcZ6ei6LsrLGoSmaad97uP3C8T4oc+ILW/kig5ZjM+rMP/aN9EReNwBVFXHaJQRpZWI8uomwzOAVof4OhcpNRg/TGYD0bEObDYTNlvbdCMu4cxzs4hI2xnrSJJEQpewM5qnQ8DcOW8L3+QfJRYHxcdqKThcRb/0BHC5QQ00uRKAhL5+J/KsCQD0HRDPgyuC17ouePetgxQX1+JtCODxBBCAxWIgLSOecdlpRMXaQ3OWHq1n08bvkZGwWc1cM3cIFquRb3cf47PPi7DJrYEWgCo0ps0cRGLXcHLW7aO8ogGT1Drm+XWVPr3imHB1/7MDZlvOQZ5b+wWxOJGABuFh82u59PtzAiTFg8MJblcTV3Sgb9+FOFKC1De51TiapnPLDTkUu6uxYMKPH9AAMyYMJMVE8OTT07hqRnqzi7Nw8euAgXicTJ6ZjsVqZMemQ9z3aA5mnE2AnCJyCl769I4jsWs4j96/i38VF2HG8iMdH9lDMn4WmNMyX0XRePiu9zBjCNmEAwuvvfQdXo+CFBOBPDw
D8DT1GkB1oT3wTLvjRUbZiMSBAzNTRw9h0Q3ZpMd1wYmFkzUefjdrAyUFtUE3NBkIw0E0dqIi7UhScAU2uwlTU3skNpxYQj+wUF/na3InK2E4iMFORAs9Ezb8PvXsXCnvQBWHCspxYA61WTGSd6KK3Z+WMGZ8H+SbpqHv2tniqXD0NzYhbr4GadTgdsd1E+Dmu37F6PG9OV5Sx5ispzhZ66FGePhwRx5zFw7/2YV7CHDRkJ489txkmtm/puskp0a30qvHx9LFlzF5VgZCCHQhsNtMZx9jNFrnHBLgR6XgUFUQmKljkVLSEcWFgLXJCHXUBY9g2vNakNv8SCQkGl1+ALolR9KvTwL//CofCfC4lY7lTGhERdsZNLTr6dM3NNIHd2HQ0KTOC7590+MZc3Eftnz+PTE4kZHwoiAh0T8zMahkMiIvmIG25IEmYACciNzd6C9tRr5pStsUAoHDGbTC/XtO8P2BUkwYUNAYfknPDma/Mu7GAOVlLhDBKCJJEvGJzlakz46JbW8foK7Oi6bp+ITCyOEpZA3v/p8DYzTKrNk0m6W3OPloZz5+j0pKagyLbrs0RNUB5EmXot0ZCUJvEbasaE9tQL7xapCkVuM6sbBi2UesWf0Vn35YhMvjJ4DG0lsvY+jIHh0CxomFPV8fZ3ivlUELEjp2q5kv8xcRE+dspffGW3v4x1tfIwEqHpbdMeXsgAGIjrXzwvpZNPh0PPUeEtvjHj4/COVHw9kQ+/MRxWUhXtMsFozs2H0QkIjGhkDw1FPTuOGPF3XY1CUk/LpKua8hSAfQsfstaFpr19cR2DFjx4xAUAtYOyPGhEJqVTnh3dv3U23VOsAHhLeORroXyirhR8D4UZkyZhBF+dUUHa/BgMQLT+9m0swMoltwmdPmbSik9IhhyZxxoaBqMhlwhlnaBN/77x7PtGsHAQJNEx0q+ncYGPXxdYj9uRiX34E0dACYjIii4+grX0Ffsx5oj1kawNGWqTbiZ8mysST3jubCtMfxNip8ebCYh+7YwaqXp3ZoPT4UeqXFsnT5uNPqKWik9okhbUDcL1PalC/KQN+1DWXkNSi9rkJJm4zS92q01X8HHC3Y76n9QIpLaEP0mt2gsryRhC5hLL5rDHV4icXBhrXf8UNRbYdd6XRFJ10PupQJmd2fleD1KPh9Kj6vis+nInTRScBkD0dypAX3lGM/IPLzmtKBiHZAkYB65N9PA3v72WxzPL7+D8PoER6FjqBGd7Nx7b5OqSiaTAY0dCKw8epL3zAs5XEuSl3J0NQVjOj9V8qO13dSzTc2EsOdvwXqAHPT1mz4CeVqpCGXYLjvxlatrnof9Xjx4Q297choG5df0Zca6tDQWP/itwgBQghceKnHS0O9L1TD9ftUFLy48eJu/OlT04nTB+LBSx1eGvFzqLKC3BMnOFBeRnFpNRarsfPqMYYHbkLk/YC+7pUmYGwtsBWAH2hEGp2NaeMqsJ4KhLIkMWlmBhXVjShCpXvPqFDfnPnDqPG6sUtmZCQ87gBJ3SO4ZvJQjJJMuMOK2RJc6sDBXZg+aRhGSWZw+k+Tu4VLL8HuMPHJJ4VIASlkoQo6MZF2nGHWzqnHtBRt7VYRGDFb+K1Zwk9f4SdN+MkUgfRJQn1infhflTOux7Txv2uvRL72SkRZJZSUgaJC13ik3j34f5GzOiWQkuKDpYfzJ5Hnj2jPAwNQUdZwHolmolHZGAwTAb/KPTdvFWHhVkaNTTmnQVkybwtVFY0cqrlHMprMBq6elcnKhz7i689+CJUGzzWpq/VQW+1h5YtTcDjNwW/wABpdft7feuSc/jgxe2JfqTk7/zeNpiqnFESz8wAAAABJRU5ErkJggg=="> A: Internet Explorer does not support that style of image embedding. A: Your suggested solution would work perfectly. 
I have done the same thing. But you don't need a servlet for this. Struts2 already has a stream result. See this Struts 2 Example which describes exactly what you want.
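For the data: URI alternative mentioned above, the encoding step is mechanical. A short Python sketch of it (the payload here is plain text purely to keep the base64 readable; a PNG's bytes work identically with mime "image/png"):

```python
import base64

def to_data_uri(data: bytes, mime: str) -> str:
    # Build a data: URI in the shape described above (RFC 2557 style):
    # data:<mimetype>;base64,<base64-encoded bytes>
    return f"data:{mime};base64,{base64.b64encode(data).decode('ascii')}"

uri = to_data_uri(b"hello", "text/plain")
print(uri)  # data:text/plain;base64,aGVsbG8=
```

The resulting string can be dropped straight into an img src attribute, with the IE caveat noted above.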
{ "language": "en", "url": "https://stackoverflow.com/questions/151100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Which is correct? catch (_com_error e) or catch (_com_error& e)? Which one should I use?

catch (_com_error e)

or

catch (_com_error& e)

A: Also, note that, when using MFC, you may have to catch by pointer. Otherwise, @JaredPar's answer is the way you should normally go (and hopefully never have to deal with things that throw a pointer).
A: Definitely the second. If you had the following:

class my_exception : public exception
{
    int my_exception_data;
};

void foo()
{
    throw my_exception();
}

void bar()
{
    try
    {
        foo();
    }
    catch (exception e)
    {
        // e is "sliced off" - you lose the "my_exception-ness" of the exception object
    }
}

A: The second. Here is my attempt at quoting Sutter: "Throw by value, catch by reference." Learn to catch properly: Throw exceptions by value (not pointer) and catch them by reference (usually to const). This is the combination that meshes best with exception semantics. When rethrowing the same exception, prefer just throw; to throw e;. Here's the full Item 73. Throw by value, catch by reference. The reason to avoid catching exceptions by value is that it implicitly makes a copy of the exception. If the exception is of a subclass, then information about it will be lost.

try {
    throw MyException("error");
}
catch (Exception e) {
    /* Implies: Exception e(MyException("error")); */
    /* e is an instance of Exception, but not MyException */
}

Catching by reference avoids this issue by not copying the exception.

try {
    throw MyException("error");
}
catch (Exception& e) {
    /* Implies: Exception& e = MyException("error"); */
    /* e is an instance of MyException */
}

A: Personally, I would go for the third option:

catch (const _com_error& e)
{ "language": "en", "url": "https://stackoverflow.com/questions/151124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Annotated Spring-MVC controller not recognized when controller extends interface I'm using Spring 2.5, and am using annotations to configure my controllers. My controller works fine if I do not implement any additional interfaces, but the Spring container doesn't recognize the controller/request mapping when I add interface implementations. I can't figure out why adding an interface implementation messes up the configuration of the controller and the request mappings. Any ideas? So, this works:

package com.shaneleopard.web.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.providers.encoding.Md5PasswordEncoder;
import org.springframework.stereotype.Controller;
import org.springframework.validation.Errors;
import org.springframework.validation.Validator;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

import com.shaneleopard.model.User;
import com.shaneleopard.service.UserService;
import com.shaneleopard.validator.RegistrationValidator;
import com.shaneleopard.web.command.RegisterCommand;

@Controller
public class RegistrationController {

    @Autowired
    private UserService userService;
    @Autowired
    private Md5PasswordEncoder passwordEncoder;
    @Autowired
    private RegistrationValidator registrationValidator;

    @RequestMapping( method = RequestMethod.GET, value = "/register.html" )
    public void registerForm(@ModelAttribute RegisterCommand registerCommand) {
        // no op
    }

    @RequestMapping( method = RequestMethod.POST, value = "/register.html" )
    public String registerNewUser( @ModelAttribute RegisterCommand command, Errors errors ) {
        String returnView = "redirect:index.html";
        if ( errors.hasErrors() ) {
            returnView = "register";
        } else {
            User newUser = new User();
            newUser.setUsername( command.getUsername() );
            newUser.setPassword( passwordEncoder.encodePassword( command.getPassword(), null ) );
            newUser.setEmailAddress( command.getEmailAddress() );
            newUser.setFirstName( command.getFirstName() );
            newUser.setLastName( command.getLastName() );
            userService.registerNewUser( newUser );
        }
        return returnView;
    }

    public Validator getValidator() {
        return registrationValidator;
    }
}

but this doesn't. The class is identical (same imports, fields and methods) except that it now extends ValidatingController:

@Controller
public class RegistrationController extends ValidatingController {
    // ... same body as above ...
}

A: layne, you described the problem as happening when your controller class implements an interface, but in the code sample you provided, the problem occurs when your controller class extends another class of yours, ValidatingController. Perhaps the parent class also defines some Spring annotations, and the Spring container noticed them first and classified the controller class as that type of managed object and did not bother to check for the @Controller annotation you also defined in the subclass. Just a guess, but if that pans out, I'd suggest reporting it to the Spring team, as it sounds like a bug.
A: By default, JDK proxies are created from interfaces, and if the controller implements an interface the RequestMapping annotations get ignored because the target class is not being used. Add this in your servlet context config:

<aop:aspectj-autoproxy proxy-target-class="true"/>

A: I think you'll find that the problem is to do with inheritance and using annotations: they do not mix well. Have you tried to implement the above using inheritance and SimpleFormController, with all other details configured in your application context? This will at least narrow down the problem to an annotations and inheritance issue.
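The JDK-proxy answer can be illustrated with a rough Python analogy. This is not how Spring works internally, just a sketch of why an interface-based proxy hides handler methods declared only on the concrete class; all names here are invented:

```python
def request_mapping(path):
    # Hypothetical stand-in for Spring's @RequestMapping annotation.
    def deco(fn):
        fn.route = path
        return fn
    return deco

class Validating:                      # plays the role of the interface
    def get_validator(self):
        return "validator"

class RegistrationController(Validating):
    @request_mapping("/register.html")
    def register(self):
        return "registered"

def interface_proxy(target, interface):
    # Expose only methods declared on the interface, the way a JDK dynamic
    # proxy built from the interface (rather than the target class) would.
    proxy = type("Proxy", (), {})()
    for name, member in vars(interface).items():
        if callable(member) and not name.startswith("_"):
            setattr(proxy, name, getattr(target, name))
    return proxy

proxy = interface_proxy(RegistrationController(), Validating)
print(hasattr(proxy, "get_validator"))  # True: interface method survives
print(hasattr(proxy, "register"))       # False: the mapped handler is invisible
```

Forcing class-based proxies (proxy-target-class="true", which uses CGLIB subclassing instead) is the Spring-side fix the answer gives.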
{ "language": "en", "url": "https://stackoverflow.com/questions/151152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Stopping MaskedEditExtender from validating input in asp.net I have an asp.net textbox and a MaskedEditExtender control attached to it. The textbox is used for date input. The MaskedEditExtender has MaskType="Date" Mask="99/99/9999". When the form is submitted with an invalid date, the browser shows a Javascript error "... string was not recognized as a valid datetime". I know why the error shows up. Is there a way to use the extender to just control what the user enters and not validate or convert the input? A: Stop the form from submitting with an invalid date. Use a MaskedEditValidator A: On the text box you can set up a keypress function. Validate whether the key pressed is a number using String.fromCharCode(event.which) or event.keyCode (IE vs. FF), then check that the text box contains a valid value in the right format. If it's invalid you can reset it to a valid default, or just suppress the keypress using preventDefault(); if the format is invalid you can also disable the submit button. Good luck A: Don't specify the mask type as "Date"; that should stop this error.
{ "language": "en", "url": "https://stackoverflow.com/questions/151173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Sidewinder x6 keyboard macro for Visual Studio? The new keyboard from Microsoft, the SideWinder X6, can record in-game macros. I was wondering if it could be used in Visual Studio too (recording keys in an application)? (This could be very useful to press one key instead of Ctrl+M, M to Toggle Outlining.) A: YES IT CAN! Sure, why not? Okay, some reasoning behind my answer: just create a "gaming profile" for devenv.exe instead of a game. BAM! There you go. A: Yes, it's perfect for VS2008. A+++++++++ A: I have been using the SideWinder X6 and the Logitech MX Revolution for a while with Visual Studio; I even bought the same keyboard and mouse for work and home. I couldn't go back to a normal keyboard and mouse. They both can be assigned to a specific application. I don't really use the built-in macro functionality; instead I assigned a lot of keystrokes and linked them to my custom refactoring programs. Also get a programmable mouse; you can get a lot more out of Visual Studio. For example, assign CTRL+ALT+SHIFT to one of the mouse buttons; then you just hold the button and press any key, and you can produce CTRL+ALT+SHIFT+[A-Z] keystrokes! Some good programs to try with your own clever ideas if you bought one (trust me, you won't go back to a normal keyboard and mouse): ReSharper, AutoIt3, QMenu, DualMonitorTools
{ "language": "en", "url": "https://stackoverflow.com/questions/151183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: pl/sql dollar operator? I encountered the following ddl in a pl/sql script this morning: create index genuser.idx$$_0bdd0011 ... My initial thought was that the index name was generated by a tool...but I'm also not a pl/sql superstar so I could very well be incorrect. Does the double dollar sign have any special significance in this statement? A: No special meaning or significance. SQL> create table t (col number) 2 / Table created. SQL> create index idx$$_0bdd0011 on t(col) 2 / Index created. Note: CREATE INDEX is a DDL statement which is usually executed in a SQL script, not in PL/SQL. A: Your initial thought seems to be correct. That would look to be an index name generated by a tool (but not assigned by Oracle because an index name wasn't specified). Dollar signs don't have any particular meaning other than being valid symbols that are rarely used by human developers and so are handy to reduce the risk that a system-generated name conflicts with a human-generated name. A: Regardless of where he name came from, it's contrary to Oracle's documented advice: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements008.htm#SQLRF00223 Oracle strongly discourages you from using $ and # in nonquoted identifiers
{ "language": "en", "url": "https://stackoverflow.com/questions/151190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Possible to use SQL to sort by date but put null dates at the back of the results set? I have a bunch of tasks in a MySQL database, and one of the fields is "deadline date". Not every task has to have a deadline date. I'd like to use SQL to sort the tasks by deadline date, but put the ones without a deadline date at the back of the result set. As it is now, the null dates show up first, then the rest are sorted by deadline date, earliest to latest. Any ideas on how to do this with SQL alone? (I can do it with PHP if needed, but an SQL-only solution would be great.) Thanks! A: Here's a solution using only standard SQL, not ISNULL(). That function is not standard SQL and may not work on other brands of RDBMS. SELECT * FROM myTable WHERE ... ORDER BY CASE WHEN myDate IS NULL THEN 1 ELSE 0 END, myDate; A: SELECT foo, bar, due_date FROM tablename ORDER BY CASE IFNULL(due_date, 0) WHEN 0 THEN 1 ELSE 0 END, due_date So you have two ORDER BY keys. The first puts all the non-nulls in front, and the second then sorts those by due date. A: The easiest way is using the minus operator with DESC. SELECT * FROM request ORDER BY -date DESC In MySQL, NULL values are considered lower in order than any non-NULL value, so when sorting in ascending (ASC) order NULLs are listed first, and when descending (DESC) they are listed last. When a - (minus) sign is added before the column name, NULL becomes -NULL. Since -NULL == NULL, adding DESC makes all the rows sort by date in ascending order, followed by the NULLs last. A: SELECT * FROM myTable WHERE ... ORDER BY ISNULL(myDate), myDate
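The asker mentioned falling back to PHP if needed; the same nulls-last trick (a two-part sort key with a "is null" flag first) works in application code too. A sketch in Python, with a made-up task list for illustration:

```python
from datetime import date

# Hypothetical tasks: (name, deadline), where None means "no deadline"
tasks = [
    ("ship", date(2008, 9, 26)),
    ("triage", None),
    ("spec", date(2008, 8, 18)),
]

# Two-part key: the null flag first (False sorts before True), then the date.
# None rows get a dummy date, since the flag has already pushed them to the back.
tasks.sort(key=lambda t: (t[1] is None, t[1] or date.min))
# tasks is now: spec (8/18), ship (9/26), triage (no deadline)
```

This mirrors the SQL `CASE WHEN ... IS NULL THEN 1 ELSE 0 END, myDate` answer above: the first key component partitions nulls to the back, the second orders within each partition.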
{ "language": "en", "url": "https://stackoverflow.com/questions/151195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: How to calculate number of days between two given dates If I have two dates (ex. '8/18/2008' and '9/26/2008'), what is the best way to get the number of days between these two dates? A: For calculating dates and times, there are several options but I will write the simple way: from datetime import timedelta, datetime, date import dateutil.relativedelta # current time date_and_time = datetime.now() date_only = date.today() time_only = datetime.now().time() # calculate date and time result = date_and_time - timedelta(hours=26, minutes=25, seconds=10) # calculate dates: years (-/+) result = date_only - dateutil.relativedelta.relativedelta(years=10) # months result = date_only - dateutil.relativedelta.relativedelta(months=10) # week results = date_only - dateutil.relativedelta.relativedelta(weeks=1) # days result = date_only - dateutil.relativedelta.relativedelta(days=10) # calculate time result = date_and_time - timedelta(hours=26, minutes=25, seconds=10) result.time() Hope it helps A: Days until Christmas: >>> import datetime >>> today = datetime.date.today() >>> someday = datetime.date(2008, 12, 25) >>> diff = someday - today >>> diff.days 86 More arithmetic here. A: There is also a datetime.toordinal() method that was not mentioned yet: import datetime print(datetime.date(2008,9,26).toordinal() - datetime.date(2008,8,18).toordinal()) # 39 https://docs.python.org/3/library/datetime.html#datetime.date.toordinal date.toordinal() Return the proleptic Gregorian ordinal of the date, where January 1 of year 1 has ordinal 1. For any date object d, date.fromordinal(d.toordinal()) == d. Seems well suited for calculating days difference, though not as readable as timedelta.days. 
A: from datetime import date def d(s): [month, day, year] = map(int, s.split('/')) return date(year, month, day) def days(start, end): return (d(end) - d(start)).days print days('8/18/2008', '9/26/2008') This assumes, of course, that you've already verified that your dates are in the format r'\d+/\d+/\d+'. A: Here are three ways to go at this problem: from datetime import datetime Now = datetime.now() StartDate = datetime.strptime(str(Now.year) +'-01-01', '%Y-%m-%d') NumberOfDays = (Now - StartDate) print(NumberOfDays.days) # Starts at 0 print(datetime.now().timetuple().tm_yday) # Starts at 1 print(Now.strftime('%j')) # Starts at 1 A: Everyone has answered excellently using date; let me try to answer it using pandas: import pandas as pd dt = pd.to_datetime('2008/08/18', format='%Y/%m/%d') dt1 = pd.to_datetime('2008/09/26', format='%Y/%m/%d') (dt1-dt).days This will give the answer. If one of the inputs is a dataframe column, simply use .dt.days in place of .days: (dt1-dt).dt.days A: You want the datetime module.
>>> from datetime import datetime >>> datetime(2008, 9, 26) - datetime(2008, 8, 18) datetime.timedelta(39) Another example: >>> import datetime >>> today = datetime.date.today() >>> print(today) 2008-09-01 >>> last_year = datetime.date(2007, 9, 1) >>> print(today - last_year) 366 days, 0:00:00 As pointed out here A: Using the power of datetime: from datetime import datetime date_format = "%m/%d/%Y" a = datetime.strptime('8/18/2008', date_format) b = datetime.strptime('9/26/2008', date_format) delta = b - a print delta.days # that's it A: from datetime import datetime start_date = datetime.strptime('8/18/2008', "%m/%d/%Y") end_date = datetime.strptime('9/26/2008', "%m/%d/%Y") print abs((end_date-start_date).days) A: It also can be easily done with arrow: import arrow a = arrow.get('2017-05-09') b = arrow.get('2017-05-11') delta = (b-a) print delta.days For reference: http://arrow.readthedocs.io/en/latest/ A: If you have two date objects, you can just subtract them, which computes a timedelta object. from datetime import date d0 = date(2008, 8, 18) d1 = date(2008, 9, 26) delta = d1 - d0 print(delta.days) The relevant section of the docs: https://docs.python.org/library/datetime.html. See this answer for another example.
A: Without using any library, just pure code: #Calculate the Days between Two Dates daysOfMonths = [ 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] def isLeapYear(year): # Pseudo code for this algorithm is found at # http://en.wikipedia.org/wiki/Leap_year#Algorithm ## if (year is not divisible by 4) then (it is a common year) #else if (year is not divisible by 100) then (it is a leap year) #else if (year is not divisible by 400) then (it is a common year) #else (it is a leap year) return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0 def Count_Days(year1, month1, day1): if month1 ==2: if isLeapYear(year1): if day1 < daysOfMonths[month1-1]+1: return year1, month1, day1+1 else: if month1 ==12: return year1+1,1,1 else: return year1, month1 +1 , 1 else: if day1 < daysOfMonths[month1-1]: return year1, month1, day1+1 else: if month1 ==12: return year1+1,1,1 else: return year1, month1 +1 , 1 else: if day1 < daysOfMonths[month1-1]: return year1, month1, day1+1 else: if month1 ==12: return year1+1,1,1 else: return year1, month1 +1 , 1 def daysBetweenDates(y1, m1, d1, y2, m2, d2,end_day): if y1 > y2: m1,m2 = m2,m1 y1,y2 = y2,y1 d1,d2 = d2,d1 days=0 while(not(m1==m2 and y1==y2 and d1==d2)): y1,m1,d1 = Count_Days(y1,m1,d1) days+=1 if end_day: days+=1 return days # Test Case def test(): test_cases = [((2012,1,1,2012,2,28,False), 58), ((2012,1,1,2012,3,1,False), 60), ((2011,6,30,2012,6,30,False), 366), ((2011,1,1,2012,8,8,False), 585 ), ((1994,5,15,2019,8,31,False), 9239), ((1999,3,24,2018,2,4,False), 6892), ((1999,6,24,2018,8,4,False),6981), ((1995,5,24,2018,12,15,False),8606), ((1994,8,24,2019,12,15,True),9245), ((2019,12,15,1994,8,24,True),9245), ((2019,5,15,1994,10,24,True),8970), ((1994,11,24,2019,8,15,True),9031)] for (args, answer) in test_cases: result = daysBetweenDates(*args) if result != answer: print "Test with data:", args, "failed" else: print "Test case passed!"
test() A: If you want to code the calculation yourself, then here is a function that will return the ordinal for a given year, month and day: def ordinal(year, month, day): return ((year-1)*365 + (year-1)//4 - (year-1)//100 + (year-1)//400 + [ 0,31,59,90,120,151,181,212,243,273,304,334][month - 1] + day + int(((year%4==0 and year%100!=0) or year%400==0) and month > 2)) This function is compatible with the date.toordinal method in the datetime module. You can get the number of days of difference between two dates as follows: print(ordinal(2021, 5, 10) - ordinal(2001, 9, 11)) A: Without using the datetime module in Python: # A date has day 'd', month 'm' and year 'y' class Date: def __init__(self, d, m, y): self.d = d self.m = m self.y = y # To store number of days in all months from # January to Dec. monthDays = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ] # This function counts number of leap years # before the given date def countLeapYears(d): years = d.y # Check if the current year needs to be considered # for the count of leap years or not if (d.m <= 2) : years-= 1 # A year is a leap year if it is a multiple of 4 # and not a multiple of 100, or a multiple of 400.
return years // 4 - years // 100 + years // 400 # This function returns number of days between two # given dates def getDifference(dt1, dt2) : # COUNT TOTAL NUMBER OF DAYS BEFORE FIRST DATE 'dt1' # initialize count using years and day n1 = dt1.y * 365 + dt1.d # Add days for months in given date for i in range(0, dt1.m - 1) : n1 += monthDays[i] # Since every leap year is of 366 days, # Add a day for every leap year n1 += countLeapYears(dt1) # SIMILARLY, COUNT TOTAL NUMBER OF DAYS BEFORE 'dt2' n2 = dt2.y * 365 + dt2.d for i in range(0, dt2.m - 1) : n2 += monthDays[i] n2 += countLeapYears(dt2) # return difference between two counts return (n2 - n1) # Driver program dt1 = Date(31, 12, 2018 ) dt2 = Date(1, 1, 2019 ) print(getDifference(dt1, dt2), "days") A: If you don't have a date handling library (or you suspect it has bugs in it), here's an abstract algorithm that should be easily translatable into most languages. Perform the following calculation on each date, and then simply subtract the two results. All quotients and remainders are positive integers. Step A. Start by identifying the parts of the date as Y (year), M (month) and D (day). These are variables that will change as we go along. Step B. Subtract 3 from M (so that January is -2 and December is 9). Step C. If M is negative, add 12 to M and subtract 1 from the year Y. (This changes the "start of the year" to 1 March, with months numbered 0 (March) through 11 (February). The reason to do this is so that the "day number within a year" doesn't change between leap years and ordinary years, and so that the "short" month is at the end of the year, so there's no following month needing special treatment.) Step D. Divide M by 5 to get a quotient Q₁ and remainder R₁. Add Q₁ × 153 to D. Use R₁ in the next step. (There are 153 days in every 5 months starting from 1 March.) Step E. Divide R₁ by 2 to get a quotient Q₂ and ignore the remainder. Add R₁ × 31 - Q₂ to D.
(Within each group of 5 months, there are 61 days in every 2 months, and within that the first of each pair of months is 31 days. It's safe to ignore the fact that Feb is shorter than 30 days because at this point you only care about the day number of 1-Feb, not of 1-Mar the following year.) Steps D & E combined - alternative method Before the first use, set L=[0,31,61,92,122,153,184,214,245,275,306,337] (This is a tabulation of the cumulative number of days in the (adjusted) year before the first day of each month.) Add L[M] to D. Step F Skip this step if you use Julian calendar dates rather than Gregorian calendar dates; the change-over varies between countries, but is taken as 3 Sep 1752 in most English-speaking countries, and 4 Oct 1582 in most of Europe. You can also skip this step if you're certain that you'll never have to deal with dates outside the range 1-Mar-1900 to 28-Feb-2100, but then you must make the same choice for all dates that you process. Divide Y by 100 to get a quotient Q₃ and remainder R₃. Divide Q₃ by 4 to get another quotient Q₄ and ignore the remainder. Add Q₄ + 36524 × Q₃ to D. Assign R₃ to Y. Step G. Divide Y by 4 to get a quotient Q₅ and ignore the remainder. Add Q₅ + 365 × Y to D. Step H. (Optional) You can add a constant of your choosing to D, to force a particular date to have a particular day-number. Do the steps A~G for each date, getting D₁ and D₂. Step I. Subtract D₁ from D₂ to get the number of days by which D₂ is after D₁. Lastly, a comment: exercise extreme caution dealing with dates prior to about 1760, as there was not agreement on which month was the start of the year; many places counted 1 March as the new year.
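As a sanity check, the steps above translate almost line-for-line into code. Here's a sketch in Python (using the table form of steps D & E and the Gregorian step F; the function and variable names are my own, not part of the original answer):

```python
def day_number(y, m, d):
    """Steps A-G of the algorithm above: absolute day number of a Gregorian date."""
    # Steps B/C: renumber months so the year starts on 1 March
    m -= 3
    if m < 0:
        m += 12
        y -= 1
    # Steps D & E combined: cumulative days before each (shifted) month
    L = [0, 31, 61, 92, 122, 153, 184, 214, 245, 275, 306, 337]
    d += L[m]
    # Step F: Gregorian century correction
    q3, r3 = divmod(y, 100)
    d += q3 // 4 + 36524 * q3
    y = r3
    # Step G: leap days and whole years within the century
    d += y // 4 + 365 * y
    return d

def days_between(date1, date2):
    # Step I: difference of the two day numbers; dates are (year, month, day) tuples
    return day_number(*date2) - day_number(*date1)

print(days_between((2008, 8, 18), (2008, 9, 26)))  # 39
```

This agrees with the `date.toordinal()` answer earlier in the thread (39 days for the question's dates, and 366 days across a leap year).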
{ "language": "en", "url": "https://stackoverflow.com/questions/151199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "732" }
Q: How to configure Caucho Resin's Java classpath to the system's library directory I have a folder, '/var/unity/conf', with some properties files in it, and I'd like Caucho's Resin JVM to have that directory on the classpath. What is the best way to modify resin.conf so that Resin knows to add this directory to the classpath? A: With Resin 3.1.6 and above, use <server-default> ... <jvm-classpath>/var/unity/conf/...</jvm-classpath> ... </server-default> (I know, very late to the game; I was searching for the answer to this myself and found this post here, as well as the solution, so I thought I'd add back to the collective). A: cd $RESIN_HOME/lib && ln -s /var/unity/conf/....
{ "language": "en", "url": "https://stackoverflow.com/questions/151204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Are these interview questions too challenging for beginners? So I just interviewed two people today, and gave them "tests" to see what their skills were like. Both are entry level applicants, one of which is actually still in college. Neither applicant saw anything wrong with the following code. I do, obviously or I wouldn't have picked those examples. Do you think these questions are too harsh for newbie programmers? I guess I should also note neither of them had much experience with C#... but I don't think the issues with these are language dependent. //For the following functions, evaluate the code for quality and discuss. E.g. //E.g. could it be done more efficiently? could it cause bugs? public void Question1() { int active = 0; CheckBox chkactive = (CheckBox)item.FindControl("chkactive"); if (chkactive.Checked == true) { active = 1; } dmxdevice.Active = Convert.ToBoolean(active); } public void Question2(bool IsPostBack) { if (!IsPostBack) { BindlistviewNotification(); } if (lsvnotificationList.Items.Count == 0) { BindlistviewNotification(); } } //Question 3 protected void lsvnotificationList_ItemUpdating(object sender, ListViewUpdateEventArgs e) { ListViewDataItem item = lsvnotificationList.Items[e.ItemIndex]; string Email = ((TextBox)item.FindControl("txtEmailAddress")).Text; int id = Convert.ToInt32(((HiddenField)item.FindControl("hfID")).Value); ESLinq.ESLinqDataContext db = new ESLinq.ESLinqDataContext(); var compare = from N in db.NotificationLists where N.ID == id select N; if (compare.Count() > 0) { lblmessage.Text = "Record Already Exists"; } else { ESLinq.NotificationList Notice = db.NotificationLists.Where(N => N.ID == id).Single(); Notice.EmailAddress = Email; db.SubmitChanges(); } lsvnotificationList.EditIndex = -1; BindlistviewNotification(); } A: Do you think these questions are too harsh for newbie programmers? Yes, IMO they are too harsh. Neither applicant saw anything wrong with the following code. 
* *While there are plenty of 'possible problems', like not checking for null pointers, casting, etc, there don't appear to be any 'actual problems.' (eg: given sane input, the program looks like it will actually run). I'd guess that a newbie programmer will get hung up on that. *As linq is pretty new, and still not in wide use, it's going to go way over the head of your newbies. *What is an ESLinqDataContext? If people have no idea what your object is or how it behaves, how are they supposed to know if it is being used correctly or not? * evaluate the code for quality and discuss You only really learn to pick up stuff like invalid cast exceptions (let alone being able to judge and comment on 'code quality') from reasonable experience working with code similar to what's in front of you. Perhaps I'm misunderstanding, but to me, an "entry level" position pretty much by definition has no expectation of prior experience, so it doesn't seem fair to grade them on criteria which require experience. A: I am not a C# programmer so I don't know what BindlistviewNotification does, but changing public void Question2(bool IsPostBack) { if (!IsPostBack) { foo(); } if (lsvnotificationList.Items.Count == 0) { foo(); } } to public void Question2(bool IsPostBack) { if (!IsPostBack || lsvnotificationList.Items.Count == 0) { foo(); } } changes the function! If IsPostBack is false, foo is executed. If lsvnotificationList.Items.Count == 0 then foo is executed again. The revised code will only execute foo once. You could argue that BindlistviewNotification can be executed several times without side effects or that IsPostBack can never be false and lsvnotificationList.Items.Count equal 0 at the same time, but those are language dependent and implementation dependent issues that cannot be resolved with the given code snippet. Also, if this is a bug that's "supposed" to be caught in the interview, this isn't language agnostic at all. 
There's nothing that would tell me that this is supposed to be a bug. A: As a newbie, I would expect employers to care more about what my thought processes were rather than whether the answer was "correct" or not. I could come up with some answers to these questions, but they probably wouldn't be right. :) So with that said, I think you could get by with these questions, but you should definitely be a bit more liberal with what the "correct" answer is. As long as those conditions were made clear, I think that it's a bad thing to get a blank sheet with no thoughts. This means that they either genuinely think the code is perfect (which we know is almost never true) or are too sheepish to share their thoughts (which is also a bad thing). A: I don't think 1 and 2 are too difficult; #3 requires a decent understanding of how databinding and LINQ work in .NET, so it may be somewhat hard for an entry level person. I think these are fairly good questions for junior level developers who have some .NET experience. For what it's worth, my notes: Question 1: * *Using an integer as a boolean *No null check on FindControl *Excessive verbosity My revision: public void Question1() { CheckBox chkActive = item.FindControl("chkactive") as CheckBox; if (chkActive != null) dmxdevice.Active = chkActive.Checked; else dmxdevice.Active = false; } Question 2: * *Excessive verbosity *Databinding will happen twice if it's not a postback and there are no items to bind. My revision: public void Question2(bool IsPostBack) { if (!IsPostBack || lsvnotificationList.Items.Count == 0) { BindlistviewNotification(); } } Question 3: * *Replace the indexed lookup with e.Item.DataItem; *Add null checks to the FindControl calls. *Switch to TryParse and add a default id value. *Add better error handling. *Document some major architectural issues: why are you querying the database from the frontend? Those LINQ queries could be optimized too.
*Why not check for duplicates within the list items collection, and why not batch all updates with a single submit later? A: So you asked this of someone with no C#, .NET, ASP.NET or LINQ knowledge? I wouldn't expect anything on the paper. A: I don't typically throw code at someone interviewing for a position and say "what's wrong?", mainly because I'm not convinced it really finds me the best candidate. Interviews are sometimes stressful and a bit overwhelming, and coders aren't always on their A-game. Regarding the questions, honestly I think that if I didn't know C#, I'd have a hard time with question 3. Question #2 is a bit funky too. Yes, I get what you're going for there, but what if the idea was that BindlistviewNotification() was supposed to be called twice? It isn't clear, and one could argue there isn't enough info. Question 1 is easy enough to clean up, but I'm not convinced even it proves anything for an entry-level developer without a background in C#. I think I'd rather have someone talk me through how they'd attack a problem (in pseudo-code or whatever language they are comfortable with) and assess them from that. Just a personal opinion, though. A: My only advice is to make sure your test questions actually compile. I think the value in FizzBuzz type questions is watching HOW somebody solves your problems. Watching them load the solution into the IDE, compile it, step through the code with a step-through debugger, write tests for the apparent intended behavior, and then refactor the code such that it is more correct/maintainable is more valuable than knowing that they can read code and comprehend it. A: I am a junior programmer, so I can give it a try: * *"active" is unnecessary: CheckBox chkactive = (CheckBox)item.FindControl("chkactive"); dmxdevice.Active = chkactive.Checked; *You should use safe casting to cast to a CheckBox object.
Of course, you should be able to find the checkbox through its variable name anyway: CheckBox chkactive = item.FindControl("chkactive") as CheckBox; *The second function could be more concise: public void Question2(bool IsPostBack) { if (!IsPostBack || lsvnotificationList.Items.Count == 0) { BindlistviewNotification(); } } Only have time for those two, work is calling! EDIT: I just realized that I didn't answer your question. I don't think this is complicated at all. I am no expert by any means and I can easily see the inefficiencies here. I do however think that this is the wrong approach in general. These language-specific tests are not very useful in my opinion. Try to get a feeling for how they would attack and solve a problem. Anyone who can get past that test will be able to easily pick up a language and learn from their mistakes. A: I think you are testing the wrong thing. You are obviously looking for a C# programmer, rather than a talented programmer (not that you cannot be a talented C# programmer). The guys might be great C++ programmers, for example. C# can be learned; smarts cannot. I prefer to ask for code during an interview, rather than presenting code in a specific language (example: implement an ArrayList and a LinkedList in any language). When I was looking for 3 programmers earlier this year, to work mostly in C#, Java, PL/SQL, Javascript and Delphi, I looked for C/C++ programmers, and have not been disappointed. Anyone can learn Java; not everyone has a sense of good architecture, data structures and a grasp of new complex problems. C++ is hard, so it acts as a good filter. If I had asked them to find errors in this Java code, I would have lost them. BTW, I am a team lead, and have been programming for 20 years with dozens of large projects developed on time and on budget, and I had no clue what was wrong with question 2 or 3, having only a passing familiarity with C#, and certainly not with LINQ. Not that I could not learn it....
I figured it out after a couple of minutes, but would not expect a recent graduate to grasp it; all the LINQ code in question 3 is a distraction that hides the real problems. A: Not knowing C#, it took me a bit longer, but I'm assuming #1 could be expressed as dmxdevice.Active = ((CheckBox)item.FindControl("chkactive")).Checked == true And in #2 the two conditions could be joined as an A OR B statement? If that's what you're looking for, then no, those aren't too hard. I think #1 is something you might learn only after programming for a little while, but #2 seems easier. Are you looking for them to catch null pointer exceptions also? A: I think the first two are fine. The third may be a wee bit complicated for a graduate-level interview, but maybe not; it depends on whether they've done any .NET coding before. It has LINQ statements in there, and that's pretty new, especially since many unis/colleges are a bit behind in teaching the latest technology. So I would say run with 1 & 2, and either simplify 3 or heavily comment it, as others have mentioned. A: The first two appear to be more a test to see if a person can follow logically and realize that there is extra code. I'm not convinced that an entry level developer would understand that 'less is more' yet. However, if you explained the answer to Question 1 and they did not then extrapolate that answer to #2, I would be worried. A: Question 3 appears to be a big-ball-of-mud type of implementation. This is almost expected to be the style of a junior developer straight from college. I remember most of my profs/TAs in college never read my code -- they only ran the executable and then put in test sets. I would not expect a new developer to understand what was wrong with it... A: What did you expect to get out of this interview? Do your employees have to debug code without a debugger or something? Are you hiring somebody who will be doing only maintenance programming?
In my opinion these questions do little to enlighten you as to the abilities of the candidates. A: This is a fine question if you're looking for a maintenance programmer or tester. However, this isn't a good test to detect a good programmer. A good programmer will pass this test, certainly, but many programmers that are not good will also pass it. If you want a good programmer, you need to define a test that only a good programmer would pass. A good programmer has excellent problem solving skills, and knows how to ask questions to get to the kernel of a problem before they start working - saving both them and you time. A good programmer can program in many different languages with only a little learning curve, so your 'code' test can consist of pseudo code. Tell them you want them to solve a problem and have them write the solution in pseudo code - which means they don't have access to all those nifty libraries. A good programmer knows how the libraries function and can re-create them if needed. -Adam A: It's funny to see everyone jumping to change or fix the code. The questions targeted "efficiently? could it cause bugs?" Answers: Given enough time and money, sure, each one could probably be made more efficient. Bugs: please try to avoid casting and writing difficult-to-read code (code should be self-documenting). If it doesn't have bugs now, it might after the next junior programmer tries to change it... Also, avoid writing code that appears to rely on state contained outside the scope of the method/function - those nasty global variables. If I got some insightful comments like this, it might be appropriate to use this as a tool to create some good conversation.
But I think some better ice-breakers exist to determine if a person's critical thinking skills are appropriate and if they will fit in with the rest of the team. I don't think playing stump-the-programmer is very effective. A: Question #1 bool active = true; Question #2 if ((!IsPostBack) || (lsvnotificationList.Items.Count == 0)) Question #3: Do a total re-write and add comments. After a 30 second read I still can't tell what the code is trying to do. A: I'm not a C# programmer. On Q1, there seem to be undeclared objects dmxdevice and item, which confuses me. However, there does seem to be a lot of obfuscation in the rest of the code. On Q2, lsvnotificationList is not declared, and it is not clear to me why one test is abbreviated with ! and the other with == 0 -- but the tests could be combined with ||, it seems. In Q3, lsvnotificationList is not self-evidently declared, again. For the rest, it seems to be doing a database lookup using LINQ. I'd at least expect that to be factored into a function that validates the hidden field ID more transparently. But if you have other ideas, well...I'm still not a C# programmer. A: Disclaimer: I come from a 4 year degree and a year's worth of professional Java experience. The first two questions are quite straightforward, and if a candidate doesn't see a better approach I would suspect it's because they haven't been paying attention in class ;-) Most of the answers to the second question presented so far alter the function's behaviour. The function could very well be evaluated twice in the original code, although I can't say if that is the intent of the function. Side effects are important. I would probably one-line the first function, myself. The questions are fairly language agnostic, but they're not library agnostic, which I would argue is equally important.
If you're specifically looking for .NET knowledge, well and good, but without Google I couldn't tell you what an ESLinq.DataContext is, and my answer to the third question suffers accordingly. As it is, it's nearly incomprehensible to me. I think you also have to be careful how you present the questions. There's nothing incorrect about the first two methods, per se. They're just a little more verbose than they should be. I would just present them with the sheet and say, "What do you think of this code?" Make it open ended, that way if they want to bring up error-handling/logging/commenting or other things, it doesn't limit the discussion. A: A cursory glance indicates that most of the rest of the code suffers from poor structure and unnecessary conditionals etc. There's nothing inherently "wrong" with that, especially if the program runs as expected. Maybe you should change the question? On the other hand, the casting doesn't look like it's being done correctly at all, e.g. (cast)object.Method() vs (cast)(object.Method()) vs ((cast)object).Method(). In the first case, it's not a language agnostic problem though - it depends on rules of precedence. I don't think it was that hard, but it all depends on what you wanted to test. IMO, the smart candidate should have asked a lot of questions about the function of the program and the structure of the classes before attempting to answer, e.g. How are they supposed to know if "item" is a global/member var if they don't ask? How do they know its type? Do they even know if it supports a FindControl method? What about FindControl's return type? I'm not sure how many colleges teach Linq yet though, so maybe you should remove that part. A: No one's answering #3 with code. That should indicate how people feel about it. Usually stackoverflowers meet these head-first. Here's my stab at it. I had to look up the EventArgs on msdn to know the properties. I know LINQ because I've studied it closely for the past 8 months.
I don't have much UI experience, so I can't tell if the call to bind in the event handler is bad (or other such things that would be obvious to a UI coder). protected void lsvnotificationList_ItemUpdating(object sender, ListViewUpdateEventArgs e) { string Email = e.NewValues["EmailAddress"].ToString(); int id = Convert.ToInt32(e.NewValues["ID"]); using (ESLinq.ESLinqDataContext db = new ESLinq.ESLinqDataContext(connectionString)) { List<NotificationList> compare = db.NotificationLists.Where(n => n.ID == id).ToList(); if (!compare.Any()) { lblmessage.Text = "Record Does Not Exist"; } else { NotificationList Notice = compare.First(); Notice.EmailAddress = Email; db.SubmitChanges(); } } lsvnotificationList.EditIndex = -1; BindlistviewNotification(); } A: While people here obviously have no trouble hitting this code in their spare time, as someone who went through the whole job search/interviewing process fresh out of college about a year ago I think you have to remember how stressful questions like these can be. I understand you were just looking for the thought process, but I think you would get more out of people if you brought questions like this up casually and conversationally after you calm the interviewee down. This may sound like a cop out, but questions about code that will technically work, but needs some pruning, can be much more difficult than correcting code that doesn't compile, because people will assume that the examples are supposed not to compile, and will drive themselves up a wall trying to figure out the trick to your questions. Some people never get stressed by interview questions, but a lot do, even some talented programmers that you probably don't want to rule out, unless you are preparing them for a situation where they have to program with a loaded gun to their head. The code itself in question 3 seems very C# specific.
I only know that as LINQ because someone pointed it out in the answers here, but coming in as a Java developer, I would not recognize that at all. I mean do you really expect colleges to teach a feature that was only recently introduced in .net 3.5? I'd also like to point out how many people here were tripped up by question 2: by streamlining the code, they accidentally changed the behavior of the code. That should tell you a lot about the difficulty of your questions. A: Ok, so after staying up well past my bedtime to read all the answers and comment on most of them... General consensus seems to be that the questions aren't too bad but, especially for Q3, could be better served by using pseudo-code or some other technique to hide some of the language specific stuff. I guess for now I'll just not weigh these questions too heavily. (Of course, their lack of SQL knowledge is still disturbing... if only because they both had SQL on their resume. :( ) A: I'll have to say that my answer to these problems is that without comments (or documentation) explaining what the code is MEANT to do, there is little reason to even look at the code. The code does EXACTLY what it does. If you change it to do something else, even change it to prevent throwing an exception, you may make it do something unintended and break the larger program. The problem with all three questions is that there is no intent. If you modify the code, you are assuming that you know the intent of the original coder. And that assumption will often be wrong. And to answer the question: Yes, this is too difficult for most junior programmers, because documenting code is never taught. A: Okay, I'm not going to answer the C# questions; from what I see here you have enough candidates that would do fine in a job interview with you. I do think that the tests won't give you a good view of a person's programming skills.
Have a look at Joel's interviewing Guide: http://www.joelonsoftware.com/articles/fog0000000073.html He talks about two things when it comes to possible candidates: are they smart AND do they get the job done (now that's a powerful combination). Let your candidates talk a bit about projects they did or what they're toying around with at home. Find out if they are passionate about programming. Some experience is nice of course, just don't ask them to do tricks. A: Q1 also has a potential InvalidCastException on the item.FindControl() line. I don't think Q1 or Q2 are anywhere near too hard, even for non-C# users. Any level of coder should be able to see that you should be using a boolean for active, and only using one if statement. Q3 though at least needs comments, as someone else noted. That's not basic code, especially if you're targeting non-C# users with it too. A: In question 2, for better modularity, I would suggest passing the count of lsvnotificationList.Items as a parameter: public void Question2(bool IsPostBack, int listItemsCount) { if (!IsPostBack || listItemsCount == 0) BindlistviewNotification(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/151210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: VMware Server 2.0 - The VMware Infrastructure Web Service not responding After installing VMware Server I get the following error when I try to access the VMware web-based server manager: The VMware Infrastructure Web Service at "http://localhost:8222/sdk" is not responding A: Go into the services manager and check that the 'VMware Host Agent' service is running. If not, then start it and then try browsing to the site again. A: VMware Hostd was not working for me either. However, in trying to start the service it stopped automatically. Typically when this happens it is because there is an error in your config.xml. C:\ProgramData\VMware\VMware Server\hostd\config.xml In my case, checking the logs at: C:\ProgramData\VMware\VMware Server showed it erroring out after "Trying hostsvc". Searching the config.xml for hostsvc showed references to several things, the first thing was the datastore. In checking my datastores.xml file: C:\ProgramData\VMware\VMware Server\hostd\datastores.xml. I found it full of all sorts of random characters instead of a properly formed XML document. Renaming datastores.xml to datastores.xml.bad allowed me to start the service. At which point I had to add back my datastores through the GUI. Hopefully this will help someone else out. I did not find any other references in Google to this issue. A: Try accessing via "http://localhost:8222" without the /sdk. You can also try the secure site via "https://localhost:8333".
{ "language": "en", "url": "https://stackoverflow.com/questions/151228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I get the Local Network IP address of a computer programmatically? I need to get the actual local network IP address of the computer (e.g. 192.168.0.220) from my program using C# and .NET 3.5. I can't just use 127.0.0.1 in this case. How can I accomplish this? A: If you are looking for the sort of information that the command line utility, ipconfig, can provide, you should probably be using the System.Net.NetworkInformation namespace. This sample code will enumerate all of the network interfaces and dump the addresses known for each adapter. using System; using System.Net; using System.Net.NetworkInformation; class Program { static void Main(string[] args) { foreach ( NetworkInterface netif in NetworkInterface.GetAllNetworkInterfaces() ) { Console.WriteLine("Network Interface: {0}", netif.Name); IPInterfaceProperties properties = netif.GetIPProperties(); foreach ( IPAddress dns in properties.DnsAddresses ) Console.WriteLine("\tDNS: {0}", dns); foreach ( IPAddressInformation anycast in properties.AnycastAddresses ) Console.WriteLine("\tAnyCast: {0}", anycast.Address); foreach ( IPAddressInformation multicast in properties.MulticastAddresses ) Console.WriteLine("\tMultiCast: {0}", multicast.Address); foreach ( IPAddressInformation unicast in properties.UnicastAddresses ) Console.WriteLine("\tUniCast: {0}", unicast.Address); } } } You are probably most interested in the UnicastAddresses. A: Using Dns requires that your computer be registered with the local DNS server, which is not necessarily true if you're on an intranet, and even less likely if you're at home with an ISP. It also requires a network roundtrip -- all to find out info about your own computer.
The proper way: NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces(); foreach(NetworkInterface adapter in nics) { foreach(var x in adapter.GetIPProperties().UnicastAddresses) { if (x.Address.AddressFamily == AddressFamily.InterNetwork && x.IsDnsEligible) { Console.WriteLine(" IPAddress ........ : {0}", x.Address.ToString()); } } } (UPDATE 31-Jul-2015: Fixed some problems with the code) Or for those who like just a line of Linq: NetworkInterface.GetAllNetworkInterfaces() .SelectMany(adapter=> adapter.GetIPProperties().UnicastAddresses) .Where(adr=>adr.Address.AddressFamily == AddressFamily.InterNetwork && adr.IsDnsEligible) .Select (adr => adr.Address.ToString()); A: In How to get IP addresses in .NET with a host name by John Spano, it says to add the System.Net namespace, and use the following code: //To get the local IP address string sHostName = Dns.GetHostName (); IPHostEntry ipE = Dns.GetHostByName (sHostName); IPAddress [] IpA = ipE.AddressList; for (int i = 0; i < IpA.Length; i++) { Console.WriteLine ("IP Address {0}: {1} ", i, IpA[i].ToString ()); } A: As a machine can have multiple IP addresses, the correct way to figure out your ip address that you're going to be using to route to the general internet is to open a socket to a host on the internet, then inspect the socket connection to see what the local address that is being used in that connection is. By inspecting the socket connection, you will be able to take into account weird routing tables, multiple ip addresses and whacky hostnames. The trick with the hostname above can work, but I wouldn't consider it entirely reliable. A: If you know there are one or more IPv4 addresses for your computer, this will provide one of them: Dns.GetHostAddresses(Dns.GetHostName()) .First(a => a.AddressFamily == AddressFamily.InterNetwork).ToString() GetHostAddresses normally blocks the calling thread while it queries the DNS server, and throws a SocketException if the query fails.
I don't know whether it skips the network call when looking up your own host name.
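The socket-inspection approach described above can be sketched with a few lines of code. This sketch is in Java rather than C# (the C# version is analogous: connect a Socket and inspect Socket.LocalEndPoint); 8.8.8.8 is just an arbitrary external address, and no packet is actually sent:

```java
import java.net.DatagramSocket;
import java.net.InetAddress;

public class LocalIp {
    static String localIp() {
        // connect() on a UDP socket sends no packets; the kernel only selects
        // a route, so getLocalAddress() reveals the source IP it would use
        // to reach the outside world.
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.connect(InetAddress.getByName("8.8.8.8"), 53);
            return socket.getLocalAddress().getHostAddress();
        } catch (Exception e) {
            return "127.0.0.1"; // offline machine with no route at all
        }
    }

    public static void main(String[] args) {
        System.out.println(localIp());
    }
}
```

Because the address literal is parsed locally, this avoids the DNS registration problem mentioned above entirely.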
{ "language": "en", "url": "https://stackoverflow.com/questions/151231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Has anyone ever got a remote JMX JConsole to work? It seems that I've never got this to work in the past. Currently, I KNOW it doesn't work. But we start up our Java process: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=6002 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false I can telnet to the port, and "something is there" (that is, if I don't start the process, nothing answers, but if I do, it does), but I cannot get JConsole to work when filling in the IP and port. Seems like it should be so simple, but no errors, no noise, no nothing. Just doesn't work. Anyone know the hot tip for this? A: Adding -Djava.rmi.server.hostname='<host ip>' resolved this problem for me. A: Sushicutta's steps 4-7 can be skipped by adding the following line to step 3: -Dcom.sun.management.jmxremote.rmi.port=<same port as jmx-remote-port> e.g. Add to start up parameters: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=12345 -Dcom.sun.management.jmxremote.rmi.port=12345 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=localhost For the port forwarding, connect using: ssh -L 12345:localhost:12345 <username>@<host> if your host is a stepping stone, simply chain the port forward by running the following on the step stone after the above: ssh -L 12345:localhost:12345 <username>@<host2> Mind that the hostname=localhost is needed to make sure the jmxremote is telling the rmi connection to use the tunnel. Otherwise it might try to connect directly and hit the firewall. A: Tried with Java 8 and newer versions. This solution also works well with firewalls. 1.
Add this to your java startup script on remote-host: -Dcom.sun.management.jmxremote.port=1616 -Dcom.sun.management.jmxremote.rmi.port=1616 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=localhost 2. Execute this on your computer. * *Windows users: putty.exe -ssh user@remote-host -L 1616:remote-host:1616 *Linux and Mac Users: ssh user@remote-host -L 1616:remote-host:1616 3. Start jconsole on your computer jconsole localhost:1616 4. Have fun! P.S.: during step 2, using ssh and -L you specify that the port 1616 on the local (client) host must be forwarded to the remote side. This is an ssh tunnel and helps to avoid firewalls or various network problems. A: PROTIP: The RMI ports are opened at arbitrary port numbers. If you have a firewall and don't want to open ports 1024-65535 (or use vpn) then you need to do the following. You need to fix (as in having a known number) the RMI Registry and JMX/RMI Server ports. You do this by putting a jar-file (catalina-jmx-remote.jar, it's in the extras) in the lib dir and configuring a special listener under server: <Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="10001" rmiServerPortPlatform="10002" /> (And of course the usual flags for activating JMX -Dcom.sun.management.jmxremote \ -Dcom.sun.management.jmxremote.ssl=false \ -Dcom.sun.management.jmxremote.authenticate=false \ -Djava.rmi.server.hostname=<HOSTNAME> \ See: JMX Remote Lifecycle Listener at http://tomcat.apache.org/tomcat-6.0-doc/config/listeners.html Then you can connect using this horrific URL: service:jmx:rmi://<hostname>:10002/jndi/rmi://<hostname>:10001/jmxrmi A: Check if your server is behind the firewall. JMX is based on RMI, which opens two ports when it starts. One is the registry port, default 1099, which can be specified by the com.sun.management.jmxremote.port option.
The other is for data communication, and is random, which is what causes the problem. The good news is that, from JDK 6 on, this random port can be specified by the com.sun.management.jmxremote.rmi.port option. export CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8991 -Dcom.sun.management.jmxremote.rmi.port=8991 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" A: Getting JMX through the Firewall is really hard. The problem is that standard RMI uses a second, randomly assigned port (besides the RMI registry). We have three solutions that work, but every case needs a different one: * *JMX over SSH Tunnel with Socks proxy, uses standard RMI with SSH magic http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html *JMX MP (alternative to standard RMI), uses only one fixed port, but needs a special jar on server and client http://meteatamel.wordpress.com/2012/02/13/jmx-rmi-vs-jmxmp/ *Start the JMX server from code; there it is possible to use standard RMI and a fixed second port: https://issues.apache.org/bugzilla/show_bug.cgi?id=39055 A: After putting my Google-fu to the test for the last couple of days, I was finally able to get this to work after compiling answers from Stack Overflow and this page http://help.boomi.com/atomsphere/GUID-F787998C-53C8-4662-AA06-8B1D32F9D55B.html. Reposting from the Dell Boomi page: To Enable Remote JMX on an Atom If you want to monitor the status of an Atom, you need to turn on Remote JMX (Java Management Extensions) for the Atom. Use a text editor to open the <atom_installation_directory>\bin\atom.vmoptions file.
Add the following lines to the file: -Dcom.sun.management.jmxremote.port=5002 -Dcom.sun.management.jmxremote.rmi.port=5002 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false The one line that I haven't seen any Stack Overflow answer cover is -Dcom.sun.management.jmxremote.rmi.port=5002 In my case, I was attempting to retrieve Kafka metrics, so I simply changed the above option to match the -Dcom.sun.management.jmxremote.port value. So, without authentication of any kind, the bare minimum config should look like this: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=(jmx remote port) -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=(jmx remote port) -Djava.rmi.server.hostname=(CNAME|IP Address) A: When testing/debugging/diagnosing remote JMX problems, first always try to connect on the same host that contains the MBeanServer (i.e. localhost), to rule out network and other non-JMX specific problems. A: There are already some great answers here, but there is a slightly simpler approach that I think is worth sharing. sushicutta's approach is good, but is very manual as you have to get the RMI port every time. Thankfully, we can work around that by using a SOCKS proxy rather than explicitly opening the port tunnels. The downside of this approach is that the JMX app you run on your machine needs to be configurable to use a proxy. For most processes you can do this by adding java properties, but some apps don't support this.
Steps: * *Add the JMX options to the startup script for your remote Java service: -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=8090 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false *Set up a SOCKS proxy connection to your remote machine: ssh -D 9696 user@remotemachine.com *Configure your local Java monitoring app to use the SOCKS proxy (localhost:9696). Note: You can sometimes do this from the command line, e.g.: jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=9696 A: The following worked for me (though I think port 2101 did not really contribute to this): -Dcom.sun.management.jmxremote.port=2100 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=2101 -Djava.rmi.server.hostname=<IP_ADDRESS>OR<HOSTNAME> I am connecting from a remote machine to a server which has Docker running and the process is inside the container. Also, I stopped firewallD but I don't think that was the issue as I could telnet to 2100 even with the firewall open. Hope it helps. A: You are probably experiencing an issue with a firewall. The 'problem' is that the port you specify is not the only port used; it uses 1 or maybe even 2 more ports for RMI, and those are probably blocked by a firewall. One of the extra ports will not be known up front if you use the default RMI configuration, so you have to open up a big range of ports - which might not amuse the server administrator.
There is a solution that does not require opening up a lot of ports, however. I've gotten it to work using the combined source snippets and tips from http://forums.sun.com/thread.jspa?threadID=5267091 - link doesn't work anymore http://blogs.oracle.com/jmxetc/entry/connecting_through_firewall_using_jmx http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html It's even possible to set up an ssh tunnel and still get it to work :-) A: I have a solution for this: If your Java process is running on Linux behind a firewall and you want to start JConsole / Java VisualVM / Java Mission Control on Windows on your local machine to connect it to the JMX port of your Java process. You need access to your linux machine via SSH login. All communication will be tunneled over the SSH connection. TIP: This solution works whether there is a firewall or not. Disadvantage: Every time you restart your java process, you will need to do all steps from 4 - 9 again. 1. You need the putty-suite for your Windows machine from here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html At least the putty.exe 2. Define one free port on your linux machine: <jmx-remote-port> Example: jmx-remote-port = 15666 3. Add arguments to the java process on the linux machine. This must be done exactly like this. If it's done like below, it works for linux machines behind firewalls (it works because of the -Djava.rmi.server.hostname=localhost argument).
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=<jmx-remote-port> -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=localhost Example: java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=15666 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=localhost ch.sushicutta.jmxremote.Main 4. Get the process id of your Java process ps -ef | grep <java-processname> result ---> <process-id> Example: ps -ef | grep ch.sushicutta.jmxremote.Main result ---> 24321 5. Find the arbitrary port for RMI server stub download The java process opens a new TCP port on the linux machine, where the RMI server stubs will be available for download. This port also needs to be available via SSH tunnel to get a connection to the Java Virtual Machine. This port can be found with netstat -lp; lsof -i also gives hints about which port has been opened by the java process. NOTE: This port always changes when the java process is started. netstat -lp | grep <process-id> tcp 0 0 *:<jmx-remote-port> *:* LISTEN 24321/java tcp 0 0 *:<rmi-server-port> *:* LISTEN 24321/java result ---> <rmi-server-port> Example: netstat -lp | grep 24321 tcp 0 0 *:15666 *:* LISTEN 24321/java tcp 0 0 *:37123 *:* LISTEN 24321/java result ---> 37123 6. Enable two SSH-Tunnels from your Windows machine with putty Source port: <jmx-remote-port> Destination: localhost:<jmx-remote-port> [x] Local [x] Auto Source port: <rmi-server-port> Destination: localhost:<rmi-server-port> [x] Local [x] Auto Example: Source port: 15666 Destination: localhost:15666 [x] Local [x] Auto Source port: 37123 Destination: localhost:37123 [x] Local [x] Auto 7. Log in to your Linux machine with Putty with this SSH-Tunnel enabled. Leave the putty session open.
When you are logged in, Putty will tunnel all TCP connections to the linux machine over the SSH port 22. JMX-Port: Windows machine: localhost:15666 >>> SSH >>> linux machine: localhost:15666 RMIServer-Stub-Port: Windows Machine: localhost:37123 >>> SSH >>> linux machine: localhost:37123 8. Start JConsole / Java VisualVM / Java Mission Control to connect to your Java process using the following URL. This works because JConsole / Java VisualVM / Java Mission Control thinks you are connecting to a port on your local Windows machine, but Putty sends all payload for port 15666 to your linux machine. On the linux machine, the java process answers first and sends back the RMI server port, in this example 37123. Then JConsole / Java VisualVM / Java Mission Control thinks it connects to localhost:37123, and Putty forwards the whole payload to the linux machine. The java process answers and the connection is open. [x] Remote Process: service:jmx:rmi:///jndi/rmi://localhost:<jndi-remote-port>/jmxrmi Example: [x] Remote Process: service:jmx:rmi:///jndi/rmi://localhost:15666/jmxrmi 9. ENJOY #8-] A: Are you running on Linux? Perhaps the management agent is binding to localhost: http://java.sun.com/j2se/1.5.0/docs/guide/management/faq.html#linux1 A: I am running JConsole/JVisualVm on windows connecting to Tomcat running on Linux Redhat ES3. Disabling packet filtering using the following command did the trick for me: /usr/sbin/iptables -I INPUT -s jconsole-host -p tcp --destination-port jmxremote-port -j ACCEPT where jconsole-host is either the hostname or the host address on which JConsole runs and jmxremote-port is the port number set for com.sun.management.jmxremote.port for remote management. A: I'm using boot2docker to run docker containers with Tomcat inside and I've got the same problem, the solution was to: * *Add -Djava.rmi.server.hostname=192.168.59.103 *Use the same JMX port in host and docker container, for instance: docker run ... -p 9999:9999 ....
Using different ports does not work. A: You need to also make sure that your machine name resolves to the IP that JMX is binding to; NOT localhost nor 127.0.0.1. For me, it has helped to put an entry into hosts that explicitly defines this. A: Getting JMX through the firewall isn't that hard at all. There is one small catch. You have to forward both your configured JMX port, e.g. 9010, and one of the dynamic ports it listens on; on my machine it was > 30000. A: These are the steps that worked for me (debian behind firewall on the server side, reached over VPN from my local Mac): check server ip hostname -i use JVM params: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=[jmx port] -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=[server ip from step 1] run application find pid of the running java process check all ports used by JMX/RMI netstat -lp | grep [pid from step 4] open all ports from step 5 on the firewall Voila. A: In order to make a contribution, this is what I did on CentOS 6.4 for Tomcat 6. * *Shut down the iptables service: service iptables stop *Add the following line to tomcat6.conf CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8085 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=[host_ip]" This way I was able to connect from another PC using JConsole. A: I'm trying to use JMC to run the Flight Recorder (JFR) to profile NiFi on a remote server that doesn't offer a graphical environment on which to run JMC.
Based on the other answers given here, and upon much trial and error, here is what I'm supplying to the JVM (conf/bootstrap.conf) when I launch NiFi: java.arg.90=-Dcom.sun.management.jmxremote=true java.arg.91=-Dcom.sun.management.jmxremote.port=9098 java.arg.92=-Dcom.sun.management.jmxremote.rmi.port=9098 java.arg.93=-Dcom.sun.management.jmxremote.authenticate=false java.arg.94=-Dcom.sun.management.jmxremote.ssl=false java.arg.95=-Dcom.sun.management.jmxremote.local.only=false java.arg.96=-Djava.rmi.server.hostname=10.10.10.92 (the IP address of my server running NiFi) I did put this in /etc/hosts, though I doubt it's needed: 10.10.10.92 localhost Then, upon launching JMC, I create a remote connection with these properties: Host: 10.10.10.92 Port: 9098 User: (nothing) Password: (ibid) Incidentally, if I click the Custom JMX service URL, I see: service:jmx:rmi:///jndi/rmi://10.10.10.92:9098/jmxrmi This finally did it for me.
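One of the answers above lists "start the JMX server from code" as a way to pin the second RMI port. Here is a minimal, self-contained Java sketch of that idea (the port number 28199 is arbitrary; pick any free port). It serves the platform MBeanServer on a single fixed port and then connects to itself the same way JConsole would:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class FixedPortJmx {
    // Start a JMX connector on a fixed port and return how many MBeans a
    // remote client sees. Reusing the same port for the RMI registry and
    // the server stub avoids the random second port that firewalls block.
    static int probe(int port) throws Exception {
        LocateRegistry.createRegistry(port); // RMI registry on the fixed port
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:" + port
            + "/jndi/rmi://localhost:" + port + "/jmxrmi");
        JMXConnectorServer server =
            JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();
        JMXConnector client = JMXConnectorFactory.connect(url); // as JConsole would
        try {
            return client.getMBeanServerConnection().getMBeanCount();
        } finally {
            client.close();
            server.stop();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("MBeans visible: " + probe(28199));
    }
}
```

With this in place, only the one fixed port needs to be opened or tunneled, and the same service:jmx:rmi://... URL can be pasted into JConsole's remote connection dialog.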
{ "language": "en", "url": "https://stackoverflow.com/questions/151238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "125" }
Q: ASP.NET AJAX nested updatePanel modalPopup funkiness It seems that in some cases, if you end up with nested modalPopups wrapped with updatePanels (not ideal I know, and should probably be refactored, but that's what we're working with because of how some of the user controls we wanted to re-use were written), when you fire a postback that should open the nested modalPopup, instead it closes the parent one. For the sake of argument, if I set a breakpoint and run ((ModalPopupExtender)this.Parent.Parent.FindControl("modalPopupExtender'sID")).Show(); right before the child modalPopup's Show() method is called, it works as we originally expected. It seems to me that, because when updatePanels are nested, they can post back their parent, the parent modalPopup "doesn't know" it's supposed to be showing and reloads its panel's visibility from scratch as false. Because the child modalPopup is then nested inside a parent panel whose visibility is false, calling Show() on it has no effect either. So instead of getting another modalPopup open, the current one closes. This is not an error, just behavior we didn't expect, so it was difficult to track down with no exception thrown anywhere, but I think the above explanation makes sense... If I've understood the problem incorrectly, please clarify it and enlighten me, because this doesn't seem to happen all the time I'd think it would! At this point for this particular situation we're stuck re-writing some of those controls to not end up with nested updatePanels so this doesn't happen, but I'm curious: Has anyone run into this problem before, and did you come up with any clever work-around that doesn't involve a call to FindControl() to re-Show() the modalPopup in question? A: I have solved this problem! If you change the UpdatePanel's UpdateMode to "Conditional", the parent UpdatePanel doesn't post back when the child UpdatePanel posts back, and then nesting them is no issue at all!
I'm not sure why UpdateMode="Always" is the default, but lesson learned.
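For anyone hitting the same thing, the fix boils down to one attribute on the parent panel's markup. A sketch (the IDs are placeholders, not from the original page):

```aspx
<asp:UpdatePanel ID="ParentPanel" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <!-- parent popup content ... -->
        <asp:UpdatePanel ID="ChildPanel" runat="server" UpdateMode="Conditional">
            <ContentTemplate>
                <!-- child popup content; its postbacks no longer refresh ParentPanel -->
            </ContentTemplate>
        </asp:UpdatePanel>
    </ContentTemplate>
</asp:UpdatePanel>
```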
{ "language": "en", "url": "https://stackoverflow.com/questions/151241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I embed a File Version in an MSI file with Visual Studio? I have a setup project for my C# program, and this setup project has a Version in its properties. I'd like for the MSI file that is generated to have this Version embedded in it, so I can mouse over it in explorer and see what version the file is. I'm using VS2008. How can I do this? A: If you simply add the "Version: 1.5.0" text into the Description property of the Setup Project, the version number also shows on the MSI file like so: http://screencast.com/t/A499i6jS A: As far as I know an MSI file will never show a version. The simple reason is that MSI files are not PE files; they are a sort of database. Msiexec.exe then interprets this database to do the actual installation. The version property you mention is used by the MSI engine internally for upgrades, uninstalls etc and is never displayed. A: That's a good question but I don't know any setup tool that could do that. Moreover I never encountered an MSI file with a file version resource embedded in it, so it's not a common practice. Usually if I want to find out the version of an MSI file I have to open it in Orca and check the ProductVersion property there (in the Property table). A: Open up the associated .vdproj file in a text editor. Look for the "Product" section, then modify the "ProductVersion", and the "Manufacturer" fields. "Product" { "Name" = "8:Microsoft Visual Studio" "ProductName" = "8:tidAxCleanupScript" "ProductCode" = "8:{0949AAAD-2C29-415E-851C-825C74C9CA81}" "PackageCode" = "8:{8F012EF1-D5D0-43DC-BBFD-761A639DDB07}" "UpgradeCode" = "8:{38DE1949-0782-4EF3-BDC2-080EB5B73EF8}" "RestartWWWService" = "11:FALSE" "RemovePreviousVersions" = "11:TRUE" "DetectNewerInstalledVersion" = "11:TRUE" "InstallAllUsers" = "11:FALSE" "ProductVersion" = "8:1.5.0" "Manufacturer" = "8:Default Company Name" "ARPHELPTELEPHONE" = "8:" A: I might be wrong, but doesn't the msi version follow the version in the AssemblyInfo file of your startup project?
{ "language": "en", "url": "https://stackoverflow.com/questions/151250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to force abort on "glibc detected *** free(): invalid pointer" In a Linux environment, when getting "glibc detected *** free(): invalid pointer" errors, how do I identify which line of code is causing it? Is there a way to force an abort? I recall there being an ENV var to control this? How do I set a breakpoint in gdb for the glibc error? A: I recommend you get valgrind: valgrind --tool=memcheck --leak-check=full ./a.out A: In general, it looks like you might have to recompile glibc, ugh. You don't say what environment you're running on, but if you can recompile your code for OS X, then its version of libc has a free() that listens to this environment variable: MallocErrorAbort If set, causes abort(3) to be called if an error was encountered in malloc(3) or free(3), such as calling free(3) on a pointer previously freed. The man page for free() on OS X has more information. If you're on Linux, then try Valgrind; it can find some impossible-to-hunt bugs. A: How to set a breakpoint in gdb? (gdb) b filename:linenumber // e.g. b main.cpp:100 Is there a way to force an abort? I recall there being an ENV var to control this? I was under the impression that it aborted by default. Make sure you have the debug version installed. Or use libdmalloc5: "Drop-in replacement for the system's malloc, realloc, calloc, free and other memory management routines while providing powerful debugging facilities configurable at runtime. These facilities include such things as memory-leak tracking, fence-post write detection, file/line number reporting, and general logging of statistics." Add this to your link command: -L/usr/lib/debug/lib -ldmallocth gdb should automatically return control when glibc triggers an abort. Or you can set up a signal handler for SIGABRT to dump the stacktrace to a fd (file descriptor). Below, mp_logfile is a FILE*: void *array[512 / sizeof(void *)]; // 512 bytes' worth of return addresses is just an arbitrary limit, increase if you want
size_t size; size = backtrace (array, 512 / sizeof(void *)); backtrace_symbols_fd (array, size, fileno(mp_logfile)); A: I believe if you setenv MALLOC_CHECK_ to 2, glibc will call abort() when it detects the "free(): invalid pointer" error. Note the trailing underscore in the name of the environment variable. If MALLOC_CHECK_ is 1, glibc will print "free(): invalid pointer" (and similar printfs for other errors). If MALLOC_CHECK_ is 0, glibc will silently ignore such errors and simply return. If MALLOC_CHECK_ is 3, glibc will print the message and then call abort(). I.e., it's a bitmask. You can also call mallopt(M_CHECK_ACTION, arg) with an argument of 0-3, and get the same result as with MALLOC_CHECK_. Since you're seeing the "free(): invalid pointer" message, I think you must already be setting MALLOC_CHECK_ or calling mallopt(). By default glibc does not print those messages. As for how to debug it, installing a handler for SIGABRT is probably the best way to proceed. You can set a breakpoint in your handler or deliberately trigger a core dump.
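For the breakpoint half of the question: since glibc's error path ends in abort(), a breakpoint on abort itself will catch it. A typical session might look like this (output elided; the binary name is illustrative):

```
$ gdb ./a.out
(gdb) break abort
(gdb) run
...              # stops inside abort(); the faulty free() is a few frames up
(gdb) backtrace
```

From there, selecting the frame that called free() shows the offending source line, provided the program was compiled with -g.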
{ "language": "en", "url": "https://stackoverflow.com/questions/151268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Given an IMDB movie id, how do I programmatically get its poster image? movie id tt0438097 can be found at http://www.imdb.com/title/tt0438097/ What's the url for its poster image? A: As I'm sure you know, the actual url for that image is http://ia.media-imdb.com/images/M/MV5BMTI0MDcxMzE3OF5BMl5BanBnXkFtZTcwODc3OTYzMQ@@._V1._SX100_SY133_.jpg You're going to be hard pressed to figure out how it's generated, though, and they don't seem to have a publicly available API. Screen scraping is probably your best bet. The picture seems to generally be inside a div with class=photo and the name of the a tag is poster. The image itself is just inside the a tag. A: The URL is a random string as far as I can tell. It can still be easily retrieved. It is the only img inside the anchor named poster. So, if you are reading the source, simply search for <a name="poster" and it will be the text following the first src=" from there. However, you will need to keep the screen-scraping code updated because that will probably change. You should also be aware that the images are copyrighted, so be careful to only use the image under a good "fair use" rationale. A: If a thumb is enough, you can use the Facebook Graph API: http://graph.facebook.com/?ids=http://www.imdb.com/title/tt0438097/ Gets you a thumbnail: http://profile.ak.fbcdn.net/hprofile-ak-ash2/50289_117058658320339_650214_s.jpg A: I know that it is way too late, but in my project I used this: * *Use omdbapi. Let's take the example of Inception: use www.omdbapi.com/?t=inception and it will return a json object. *In that json object, get the "Poster" field; it contains the URL of the poster image. A: You can use the imdb-cli tool to download a movie's poster, e.g. omdbtool -t "Ice Age: The Meltdown" | wget `sed -n '/^poster/{n;p;}'` A: Check out http://www.imdbapi.com/. It returns the poster URL as a string. 
For example, check http://www.imdbapi.com/?i=&t=inception and you'll get the poster address: "Poster":"http://ia.media-imdb.com/images/M/MV5BMjAxMzY3NjcxNF5BMl5BanBnXkFtZTcwNTI5OTM0Mw@@._V1._SX320.jpg" Update: Seems like the site owner had some arguments with IMDB legal staff. As mentioned on the original site, the new site's address is http://www.omdbapi.com/ A: The best solution is to use tmdb.org: * *use your imdbid in this api url after find/: https://api.themoviedb.org/3/find/tt0111161?api_key=__YOURAPIKEY__&external_source=imdb_id *Retrieve the json response and select the poster_path attribute: e.g. "poster_path":"/9O7gLzmreU0nGkIB6K3BsJbzvNv.jpg" *Prepend this path with "http://image.tmdb.org/t/p/w150", and you will have the poster URL that you can use in an img tag :-) omdbapi works, but I found out you cannot really use these images (because of screen scraping, and they are blocked anyway if you use them in an img tag) A: Be aware, though, that the terms of service explicitly forbid screen scraping. You can download the IMDB database as a set of text files, but as I understand it, the IMDB movie ID is nowhere to be found in these text files. A: You can use the Trakt API: you have to make a search request with the imdb ID, and the Json result given by the Trakt API contains links for two images of that movie (poster and fan art) http://trakt.tv/api-docs/search-movies A: I've done something similar using phantomjs and wget. This bit of phantomjs accepts a search query and returns the first result's movie poster url. You could easily change it to your needs. 
var system = require('system'); if (system.args.length === 1) { console.log('Usage: moviePoster.js <movie name>'); phantom.exit(); } var formattedTitle = encodeURIComponent(system.args[1]).replace(/%20/g, "+"); var page = require('webpage').create(); page.open('http://m.imdb.com/find?q=' + formattedTitle, function() { var url = page.evaluate(function() { return 'http://www.imdb.com' + $(".title").first().find('a').attr('href'); }); page.close(); page = require('webpage').create(); page.open(url, function() { var url = page.evaluate(function() { return 'http://www.imdb.com' + $("#img_primary").find('a').attr('href'); }); page.close(); page = require('webpage').create(); page.open(url, function() { var url = page.evaluate(function() { return $(".photo").first().find('img').attr('src'); }); console.log(url); page.close(); phantom.exit(); }); }); }); I download the image using wget for many movies in a directory using this bash script. The mp4 files have names that the IMDB likes, and that's why the first search result is nearly guaranteed to be correct. Names like "Love Exposure (2008).mp4". for file in *.mp4; do title="${file%.mp4}" if [ ! -f "${title}.jpg" ] then wget `phantomjs moviePoster.js "$title"` -O "${title}.jpg" fi done Then minidlna uses the movie poster when it builds the thumbnail database, because it has the same name as the video file. 
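The TMDb lookup described in an earlier answer avoids scraping entirely. A minimal sketch of it (the endpoint and the poster base URL are the ones quoted above; the API key and the canned response below are placeholders, since a real call needs your own TMDb key):

```python
from urllib.parse import urlencode

POSTER_BASE = "http://image.tmdb.org/t/p/w150"

def find_url(imdb_id, api_key):
    """Build the TMDb find-by-IMDb-id request URL from the answer above."""
    query = urlencode({"api_key": api_key, "external_source": "imdb_id"})
    return "https://api.themoviedb.org/3/find/%s?%s" % (imdb_id, query)

def poster_url(find_response):
    """Pull the first movie result's poster URL out of a decoded JSON response."""
    results = find_response.get("movie_results", [])
    if not results or not results[0].get("poster_path"):
        return None
    return POSTER_BASE + results[0]["poster_path"]

# Canned response shaped like the answer's example; a real lookup would
# fetch and json-decode find_url("tt0111161", YOUR_API_KEY) instead.
sample = {"movie_results": [{"poster_path": "/9O7gLzmreU0nGkIB6K3BsJbzvNv.jpg"}]}
print(poster_url(sample))
# http://image.tmdb.org/t/p/w150/9O7gLzmreU0nGkIB6K3BsJbzvNv.jpg
```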
A: $Movies = Get-ChildItem -path "Z:\MOVIES\COMEDY" | Where-Object {$_.Extension -eq ".avi" -or $_.Extension -eq ".mp4" -or $_.Extension -eq ".mkv" -or $_.Extension -eq ".flv" -or $_.Extension -eq ".xvid" -or $_.Extension -eq ".divx"} | Select-Object Name, FullName | Sort Name
#Grab all the extension types and filter only the ones I want

$COMEDY = ForEach($Movie in $Movies)
{
    $Title = $($Movie.Name)
    #Remove the file extension
    $Title = $Title.split('.')[0]

    #Changing the case to all lower
    $Title = $Title.ToLower()

    #Replace a space w/ %20 for the search structure
    $searchTitle = $Title.Replace(' ','%20')

    #Fetching search results
    $moviesearch = Invoke-WebRequest "http://www.imdb.com/search/title?title=$searchTitle&title_type=feature"

    #Moving html elements into variable
    $titleclassarray = $moviesearch.AllElements | where Class -eq 'title' | select -First 1

    #Checking if result contains movies
    try
    {
        $titleclass = $titleclassarray[0]
    }
    catch
    {
        Write-Warning "No movie found matching that title http://www.imdb.com/search/title?title=$searchTitle&title_type=feature"
    }

    #Parsing HTML for movie link
    $regex = "<\s*a\s*[^>]*?href\s*=\s*[`"']*([^`"'>]+)[^>]*?>"
    $linksFound = [Regex]::Matches($titleclass.innerHTML, $regex, "IgnoreCase")

    #Fetching the first result
    $titlelink = New-Object System.Collections.ArrayList
    foreach($link in $linksFound)
    {
        $trimmedlink = $link.Groups[1].Value.Trim()
        if ($trimmedlink.Contains('/title/'))
        {
            [void] $titlelink.Add($trimmedlink)
        }
    }
    #Fetching movie page
    $movieURL = "http://www.imdb.com$($titlelink[0])"

    #Grabbing the URL for the Movie Poster
    $MoviePoster = ((Invoke-WebRequest -Uri $movieURL).Images | Where-Object {$_.title -like "$Title Poster"} | Where src -like "http:*").src

    $MyVariable = "<a href=" + '"' + $($Movie.FullName) + '"' + " " + "title='$Title'" + ">"
    $ImgLocation = "<img src=" + '"' + "$MoviePoster" + '"' + "width=" + '"' + "225" + '"' + "height=" + '"' + "275" + '"' + "border=" + '"' + "0" + '"' + "alt=" + '"' + $Title + '"' + "></a>" + "&nbsp;" + "&nbsp;" + "&nbsp;" + "&nbsp;" + "&nbsp;" + "&nbsp;" + "&nbsp;" + "&nbsp;" + "&nbsp;"

    Write-Output $MyVariable, $ImgLocation

}
$COMEDY | Out-File z:\db\COMEDY.htm

$after = Get-Content z:\db\COMEDY.htm

#adding a back button to the Index
$before = Get-Content z:\db\before.txt

#adding the back button prior to the poster images content
Set-Content z:\db\COMEDY.htm -value $before, $after
A: After playing around with @Hawk's BASE64 discovery above, I found that everything after the BASE64 code is display info. If you remove everything between the last @ and .jpg, it will load the image in the highest res it has. https://m.media-amazon.com/images/M/MV5BMjAwODg3OTAxMl5BMl5BanBnXkFtZTcwMjg2NjYyMw@@._V1_UX182_CR0,0,182,268_AL_.jpg becomes https://m.media-amazon.com/images/M/MV5BMjAwODg3OTAxMl5BMl5BanBnXkFtZTcwMjg2NjYyMw@@.jpg A: There is one API service provider which will provide you with the poster image URL and many other details based on the movie name you supply in their query string. Here is the link to the above service provider's website. You can sign up and use the API service within your code. A: Those poster images don't appear to have any correlation to the title page, so you'll have to retrieve the title page first, and then retrieve the img element for the page. The good news is that the img tag is wrapped in an a tag with name="poster". You didn't say what kind of tools you are using, but this is basically a screen-scraping operation. A: Here is my program to generate a human-readable html summary page for movie companies found on an imdb page. 
Change the initial url to your liking and it generates a html file where you can see title, summary, score and thumbnail. npm install -g phantomjs Here is the script, save it to imdb.js var system = require('system'); var page = require('webpage').create(); page.open('http://www.imdb.com/company/co0026841/?ref_=fn_al_co_1', function() { console.log('Fetching movies list'); var movies = page.evaluate(function() { var list = $('ol li'); var json = [] $.each(list, function(index, listItem) { var link = $(listItem).find('a'); json.push({link: 'http://www.imdb.com' + link.attr('href')}); }); return json; }); page.close(); console.log('Found ' + movies.length + ' movies'); fetchMovies(movies, 0); }); function fetchMovies(movies, index) { if (index == movies.length) { console.log('Done'); console.log('Generating HTML'); genHtml(movies); phantom.exit(); return; } var movie = movies[index]; console.log('Requesting data for '+ movie.link); var page = require('webpage').create(); page.open(movie.link, function() { console.log('Fetching data'); var data = page.evaluate(function() { var title = $('.title_wrapper h1').text().trim(); var summary = $('.summary_text').text().trim(); var rating = $('.ratingValue strong').attr('title'); var thumb = $('.poster img').attr('src'); if (title == undefined || thumb == undefined) { return null; } return { title: title, summary: summary, rating: rating, thumb: thumb }; }); if (data != null) { movie.title = data.title; movie.summary = data.summary; movie.rating = data.rating; movie.thumb = data.thumb; console.log(movie.title) console.log('Request complete'); } else { movies.slice(index, 1); index -= 1; console.log('No data found'); } page.close(); fetchMovies(movies, index + 1); }); } function genHtml(movies) { var fs = require('fs'); var path = 'movies.html'; var content = Array(); movies.forEach(function(movie) { var section = ''; section += '<div>'; section += '<h3>'+movie.title+'</h3>'; section += '<p>'+movie.summary+'</p>'; section += 
'<p>'+movie.rating+'</p>'; section += '<img src="'+movie.thumb+'">'; section += '</div>'; content.push(section); }); var html = '<html>'+content.join('\n')+'</html>'; fs.write(path, html, 'w'); } And run it like so phantomjs imdb.js A: $Title = $($Movie.Name) $searchTitle = $Title.Replace(' ','%20') $moviesearch = Invoke-WebRequest "http://www.imdb.com/search/title?title=$searchTitle&title_type=feature" $titleclassarray = $moviesearch.AllElements | where Class -eq 'loadlate' | select -First 1 $MoviePoster = $titleclassarray.loadlate A: Now a days, all modern browser have "Inspect" section: 100% Correct for Google Chrome only: * *Take your cursor on image. *Right click on it, select "Inspect Element". *In the window appear, under Elements tab you will find the highlighted text as *Just click on it. *In the Resource tab, right click on image. *Select "Copy image URL" option. Try to paste it any where as URL in any browser, you will only get the image.
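The URL-trimming trick from one of the answers above (drop everything between the final @ and .jpg to get the full-resolution image) is easy to automate; a small sketch:

```python
import re

def full_res(poster_url):
    """Strip the size/crop modifiers that follow the last '@' in an
    IMDb/Amazon image URL, leaving the highest-resolution variant."""
    return re.sub(r"@[^@]*\.jpg$", "@.jpg", poster_url)

thumb = ("https://m.media-amazon.com/images/M/"
         "MV5BMjAwODg3OTAxMl5BMl5BanBnXkFtZTcwMjg2NjYyMw@@"
         "._V1_UX182_CR0,0,182,268_AL_.jpg")
print(full_res(thumb))
# https://m.media-amazon.com/images/M/MV5BMjAwODg3OTAxMl5BMl5BanBnXkFtZTcwMjg2NjYyMw@@.jpg
```

A URL that already ends in @@.jpg passes through unchanged, so the function is safe to apply twice.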
{ "language": "en", "url": "https://stackoverflow.com/questions/151272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: What combination do you use for your polyglot solution? Those of us who use multiple languages to solve problems can combine them in a lot of ways. Personally I use PL/SQL, XSLT, JavaScript, and Java plus the pseudo languages HTML, XML, CSS, Ant, and Bash. What do you use? A: Paraphrasing one of my favorite quotes: Always write your code as if it were going to be maintained by a homicidal maniac that knows your home address. A: I have a D/MySQL/JavaScript[1]/HTML/CPP[2] app. [1] compile time D template generated [2] C pre-processor used to generate apache configs and SQL sprocs Yes, I am trying to take things to the insane! ;) A: I work on a desktop application, so my alphabet soup looks like: C# and C++ as well as XML and T-SQL. A: Java + Clojure works very well as a combination for me. * *Java is good for the low level code that needs to be well optimized. It also gives you access to the huge array of libraries in the Java ecosystem. *Clojure is great for rapid development of higher level code, working interactively in a REPL. It has great support for meta-programming and concurrency, and I often use Clojure to "glue together" Java based components into a working application. It helps enormously that Java and Clojure run in the same JVM - calling between the two is very easy and has effectively zero performance overhead.
{ "language": "en", "url": "https://stackoverflow.com/questions/151290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Can I use "System.Currency" in .NET? Is it possible to use System.Currency? It says System.Currency is inaccessible due to its protection level. What is the alternative to Currency? A: Use Decimal. The functionality that Currency provided is covered by Decimal, plus the static conversion methods Decimal.FromOACurrency and Decimal.ToOACurrency. A: It may be possible via reflection, but the reason it's there is for the FromOACurrency() and ToOACurrency() static methods on System.Decimal, which are for converting from/to the OLE Automation Currency type that Visual Basic 6 uses. A: You have to use the Decimal data type. The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. A: You can't blindly use Decimal for currency. You'll face bigger problems later on when you divide. Say you have $1 split into 3, which is 1/3 = 0.33 (rounded), but 3 x 0.33 = 0.99 != 1. It might be small, but when you do that in accounting and stack up your sheets, it will become a huge figure. That's also why the default midpoint rounding behavior in .NET (not sure about other programming languages) is to the nearest even value, also called bankers' rounding: it minimizes error in accounting compared to our normal "human" way of rounding. Read this page for a clearer explanation and a special class to handle money. Code Project: A Money type for the CLR. Also about Rounding
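The 1/3 split and the round-half-to-even behavior described in the last answer are easy to see in any language with a decimal type; here is a sketch in Python rather than C#, since Python's decimal module defaults to the same bankers' rounding that .NET's Math.Round uses:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# The 1/3 problem from the answer: three rounded shares of $1 lose a cent.
share = (Decimal("1.00") / 3).quantize(Decimal("0.01"))
print(share, share * 3)   # 0.33 0.99 -- not the original 1.00

# Bankers' rounding: an exact half rounds to the nearest *even* digit,
# so rounding errors don't all accumulate in one direction.
print(Decimal("0.125").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 0.12
print(Decimal("0.135").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 0.14
```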
{ "language": "en", "url": "https://stackoverflow.com/questions/151291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is there a CLR that runs on the CLR? I was wondering if there was a .NET-compatible CLR that was implemented using the CLI (common language infrastructure), e.g., using .NET itself, or at least if there were any resources that would help with building one. Basically, something like a .NET program that loads assemblies as MemoryStreams, parses the bytecode, constructs the types, and executes the instructions. Optionally, it can JIT-compile to standard IL using Reflection.Emit or however. I don't want to compile a .NET language to be run by the original CLR. I want a CLR that's written in a .NET language (not unmanaged C++ or C as it usually is) and runs CIL. If done right, it should be able to run itself. Any thoughts on using Mono.Cecil for this kind of thing? A: I don't think there are currently any standalone .net VMs that are self-hosting, but both Cosmos and SharpOS are .net runtimes written in C#. It may be possible to reuse some of their runtime code to extract a standalone runtime. Cosmos can be used to host a custom application on boot: http://www.codeproject.com/KB/system/CosmosIntro.aspx A: You should check out the IKVM.NET Project. It includes a Java Virtual Machine written in .NET. http://www.ikvm.net/ I know it's not an actual CLR that runs on top of the CLR, but it's the closest thing I know of that does what you want. A: I am not aware of one, but ideas from JVMs running on the JVM should be helpful. * *Jikes RVM *Maxine VM
Currently, there is quite a bit left to do, but it is possible to JIT-compile and run simple methods (Win32 only -- we currently use P/Invoke to create the native code buffers) A: It is possible in principle by combining technologies: * *Jikes RVM is a Java Virtual Machine implementation written in Java. *IKVM.NET, an implementation of the Java platform on .NET. It might also be possible to take Mono, compile to LLVM bytecode, compile the bytecode to Javascript using Emscripten, and run the Javascript on .NET using any of various interpreters. A: Look at the System.Reflection.Emit namespace, specifically the ILGenerator class. You can emit IL on the fly. http://msdn.microsoft.com/en-us/library/system.reflection.emit.ilgenerator_members.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/151298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Embedding SVN Revision number at compile time in a Windows app I'd like my .exe to have access to a resource string with my svn version. I can type this in by hand, but I'd prefer an automated way to embed this at compile time. Is there any such capability in Visual Studio 2008? A: You can get SVN to embed it for you, if that will solve the problem. See the $Rev$ keyword on that page. A: Have a look at svn keyword substitution here. There is another SO question here which I found through Google! A: antik's solution is the one we use. Be careful of using environment variables; the .h file ensures you can have a dependency which will cause any files that need it to be recompiled when the svn rev number changes. A: I wanted a similar availability and found $Rev$ to be insufficient because it was only updated for a file if that file's revision was changed (which meant it would have to be edited and committed every time: not something I wanted to do.) Instead, I wanted something that was based on the repository's revision number. For the project I'm working on now, I wrote a Perl script that runs svnversion -n from the top-most directory of my working copy and outputs the most recent revision information to a .h file (I actually compare it to a saved revision in a non-versioned file in my working copy so that I'm not overwriting current revision information at every compile, but whether you choose to do so is up to you.) This .h file (or a number of files if necessary, depending on your approach) is referenced both in my application code and in the resource files to get the information where I'd like it. This script is run as a pre-build step so that everything is up-to-date before the build kicks off and the appropriate files are automatically rebuilt by your build tool. A: How about using SubWCRev, the command-line tool that ships with TortoiseSVN? You create a template file with tokens in it like $WCREV$, $WCDATE$ etc. 
Then have a pre-build step that runs SubWCRev on your template file to create the actual source file that is fed to the compiler.
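Such a template might look like this (the file names are illustrative; $WCREV$ and $WCDATE$ are SubWCRev's own replacement tokens):

```
/* version.h.tmpl -- checked in; SubWCRev generates version.h from it */
#define SVN_REVISION   $WCREV$
#define SVN_BUILD_DATE "$WCDATE$"
```

with a pre-build event along the lines of `SubWCRev.exe . version.h.tmpl version.h` run from the working-copy root; the generated version.h can then be included from both your source files and your .rc resource script.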
{ "language": "en", "url": "https://stackoverflow.com/questions/151299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: AddHandler/RemoveHandler Not Disposing Correctly Using the AddHandler method, if I never use RemoveHandler, will that lead to memory leaks in some conditions and situations? I'm not so sure about the truth of this. And are there other causes of memory leaks that are solely available in VB as opposed to C#? A: Well, usually it doesn't... but the possibility exists. When you subscribe to an event, you basically give a delegate (a func pointer, if you will) to your method to the event publisher, who holds on to it as long as you do not unsubscribe with the -= operator. So take, for example, the case where you spawn a child form and the form subscribes to the button's Click event on the form. button1.Click += new EventHandler(Form_Click_Handler); Now the button object will hold on to the form reference. When the form is closed/disposed/set to null, both form and button are not needed anymore; memory is reclaimed. The trouble happens when you have a global structure or object which has a longer lifetime. Let's say the Application object maintains a list of open child windows. So whenever a child form is created, the application object subscribes to a Form event so that it can keep tabs on it. In this case, even when the form is closed/disposed, the application object keeps it alive (a non-garbage object holds a ref to the form) and doesn't allow its memory to be reclaimed. As you keep creating and closing windows, you have a leak, with your app hogging more and more memory. Hence you need to explicitly unsubscribe to remove the form reference from the application. childForm.Event -= new EventHandler(Form_Handler) So it's recommended that you have an unsubscribe block (-=) complementing your subscribe routine (+=)... however, you could manage without it for the stock scenarios. A: If object a is subscribed to object b's event, then object a will not be collected until object b is collected. An event subscription counts as a reference held by the publisher to the subscriber. 
And yes, this happens in C# too; it has nothing to do with the language.
{ "language": "en", "url": "https://stackoverflow.com/questions/151303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: VB.Net - how to support implicit type conversion as well as custom equality Fixed: See notes at bottom I am implementing a generic class that supports two features, implicit type conversion and custom equality operators. Well, it supports IN-equality as well, if it does that. 1) if ( "value" = myInstance ) then ... 2) Dim s As String = myInstance 3) Dim s As String = CType(myInstance,String) The problem I am having is that if I support #2, implicit conversion, then I can't get my equality operators to work, since they complain about no conversion being the most specific. The error I get is this (simplified a bit for brevity): Overload resolution failed because no accessible '=' is most specific for these arguments: 'Public Shared Operator =(obj As MyClass, data As String) As Boolean': Not most specific. 'Public Shared Operator =(data As String, obj As MyClass) As Boolean': Not most specific. 'Public Shared Operator =(obj1 As MyClass, obj2 As MyClass) As Boolean': Not most specific. What is the best way of implementing this. Just as importantly, what should I leave out? I have implemented the following conversions Operator =(ByVal data As String, ByVal obj As classType) As Boolean (and <>) Operator =(ByVal obj As classType, byval data As String) As Boolean (and <>) Operator =(ByVal obj1 As classType, ByVal obj2 As classType) As Boolean (and <>) Equals(obj as Object) as Boolean Equals(compareTo as classType ) as Boolean Equals(compareTo as String) as Boolean Widening Operator CType(ByVal source As String) As classType Widening Operator CType(ByVal source As classType) as String Narrowing Operator CType(ByVal inst As classType) As dataType In my widening operator I do some reflection, which is why I wanted to be able to do an implicit convert DOWN to String when I do a comparison or assignment with the string on the left side. 
A) SomeObject.StringPropertySetter = MyClass Fix (edit) I went way overboard in what I implemented, because I didn't understand what was happening. Comparison between the base types (i.e. string/double/guid) takes place via the widening CType(...) as String (or Guid, etc.) operator. In the end, I just implemented these functions, and all my test cases still pass, in addition to assignment from the class to a base type instance Public Class MyClass(Of BaseType) Widening Operator CType(ByVal source As dataType) As MyClass Widening Operator CType(ByVal source As MyClass) As dataType //conv between inst & base Equals() // for datatype, classType, object Operator <>(MyClass,MyClass) // for comparison between two instances Operator =(MyClass,MyClass) comments are c style, but code is vb.net Of course the class is a little more complicated than that, but that gives me everything I needed :) A: You should not override the = operator. If you have implicit conversions to types such as string or int, then let the default equality operator take over. As a general rule, if you need to customize equality for a class you should override the Equals(object) method.
{ "language": "en", "url": "https://stackoverflow.com/questions/151318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Sharing Files between VM and Host using Virtual PC 2007 I know that I can share files using Shared Folders in Virtual PC, but this method seems to have pretty poor performance. Is there another method to share files that provides better performance? (Besides using something other than Virtual PC) A: The best way to do it is probably set up proper bridge network connection between host machine and VM. A: Using VirtualBox, I had problems setting up shared folders (I tried setting it up, and it wasn't working intuitively right away, so I got fed up with it). Thus, I just ftp'ed to the host OS (which I already had set up since I was on Linux), and transfered the file that way. I would suggest timing transferring a reasonably sized file via shared folders, and then time it again using FTP... if it's faster, that's your solution :-) Sorry I can't give actual performance metrics on that!
{ "language": "en", "url": "https://stackoverflow.com/questions/151327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .NET Winforms Deployment Is there any way to combine all resources into a single exe file, such as app.config and associated DLLs? Some applications seem to do this, such as eMule. I don't want my app.config sitting there waiting to be edited. Thanks A: Certainly. In the Solution Explorer (assuming Visual Studio here, since you don't mention), right-click and open Properties of the file(s) you want included. There should be an option there for Build Action which you can set to Embedded Resource. A: You can of course embed resources. Go to the application properties and select the "Resources" tab. All resources added in there will be in the main binary. Why not have app.config sitting there waiting to be edited? Many professional software packages have configuration and ini files freely there to be edited. A: Dude, that's why it's a config file! It's supposed to allow you to change the way an app works on the fly! If you are concerned about your settings, which shouldn't be altered, then try another storage: a class, database, registry, flat file, etc., or just keep a replica somewhere which can be used to replace the screwed-up one. A: Merging dlls - ILMerge Merging the config file is not worth it, since it is supposed to be a way to tweak the app behavior without recompiling it. If you do not need that - just hardcode everything (either in the code or as EmbeddedResources). If you do still need configurability, you can hide the file in the user profile. See, for example, http://www.codeproject.com/KB/cs/SystemConfiguration.aspx A: I have used Thinstall as an application virtualization shrinkwrapper before: https://thinstall.com/help/index.php?_netsupport.htm This does what you want, i.e. bundles all your app's dependencies into one executable, including the .configs. You would also do well researching other software shrink-wrap tools. A: If you don't want settings to be changed, move them into the code rather than config. A: I agree with some of the users. 
It defeats the purpose of "config", really. Just hard-code all the info in a shared class called "settings" and then reference it like _serverIP = settings.MailServerIP The only items which should be considered are helper files (which relate to something, per se), images, 3rd party dlls (I am not sure of this, though)... to name a few. A: The problem with the .net app.config files is that modifying them can change the way an application works. Embedding resources is not a problem; it's that particular file which I'm worried about.
{ "language": "en", "url": "https://stackoverflow.com/questions/151335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get a reference to the currently focused form field in JavaScript? I'm looking for a cross-browser method - I know IE has something (I've already forgotten what), and the way to do it in Mozilla may have to do with a focusNode thing I found, that seems related to getting text selections. Methods involving jQuery or another common JS library are fine by me. Thanks! A: Check out the extra selectors plugin for jQuery, it includes a :focus selector that answers your need. You can use just the implementation of that selector if you don't want the rest. A: OK then, so use jQuery. There is no current, available way to just ask this. You need to track the focus events when they happen, so this sample (thanks to Karl Rudd here) does that across all elements. This is for inputs but you can adjust the selector to fit your needs, even across the entire DOM.

var currentFocus = null;
$(':input').focus( function() {
    currentFocus = this;
}).blur( function() {
    currentFocus = null;
});
{ "language": "en", "url": "https://stackoverflow.com/questions/151337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Adding an instance variable to a class in Ruby How can I add an instance variable to a defined class at runtime, and later get and set its value from outside of the class? I'm looking for a metaprogramming solution that allows me to modify the class instance at runtime instead of modifying the source code that originally defined the class. A few of the solutions explain how to declare instance variables in the class definitions, but that is not what I am asking about. A: Ruby provides methods for this, instance_variable_get and instance_variable_set. (docs) You can create and assign a new instance variable like this:

>> foo = Object.new
=> #<Object:0x2aaaaaacc400>
>> foo.instance_variable_set(:@bar, "baz")
=> "baz"
>> foo.inspect
=> #<Object:0x2aaaaaacc400 @bar=\"baz\">

A: Mike Stone's answer is already quite comprehensive, but I'd like to add a little detail. You can modify your class at any moment, even after some instances have been created, and get the results you desire. You can try it out in your console:

s1 = 'string 1'
s2 = 'string 2'

class String
  attr_accessor :my_var
end

s1.my_var = 'comment #1'
s2.my_var = 'comment 2'

puts s1.my_var, s2.my_var

A: The other solutions will work perfectly too, but here is an example using define_method, if you are hell bent on not using open classes... it will define the "var" variable for the array class... but note that it is EQUIVALENT to using an open class... the benefit is you can do it for an unknown class (so any object's class, rather than opening a specific class)... also define_method will work inside a method, whereas you cannot open a class within a method.

array = []
array.class.send(:define_method, :var) { @var }
array.class.send(:define_method, :var=) { |value| @var = value }

And here is an example of its use... note that array2, a DIFFERENT array, also has the methods, so if this is not what you want, you probably want singleton methods, which I explained in another post.
irb(main):001:0> array = []
=> []
irb(main):002:0> array.class.send(:define_method, :var) { @var }
=> #<Proc:0x00007f289ccb62b0@(irb):2>
irb(main):003:0> array.class.send(:define_method, :var=) { |value| @var = value }
=> #<Proc:0x00007f289cc9fa88@(irb):3>
irb(main):004:0> array.var = 123
=> 123
irb(main):005:0> array.var
=> 123
irb(main):006:0> array2 = []
=> []
irb(main):007:0> array2.var = 321
=> 321
irb(main):008:0> array2.var
=> 321
irb(main):009:0> array.var
=> 123

A: You can use attribute accessors:

class Array
  attr_accessor :var
end

Now you can access it via:

array = []
array.var = 123
puts array.var

Note that you can also use attr_reader or attr_writer to define just getters or setters, or you can define them manually as such:

class Array
  attr_reader :getter_only_method
  attr_writer :setter_only_method

  # Manual definitions equivalent to using attr_reader/writer/accessor
  def var
    @var
  end

  def var=(value)
    @var = value
  end
end

You can also use singleton methods if you just want it defined on a single instance:

array = []

def array.var
  @var
end

def array.var=(value)
  @var = value
end

array.var = 123
puts array.var

FYI, in response to the comment on this answer, the singleton method works fine, and the following is proof:

irb(main):001:0> class A
irb(main):002:1> attr_accessor :b
irb(main):003:1> end
=> nil
irb(main):004:0> a = A.new
=> #<A:0x7fbb4b0efe58>
irb(main):005:0> a.b = 1
=> 1
irb(main):006:0> a.b
=> 1
irb(main):007:0> def a.setit=(value)
irb(main):008:1> @b = value
irb(main):009:1> end
=> nil
irb(main):010:0> a.setit = 2
=> 2
irb(main):011:0> a.b
=> 2
irb(main):012:0>

As you can see, the singleton method setit will set the same field, @b, as the one defined using the attr_accessor... so a singleton method is a perfectly valid approach to this question. A: @Readonly If your usage of "class MyObject" is a usage of an open class, then please note you are redefining the initialize method. In Ruby, there is no such thing as overloading...
only overriding, or redefinition... in other words there can only be 1 instance of any given method, so if you redefine it, it is redefined... and the initialize method is no different (even though it is what the new method of Class objects use). Thus, never redefine an existing method without aliasing it first... at least if you want access to the original definition. And redefining the initialize method of an unknown class may be quite risky. At any rate, I think I have a much simpler solution for you, which uses the actual metaclass to define singleton methods:

m = MyObject.new
metaclass = class << m; self; end
metaclass.send :attr_accessor, :first, :second
m.first = "first"
m.second = "second"
puts m.first, m.second

You can use both the metaclass and open classes to get even trickier and do something like:

class MyObject
  def metaclass
    class << self
      self
    end
  end

  def define_attributes(hash)
    hash.each_pair { |key, value|
      metaclass.send :attr_accessor, key
      send "#{key}=".to_sym, value
    }
  end
end

m = MyObject.new
m.define_attributes({ :first => "first", :second => "second" })

The above is basically exposing the metaclass via the "metaclass" method, then using it in define_attributes to dynamically define a bunch of attributes with attr_accessor, and then invoking the attribute setter afterwards with the associated value in the hash. With Ruby you can get creative and do the same thing many different ways ;-) FYI, in case you didn't know, using the metaclass as I have done means you are only acting on the given instance of the object. Thus, invoking define_attributes will only define those attributes for that particular instance. Example:

m1 = MyObject.new
m2 = MyObject.new
m1.define_attributes({:a => 123, :b => 321})
m2.define_attributes({:c => "abc", :d => "zxy"})
puts m1.a, m1.b, m2.c, m2.d # this will work
m1.c = 5 # this will fail because c= is not defined on m1!
m2.a = 5 # this will fail because a= is not defined on m2!
A: Readonly, in response to your edit: Edit: It looks like I need to clarify that I'm looking for a metaprogramming solution that allows me to modify the class instance at runtime instead of modifying the source code that originally defined the class. A few of the solutions explain how to declare instance variables in the class definitions, but that is not what I am asking about. Sorry for the confusion. I think you don't quite understand the concept of "open classes", which means you can open up a class at any time. For example:

class A
  def hello
    print "hello "
  end
end

class A
  def world
    puts "world!"
  end
end

a = A.new
a.hello
a.world

The above is perfectly valid Ruby code, and the 2 class definitions can be spread across multiple Ruby files. You could use the "define_method" method in the Module object to define a new method on a class instance, but it is equivalent to using open classes. "Open classes" in Ruby means you can redefine ANY class at ANY point in time... which means add new methods, redefine existing methods, or whatever you want really. It sounds like the "open class" solution really is what you are looking for... A: I wrote a gem for this some time ago. It's called "Flexible" and not available via rubygems, but was available via github until yesterday. I deleted it because it was useless for me. You can do

class Foo
  include Flexible
end

f = Foo.new
f.bar = 1

with it without getting any error. So you can set and get instance variables from an object on the fly. If you are interested... I could upload the source code to github again. It needs some modification to enable f.bar? #=> true as a method for asking the object if an instance variable "bar" is defined or not, but everything else works. Kind regards, musicmatze A: It looks like all of the previous answers assume that you know what the name of the class that you want to tweak is when you are writing your code. Well, that isn't always true (at least, not for me).
I might be iterating over a pile of classes that I want to bestow some variable on (say, to hold some metadata or something). In that case something like this will do the job:

# example classes that we want to tweak
class Foo;end
class Bar;end

klasses = [Foo, Bar]

# iterating over a collection of klasses
klasses.each do |klass|
  # #class_eval gets it done
  klass.class_eval do
    attr_accessor :baz
  end
end

# it works
f = Foo.new
f.baz # => nil
f.baz = 'it works' # => "it works"
b = Bar.new
b.baz # => nil
b.baz = 'it still works' # => "it still works"
{ "language": "en", "url": "https://stackoverflow.com/questions/151338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: How to check if an object is an instance of a NodeList in IE? Why is NodeList undefined in IE6/7?

<form action="/" method="post" id="testform">
  <input type="checkbox" name="foobar[]" value="1" id="" />
  <input type="checkbox" name="foobar[]" value="2" id="" />
  <input type="checkbox" name="foobar[]" value="3" id="" />
</form>

<script type="text/javascript" charset="utf-8">
  (function () {
    var el = document.getElementById('testform')['foobar[]']
    if (el instanceof NodeList) {
      alert("I'm a NodeList");
    }
  })();
</script>

This works in FF3/Safari 3.1 but doesn't work in IE6/7. Anyone have any ideas how to check if el is an instance of NodeList across all browsers? A: Adam Franco's answer almost works. Unfortunately, typeof el.item returns different things in different versions of IE (7: string, 8: object, 9: function). So I am using his code, but I changed the line to typeof el.item !== "undefined" and changed == to === throughout.

if (typeof el.length === 'number'
    && typeof el.item !== 'undefined'
    && typeof el.nextNode === 'function'
    && typeof el.reset === 'function') {
  alert("I'm a NodeList");
}

A: "Duck Typing" should always work:

...
if (typeof el.length == 'number'
    && typeof el.item == 'function'
    && typeof el.nextNode == 'function'
    && typeof el.reset == 'function') {
  alert("I'm a NodeList");
}

A: I would just use something that always evaluates to a certain type. Then you just do a true/false type check to see if you got a valid object. In your case, I would get a reference to the select item like you are now, and then use its getOptions() method to get an HTMLCollection that represents the options. This object type is very similar to a NodeList, so you should have no problem working with it. A: With jQuery:

if (1 < $(el).length) {
  alert("I'm a NodeList");
}
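The duck-typing probes above can be collected into one reusable helper. This is a sketch: the property set it tests (length, item, nextNode, reset) is taken from the answers above and is specific to IE-era NodeLists, and the function name is made up for illustration.

```javascript
// Sketch of a duck-typing check for NodeList-like objects.
// The probed members follow the answers above; because typeof el.item
// varies across IE versions (string/object/function), we only require
// that each member is present at all.
function isNodeListLike(el) {
  if (el == null) return false; // rejects both null and undefined
  return typeof el.length === 'number' &&
    typeof el.item !== 'undefined' &&
    typeof el.nextNode !== 'undefined' &&
    typeof el.reset !== 'undefined';
}
```

Any object carrying those four members will pass, so as with all duck typing this trades precision for portability.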
{ "language": "en", "url": "https://stackoverflow.com/questions/151348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: IDE's for C# development on Linux? What are my options? I tried MonoDevelop over a year ago but it was extremely buggy. Is the latest version a stable development environment? A: Monodevelop There are 2 versions around: * *1.0: the currently stable version. It is indeed stable, but somewhat limited in its capabilities. It's very good for smallish projects. I got it via the ubuntu hardy repos. *2.0RC (aka 1.9.x): you can get it via SVN and compile it yourself. The process is quite straightforward, and you can run it without installing (via make run). It's somewhat less stable than 1.0, but it depends on which build you get (it's a development snapshot). Regarding capabilities, it is great. It has refactoring, profiling, tons of plugins, etc. A: I would recommend X-develop from Omnicore. It is a very good IDE, but is only free to use for 30 days. A: MonoDevelop 2.0 has been released; it now has a decent GUI Debugger, code completion, Intellisense C# 3.0 support (including linq), and a decent GTK# Visual Designer. In short, since the 2.0 release I have started using Mono Develop again and am very happy with it so far. Check out the MonoDevelop website for more info. A: There is a C# binding for Eclipse, though I haven't tried it personally, so I can't vouch for it. I use MonoDevelop, which isn't perfect, but works reasonably well for the most part. The version included in Ubuntu 8.04 (Hardy Heron) is much more stable than the Gutsy Gibbon version. A: I've been using JetBrains Rider for quite a while and I quite like it. It has all the ReSharper goodness and is a joy to use on OS/X or Linux. Beware that it is still in the Early Access Program, so it has a few rough edges here and there, but most of the time it works well enough for day-to-day usage. You can get it here: https://www.jetbrains.com/rider/download/ P.S. I mostly use it for .NET Core development needs, but have used it for traditional .NET coding as well. A: I used MonoDevelop a while ago, and it was fine.
It's not anywhere near as good as Eclipse or NetBeans are for Java development, but those are really in a class of their own. And I think the only real alternative is using emacs or vim... It's fairly polished. Stability really wasn't an issue. Simple code-completion is there, as is jumping to declaration, super-class and the extremely useful find references. Debugging isn't there, though, which is a fairly glaring omission. I actually spent a couple of minutes trying to set up a breakpoint until it dawned on me that there isn't even a way to "Debug..." instead of "Run..." A: Have you looked at SlickEdit? I thought it was pretty good several years ago when I was developing C++ apps on Linux. It says it supports C#, but I cannot comment as to how well. I was happy to use it for my C++ development, though. A: Microsoft has released Visual Studio Code for Linux, which has good C# support, naturally. A: Is the latest version a stable development environment? Probably ... it hit 1.0 this past spring.
{ "language": "en", "url": "https://stackoverflow.com/questions/151350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: "Access is denied" error on accessing iframe document object For posting AJAX forms in a form with many parameters, I am using a solution of creating an iframe, posting the form to it by POST, and then accessing the iframe's content. Specifically, I am accessing the content like this: $("some_iframe_id").get(0).contentWindow.document I tested it and it worked. On some of the pages, I started getting an "Access is denied" error. As far as I know, this shouldn't happen if the iframe is served from the same domain. I'm pretty sure it was working before. Anybody have a clue? If I'm not being clear enough: I'm posting to the same domain. So this is not a cross-domain request. I am testing on IE only. P.S. I can't use simple ajax POST queries (don't ask...) A: Beware of security limitations associated with iframes, like cross-domain restrictions (aka CORS). Below are 3 common errors related to CORS: * *Load an iFrame with a different domain. (Ex: opening "www.foo.com" while top frame is "www.ooof.com") *Load an iFrame with a different port: iFrame's URL port differs from the one of the top frame. *Different protocols: loading iFrame resource via HTTPS while parent Frame uses HTTP. A: Solved it by myself! The problem was that, even though the correct response was being sent (verified with Fiddler), it was being sent with an HTTP 500 error code (instead of 200). It turns out that if a response is sent with an error code, IE replaces the content of the iframe with an error message loaded from the disk (res://ieframe.dll/http_500.htm), and that causes the cross-domain access denied error. A: My issue was the X-Frame-Options HTTP header. My Apache configuration has it set to: Header always append X-Frame-Options DENY Removing it allowed it to work. Specifically in my case I was using iframe transport for jQuery with the jQuery file upload plugin to upload files in IE 9 and IE 10.
A: I know this question is super-old, but I wanted to mention that the above answer worked for me: setting the document.domain to be the same on each of the pages-- the parent page and the iframe page. However in my search, I did find this interesting article: http://softwareas.com/cross-domain-communication-with-iframes A: Note if you have an iframe with src='javascript:void(0)' then javascript like frame.document.location =... will fail with an Access Denied error in IE. Was using a javascript library that interacts with a target frame. Even though the location it was trying to change the frame to was on the same domain as the parent, the iframe was initially set to javascript:void which triggered the cross-domain access denied error. To solve this I created a blank.html page in my site and if I need to declare an iframe in advance that will initially be blank until changed via javascript, then I point it to the blank page so that src='/content/blank.html' is in the same domain. Alternatively you could create the iframe completely through javascript so that you can set the src when it is created, but in my case I was using a library which required an iframe already be declared on the page. A: Basically, this error occurs when the document in the frame and the one outside of it have different domains. So to prevent cross-site scripting, browsers disable such execution. A: if it is a domain issue (or subdomain) such as www.foo.com sending a request to www.api.foo.com, on each page you can set document.domain to a shared parent domain (here, "foo.com") to allow for "cross-domain" permissions
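For the subdomain case in the last answer, both pages must set document.domain to a common suffix of their hostnames. A small helper can compute a candidate value; this is a hypothetical sketch (the function is not from any answer, and it is naive: it knows nothing about public suffixes such as "co.uk", so check the result before relying on it):

```javascript
// Sketch: longest shared domain suffix of two hostnames, as a candidate
// value for document.domain on both pages. Compares labels from the
// right (TLD first) and stops at the first mismatch.
function sharedParentDomain(hostA, hostB) {
  var a = hostA.split('.').reverse();
  var b = hostB.split('.').reverse();
  var common = [];
  for (var i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] !== b[i]) break;
    common.push(a[i]);
  }
  return common.reverse().join('.');
}

// e.g. both www.foo.com and api.foo.com could then run:
// document.domain = sharedParentDomain('www.foo.com', 'api.foo.com');
```

An empty result means the hosts share no suffix at all, in which case document.domain cannot help.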
{ "language": "en", "url": "https://stackoverflow.com/questions/151362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Tools for refactoring table-based HTML layouts to CSS? Given an HTML page that has a complex table-based layout and many tags that are duplicated and wasteful, e.g.: td align="left" class="tableformat" width="65%" style="border-bottom:1px solid #ff9600; border-right:1px solid #ff9600; background-color:#FDD69E" nowrap etc. Are there tools to aid the task of refactoring the page into a more compact form? For instance, a tool that automatically generates CSS styles and selectors? That converts tables into div layouts? Just to give a sense of the order of the problem, the page I'm looking at is >8000 lines of HTML and JavaScript, which is 500Kb not counting images! Update: In re. "give up and start from scratch" comments. What does that mean, in the real world? Print out the page, scan it, set it as the background image in Dreamweaver, and start with that? Seriously? Would that really be more efficient than refactoring? Update: I'm not denigrating "trace it from scratch" nor did I mean to imply that Dreamweaver is by any means my tool of choice. I'm just very surprised that refactoring a layout is considered to be an intractable problem. A: I'm not aware of specific tools, only the generic ones of caffeine and Firebug, which anyone doing CSS work should be aware of. I think that the problem is sufficiently hard that automated tools would struggle to produce good, maintainable markup and CSS. A: It seems as if you are looking for more automated means of re-factoring an old table-based layout to CSS standards. However, I agree with some of the other comments to "start from scratch". What this means to you is that you should try to rebuild (using CSS) the look that was achieved using an HTML table. If this concept escapes you, then I would suggest Googling for some beginner CSS tutorials, maybe even some focusing on teaching the concept of Table->CSS layouts. Another tool to consider (that could possibly aid you even further) would be some sort of CSS "framework".
I recommend Blueprint CSS for this, as it helps in the creation of grid/table-like layouts with minimal effort. Another good one is Yet-Another-Multicolumn-Layout, which has a really neat multi-column layout builder. A: I agree with TimB in that automated tools are going to have trouble doing this, in particular making the relational jumps to combine and abstract CSS in the most efficient way. If you are presenting tabular data, it may be reasonable to attempt to refactor the inline CSS to reusable classes. If you have a lot of similar tables with inline styles you can gradually refactor the CSS by simple search and replace. This will give you lots of classes that match a subset of similar tables and lots of somewhat similar classes. Breaking this up into layout and presentation would be a good start, then overriding these with specific classes for each theme or semantically related item. I would still recommend starting from scratch, it's probably going to be quicker, and you can recreate only what you need to present the page, and can reuse elements or collections of elements at a later date. The time spent will also pay off significantly if the page is ever needed to be modified again. But that's not at all likely is it? :D A: Don't just throw it in dreamweaver or whatever tool of choice and slice it up. Write the HTML first, in purely semantic style. eg, a very common layout would end up being:

<body>
  <div id="header">
    <img id="logo"/>
    <h1 id="title"> My Site </h1>
    <div id="subtitle">Playing with css</div>
  </div>
  <div id="content">
    <h2>Page 1</h2>
    <p>Blah blah blah..</p>
  </div>
  <div id="menu">
    <ul>
      <li><a>Some link</a></li>
      ...
    </ul>
  </div>
</body>

Do the HTML in a way that makes sense with your content. It should be usable if you have no CSS at all. After that, add in the CSS to make it look the way you want.
With the state of browsers/CSS now, you still probably have to add some decorators to the HTML - eg, wrap some things in extra divs, just to be able to get the content the way you want. Once you're done though, you'll have a very flexible layout that can be modified easily with CSS, and you'll have a page that is accessible, to both disabled people and search engines. It does take quite a bit of learning before you get the knack of it, but it is worthwhile. Resist the temptation to give up and go back to table based layouts, and just work your way through. Everything can be done with semantic HTML and CSS. A: You denigrate this approach in your question, but I'd recommend taking a screen shot of your page in the browser whose rendering you like the best, declare that to be your reference, and start trying to recreate it. It's easier than you think. I've had to take skanky old table-based layouts and turn them into CMS templates done with modern techniques and it's not that bad a job. A: I am tackling a similar problem at the moment, not quite as messy as this sounds, but the product of really bad asp.net web forms, overblown view state, and deeply nested server controls that format search results from a database. Resulting in ~300 - 400K of markup for 50 DB rows - yeek. I have not found any automated tools that will do a halfway reasonable job of refactoring it. Starting with a tool like Visual Studio that you can use to reformat the code in a consistent manner helps. You can then use combinations of regexes and rectangular selection to start weeding out the junk, and stripping back the redundant markup, to start to sort out what is important and what is not, and then start defining, by hand, an efficient pattern to present the information. The trick is to break it into manageable chunks, and build it up from there.
If you have a lot of actual "tabular data" to format, and it is only a once-off, I have found Excel to be my saviour on a few occasions: paste the data into a sheet, and then use a combination of concatenate and fill to generate the markup for the tabular data. A: Starting from scratch means going back to the design drawing board. If you need to refactor such a monstrosity, then for all the time you will spend making it better, you might as well have done a full redesign. If you want to get away from duplicated and wasteful tags, you need to escape Dreamweaver. A good text editor (jedit, emacs, vim, eclipse, etc.) is really all you need. If you customize your editor properly, you won't even miss Dreamweaver. (Emacs with nXhtml and yasnippets is my favorite.)
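The spreadsheet trick mentioned above is just mechanical string concatenation, and the same idea can equally be scripted. A hypothetical JavaScript sketch (names invented for illustration) that turns rows of data into table markup:

```javascript
// Sketch: build repetitive <table> markup from row data by plain
// concatenation, the same idea as the concatenate-and-fill trick above.
function rowsToTable(rows) {
  var body = rows.map(function (row) {
    var cells = row.map(function (cell) {
      return '<td>' + cell + '</td>';
    }).join('');
    return '  <tr>' + cells + '</tr>';
  });
  return '<table>\n' + body.join('\n') + '\n</table>';
}
```

A real version would also need to escape HTML special characters in the cell values.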
{ "language": "en", "url": "https://stackoverflow.com/questions/151369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to stress-test video streaming server? Does anyone know any good tool that I can use to perform stress tests on a video streaming server? I need to test how well my server handles 5,000+ connections. A: One option is to use VLC. You can specify a url on the command line. (see here for details). You could then write a brief shell script to open up all 5000 connections. eg. the following perl script (very quick hack - check before running, might cause explosions etc.)

$i = 0;
$myurl = "udp://someurl";
@cmdline = ("/usr/bin/vlc", "");

for( $i = 1; $i <= 5000; $i++ )
{
    if( $pid = fork )
    {
        # parent - ignore
    }
    elsif( defined $pid )
    {
        $cmdline[1] = sprintf "%s:%d", $myurl, $i;
        exec(@cmdline);
    }
    # elsif - do more error checking here
}

If your video streaming server is doing multicast it should be sufficient to open sockets and make them members of your 5000 multicast groups (without necessarily doing anything with the stream. By not actually decoding the stream you will reduce performance issues on the client end). I'm not aware of any tools that will do this for you, but if you're up for writing your own utility you can start here for details. edit: The second option assumes that the OS on your client machine has multicast capability. I mention that because (from memory) the linux kernel doesn't by default, and I'd like to save you that pain. :-) Easy way to tell (again on Linux) is to check for the presence of /proc/net/igmp A: start downloading 5000+ files of the same type with different connections. Don't really need to play them, because essentially the client video player, flash, windows media player, etc. will just be doing a download. So if your server can handle 5000+ downloads you will be fine. My bet is your bandwidth gives out before your server. A: For infrastructure, you can use either a JMeter SAAS or your own Cloud server to overcome possible network issues from your injector.
To reproduce the user experience and gather meaningful metrics about it, you can use Apache JMeter + this commercial plugin, which realistically simulates player behavior without any scripting: * *Apple HTTP Live Streaming *MPEG-DASH Streaming *Smooth Video Streaming This plugin also provides the ability to simulate Adaptive Bitrate Streaming. Disclaimer: we are behind the development of this solution. A: A new plugin for JMeter has been released to help simulate an HLS scenario by using only one custom Sampler. Now, you don't need multiple HTTP Request Samplers, ForEach Controllers or RegEx PostProcessors. This makes the whole process much simpler than before. Instead, the complete logic is seamlessly encapsulated so you only have to care about the use case: the media type, playback time and network conditions. That's it! The plugin is brand new and it can be installed via the JMeter Plugins Manager. Here you can learn more about it: https://abstracta.us/blog/performance-testing/how-to-run-video-streaming-performance-tests-on-hls/ A: I am also searching for the same answer; I came across the following tool, maybe it helps someone: http://www.radview.com/Solutions/multimedia-load-testing.aspx This tool is used to test video streaming. Hope it helps someone. I will update the answer if I get a better one. Thanks. A: This HLS Analyzer software can be used for stress testing an HTTP Live Streaming server and monitoring downloading performance.
{ "language": "en", "url": "https://stackoverflow.com/questions/151392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What is a good free utility to create a self-extracting executable with an embedded file version? According to the answers to this question, I cannot embed a file version in my .msi file. The installer that I give the client needs to have a file version. So, what I want to do is create a self-extracting executable containing the msi file and the setup.exe generated by Visual Studio, and put the file version on this self-extracting executable instead. Therefore, I need a utility to create self-extracting executables which supports embedding a file version in its output. It also needs to support automatically running a file after extraction, so I can start the real installer automatically. It would be nice if it was scriptable. All I could find was this, which looks great, but I would much prefer a free alternative. Does anyone have any suggestions? Edit: To clarify, I'm not really looking to create an installer - I already have a VS setup project. I just want a self-extractor (like WinZip can create). So, the user mouses over Setup-Blorgbeard2008.exe, sees "Version: 1.0.0.0". User doubleclicks it, it silently extracts setup.exe and setup.msi to a temp folder, then runs setup.exe. User then sees normal installer screen and proceeds as normal. Another Edit: Yay, I don't need a self-extractor anymore, since my other question has now been answered. That makes this whole question pretty much irrelevant. It would still be nice to be able to distribute only one file, rather than setup.exe and setup.msi. A: NSIS can do this. Part of our build environment is a script that outputs version information to a "header" file that our NSIS script sources. You should be able to use something similar to embed your version information and you can certainly get NSIS to run a file after extraction. In fact, as NSIS creates the installer package... you may be able to simplify your approach a great deal. 
A: When you download the Windows SDK, there are MSIStuff.exe and Setup.exe, for which MS provides source code to compile. MSIStuff will "stuff" the MSI you give it into Setup.exe. Setup.exe can then be distributed. More information at http://support.microsoft.com/kb/888473 Cons: * *You'll have to recompile Setup.exe with the version of your product every time there is a new version of your product (MSI). *Not sure what license the Setup.exe source is distributed under. A: I actually ended up using NSIS for this particular release, since I needed to bundle some other installers as well. For reference, here's the script I used:

VIProductVersion "1.0.0.0" ; set version here
VIAddVersionKey "FileVersion" "1.0.0.0" ; and here!
VIAddVersionKey "CompanyName" "MyCompany"
VIAddVersionKey "LegalCopyright" "© MyCompany"
VIAddVersionKey "FileDescription" "Installer for MyProgram"

OutFile MyProgram-Setup.exe
SilentInstall silent

Section Main
  SetOutPath $TEMP
  SetOverwrite on
  File SharedManagementObjects.msi
  File SQLSysClrTypes.msi
  File Release\Setup.exe
  File Release\Setup.msi
  ExecWait 'msiexec /passive /i "$OUTDIR\SharedManagementObjects.msi"'
  ExecWait 'msiexec /passive /i "$OUTDIR\SQLSysClrTypes.msi"'
  Exec '"$OUTDIR\Setup.exe"'
SectionEnd

A: DotNetZip can produce a Self-Extracting archive that includes a version number that shows up in a Windows Explorer mouseover. The SFX also includes a product name, description, product version, and copyright that show up in a Properties/Details view in Windows Explorer. The SFX can run a command that you specify after extraction. Creation of the SFX can be scripted from powershell, or you can write a program to do it, using VB.NET, C#, VBScript or JavaScript, etc. To get the version number stuff, you need at least v1.9.0.26 of DotNetZip. A: Little bit surprised that it is not listed here yet: IExpress is a simple tool coming with Windows and can be used to create self-extracting installers.
A: Try Inno Setup: set the VersionInfoVersion directive to your binary version number, e.g. VersionInfoVersion = 1.1.0.0. This will appear in mouseover text and properties. A: NSIS looks like a good choice. A: Experiment in progress here: Pop-up version info: http://screencast.com/t/LVqvLfxCj3g From Visual Studio assembly info: http://screencast.com/t/fqunlMNh13 Installed with a plain old MSI file. By adding the "Version: 1.5.0" text into the Description property of the Setup Project, the version number also shows on the MSI file like so: http://screencast.com/t/A499i6jS I generally just rename the MSI file, like DataMonkey_1_5_0.msi, for my own purposes. A: FYI, using a DotNetZip self-extractor does not make sense if you are using the bootstrapper setup.exe to verify .NET is installed (the DotNetZip self-extractor requires .NET 2.0).
{ "language": "en", "url": "https://stackoverflow.com/questions/151403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to get an X11 Window from a Process ID? Under Linux, my C++ application is using fork() and execv() to launch multiple instances of OpenOffice so as to view some powerpoint slide shows. This part works. Next I want to be able to move the OpenOffice windows to specific locations on the display. I can do that with the XMoveResizeWindow() function but I need to find the Window for each instance. I have the process ID of each instance, how can I find the X11 Window from that ? UPDATE - Thanks to Andy's suggestion, I have pulled this off. I'm posting the code here to share it with the Stack Overflow community. Unfortunately Open Office does not seem to set the _NET_WM_PID property so this doesn't ultimately solve my problem but it does answer the question. // Attempt to identify a window by name or attribute. // by Adam Pierce <adam@doctort.org> #include <X11/Xlib.h> #include <X11/Xatom.h> #include <iostream> #include <list> using namespace std; class WindowsMatchingPid { public: WindowsMatchingPid(Display *display, Window wRoot, unsigned long pid) : _display(display) , _pid(pid) { // Get the PID property atom. _atomPID = XInternAtom(display, "_NET_WM_PID", True); if(_atomPID == None) { cout << "No such atom" << endl; return; } search(wRoot); } const list<Window> &result() const { return _result; } private: unsigned long _pid; Atom _atomPID; Display *_display; list<Window> _result; void search(Window w) { // Get the PID for the current Window. Atom type; int format; unsigned long nItems; unsigned long bytesAfter; unsigned char *propPID = 0; if(Success == XGetWindowProperty(_display, w, _atomPID, 0, 1, False, XA_CARDINAL, &type, &format, &nItems, &bytesAfter, &propPID)) { if(propPID != 0) { // If the PID matches, add this window to the result set. if(_pid == *((unsigned long *)propPID)) _result.push_back(w); XFree(propPID); } } // Recurse into child windows. 
Window wRoot; Window wParent; Window *wChild; unsigned nChildren; if(0 != XQueryTree(_display, w, &wRoot, &wParent, &wChild, &nChildren)) { for(unsigned i = 0; i < nChildren; i++) search(wChild[i]); } } }; int main(int argc, char **argv) { if(argc < 2) return 1; int pid = atoi(argv[1]); cout << "Searching for windows associated with PID " << pid << endl; // Start with the root window. Display *display = XOpenDisplay(0); WindowsMatchingPid match(display, XDefaultRootWindow(display), pid); // Print the result. const list<Window> &result = match.result(); for(list<Window>::const_iterator it = result.begin(); it != result.end(); it++) cout << "Window #" << (unsigned long)(*it) << endl; return 0; } A: Try installing xdotool, then: #!/bin/bash # --any and --name present only as a work-around, see: https://github.com/jordansissel/xdotool/issues/14 ids=$(xdotool search --any --pid "$1" --name "dummy") I do get a lot of ids. I use this to set a terminal window as urgent when it is done with a long command, with the program seturgent. I just loop through all the ids I get from xdotool and run seturgent on them. A: The only way I know to do this is to traverse the tree of windows until you find what you're looking for. Traversing isn't hard (just see what xwininfo -root -tree does by looking at xwininfo.c if you need an example). But how do you identify the window you are looking for? Some applications set a window property called _NET_WM_PID. I believe that OpenOffice is one of the applications that sets that property (as do most Gnome apps), so you're in luck. A: There is no good way. The only real options I see, are: * *You could look around in the process's address space to find the connection information and window ID. *You could try to use netstat or lsof or ipcs to map the connections to the Xserver, and then (somehow! you'll need root at least) look at its connection info to find them. 
*When spawning an instance you can wait until another window is mapped, assume it's the right one, and move on. A: I took the liberty of re-implementing the OP's code using some modern C++ features. It maintains the same functionality but I think it reads a bit better. It also does not leak even if the vector insertion happens to throw. // Attempt to identify a window by name or attribute. // originally written by Adam Pierce <adam@doctort.org> // revised by Dario Pellegrini <pellegrini.dario@gmail.com> #include <X11/Xlib.h> #include <X11/Xatom.h> #include <iostream> #include <stdexcept> #include <vector> std::vector<Window> pid2windows(pid_t pid, Display* display, Window w) { struct implementation { struct FreeWrapRAII { void * data; FreeWrapRAII(void * data): data(data) {} ~FreeWrapRAII(){ XFree(data); } }; std::vector<Window> result; pid_t pid; Display* display; Atom atomPID; implementation(pid_t pid, Display* display): pid(pid), display(display) { // Get the PID property atom atomPID = XInternAtom(display, "_NET_WM_PID", True); if(atomPID == None) { throw std::runtime_error("pid2windows: no such atom"); } } std::vector<Window> getChildren(Window w) { Window wRoot; Window wParent; Window *wChild; unsigned nChildren; std::vector<Window> children; if(0 != XQueryTree(display, w, &wRoot, &wParent, &wChild, &nChildren)) { FreeWrapRAII tmp( wChild ); children.insert(children.end(), wChild, wChild+nChildren); } return children; } void emplaceIfMatches(Window w) { // Get the PID for the given Window Atom type; int format; unsigned long nItems; unsigned long bytesAfter; unsigned char *propPID = 0; if(Success == XGetWindowProperty(display, w, atomPID, 0, 1, False, XA_CARDINAL, &type, &format, &nItems, &bytesAfter, &propPID)) { if(propPID != 0) { FreeWrapRAII tmp( propPID ); if(pid == *reinterpret_cast<pid_t*>(propPID)) { result.emplace_back(w); } } } } void recurse( Window w) { emplaceIfMatches(w); for (auto & child: getChildren(w)) { recurse(child); } } std::vector<Window> operator()( 
Window w ) { result.clear(); recurse(w); return result; } }; //back to pid2windows function return implementation{pid, display}(w); } std::vector<Window> pid2windows(const size_t pid, Display* display) { return pid2windows(pid, display, XDefaultRootWindow(display)); } int main(int argc, char **argv) { if(argc < 2) return 1; int pid = atoi(argv[1]); std::cout << "Searching for windows associated with PID " << pid << std::endl; // Start with the root window. Display *display = XOpenDisplay(0); auto res = pid2windows(pid, display); // Print the result. for( auto & w: res) { std::cout << "Window #" << static_cast<unsigned long>(w) << std::endl; } XCloseDisplay(display); return 0; } A: Check if /proc/PID/environ contains a variable called WINDOWID A: Bit late to the party. However: Back in 2004, Harald Welte posted a code snippet that wraps the XCreateWindow() call via LD_PRELOAD and stores the process id in _NET_WM_PID. This makes sure that each window created has a PID entry. http://www.mail-archive.com/devel@xfree86.org/msg05806.html A: Are you sure you have the process ID of each instance? My experience with OOo has been that trying to run a second instance of OOo merely converses with the first instance of OOo, and tells it to open the additional file. I think you're going to need to use the message-sending capabilities of X to ask it nicely for its window. I would hope that OOo documents its conversations somewhere. 
A: If you use Python, I found a way here; the idea is from BurntSushi. If you launched the application, then you should know its cmd string, with which you can reduce calls to xprop; you can always loop through all the xids and check if the pid is the same as the pid you want import subprocess import re import struct import xcffib as xcb import xcffib.xproto spref = 'soffice' # placeholder: the cmd string you launched the application with mypid = 1234 # placeholder: the pid you are looking for def get_property_value(property_reply): assert isinstance(property_reply, xcb.xproto.GetPropertyReply) if property_reply.format == 8: if 0 in property_reply.value: ret = [] s = '' for o in property_reply.value: if o == 0: ret.append(s) s = '' else: s += chr(o) else: ret = str(property_reply.value.buf()) return ret elif property_reply.format in (16, 32): return list(struct.unpack('I' * property_reply.value_len, property_reply.value.buf())) return None def getProperty(connection, ident, propertyName): propertyType = getattr(xcb.xproto.Atom, propertyName) try: return connection.core.GetProperty(False, ident, propertyType, xcb.xproto.GetPropertyType.Any, 0, 2 ** 32 - 1) except: return None c = xcb.connect() root = c.get_setup().roots[0].root _NET_CLIENT_LIST = c.core.InternAtom(True, len('_NET_CLIENT_LIST'), '_NET_CLIENT_LIST').reply().atom raw_clientlist = c.core.GetProperty(False, root, _NET_CLIENT_LIST, xcb.xproto.GetPropertyType.Any, 0, 2 ** 32 - 1).reply() clientlist = get_property_value(raw_clientlist) cookies = {} for ident in clientlist: wm_command = getProperty(c, ident, 'WM_COMMAND') cookies[ident] = (wm_command) xids=[] for ident in cookies: cmd = get_property_value(cookies[ident].reply()) if cmd and spref in cmd: xids.append(ident) for xid in xids: pid = subprocess.check_output('xprop -id %s _NET_WM_PID' % xid, shell=True) pid = re.search('(?<=\s=\s)\d+', pid).group() if int(pid) == mypid: print 'found pid:', pid break print 'your xid:', xid
{ "language": "en", "url": "https://stackoverflow.com/questions/151407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: How to solve HTTP status 405 "Method Not Allowed" when calling Web Services I've got a situation where I need to access a SOAP web service with WSE 2.0 security. I've got all the generated C# proxies (which are derived from Microsoft.Web.Services2.WebServicesClientProtocol); I'm applying the certificate, but when I call a method I get an error: System.Net.WebException : The request failed with HTTP status 405: Method Not Allowed. at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) I've done some googling and it appears that this is a server configuration issue. However this web service is used by many clients without any problem (the web service is provided by Telecom New Zealand, so it's bound to be configured correctly. I believe it's written in Java) Can anyone shed some light on this issue? A: I found this was due to WCF not being installed on IIS. The main thing is that the .svc extension has to be mapped in IIS. See MSDN here. Use the ServiceModelReg tool to complete the installation. You'll always want to verify that WCF is installed and .svc is mapped in IIS anytime you get a new machine or reinstall IIS. A: I had the same problem, but the details were different: The URL we were using didn't have the file (.asmx) part. Calling the URL in a browser was OK. It also worked in a simple client setting the URL through Visual Studio. But it didn't work setting the URL dynamically! It gave the same 405 error. Finally we found that adding the file part to the web service URL solved the problem. Maybe a .NET framework bug? A: You need to enable HTTP Activation: Go to Control Panel > Windows Features > .NET Framework 4.5 Advanced Services > WCF Services > HTTP Activation A: Ok, found what the problem was. I was trying to call a .wsdl url instead of .asmx url. 
Doh! A: Hmm, are those other clients also using C#/.NET? Method not allowed --> could this be a REST service, instead of a SOAP web service? A: MethodNotAllowed: Equivalent to HTTP status 405. MethodNotAllowed indicates that the request method (POST or GET) is not allowed on the requested resource. The problem is that your endpoint URI is not the full or correct address of the WCF .svc. Check that your proxy endpoint or WCF client endpoint URI is correct. A: In my case the problem was that the app config was incorrectly formed/called: in config the service URL was using "localhost" as the domain name, but the real hostname differed from the URL I called :( so I changed the "localhost" in config to the domain name that I use in the URL. That's all!
{ "language": "en", "url": "https://stackoverflow.com/questions/151413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Explain the JVM Directory Layout on Mac OSX Leopard Here is the directory layout that was installed with Leopard. What is the "A" directory and why the "Current" directory in addition to the "CurrentJDK"? It seems like you can easily switch the current JDK by moving the CurrentJDK link, but then the contents under Current and A will be out of sync. lrwxr-xr-x 1 root wheel 5 Jun 14 15:49 1.3 -> 1.3.1 drwxr-xr-x 3 root wheel 102 Jan 14 2008 1.3.1 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.4 -> 1.4.2 lrwxr-xr-x 1 root wheel 3 Jun 14 15:49 1.4.1 -> 1.4 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.4.2 lrwxr-xr-x 1 root wheel 5 Feb 21 2008 1.5 -> 1.5.0 drwxr-xr-x 8 root wheel 272 Feb 21 2008 1.5.0 lrwxr-xr-x 1 root wheel 5 Jun 14 15:49 1.6 -> 1.6.0 drwxr-xr-x 8 root wheel 272 Jun 14 15:49 1.6.0 drwxr-xr-x 8 root wheel 272 Jun 14 15:49 A lrwxr-xr-x 1 root wheel 1 Jun 14 15:49 Current -> A lrwxr-xr-x 1 root wheel 3 Jun 14 15:49 CurrentJDK -> 1.5 steve-mbp /System/Library/Frameworks/JavaVM.framework/Versions $ and the contents of A -rw-r--r-- 1 root wheel 1925 Feb 29 2008 CodeResources drwxr-xr-x 34 root wheel 1156 Jun 14 15:49 Commands drwxr-xr-x 3 root wheel 102 Mar 6 2008 Frameworks drwxr-xr-x 16 root wheel 544 Jun 14 15:49 Headers -rwxr-xr-x 1 root wheel 236080 Feb 29 2008 JavaVM drwxr-xr-x 29 root wheel 986 Jun 14 15:49 Resources steve-mbp /System/Library/Frameworks/JavaVM.framework/Versions/A $ A: The (A, Current symbolic-linked to A) is part of the structure of a Mac OS X framework, which JavaVM.framework is. This framework may have C or Objective-C code in it, in addition to the actual JVM installations. Thus it could potentially be linked against from some C or Objective-C code in addition to containing the JVM alongside that. Note that you should not change the CurrentJDK link to point at anything but what it is set to by Mac OS X. 
Unlike on other platforms, the Java virtual machine is an operating system service on Mac OS X, and changing it in this way would put you in an unsupported (and potentially untested, unstable, etc.) configuration. A: You should use the Java Preferences command to change the JVM version. If you have Spotlight on your hard disk, you can just Spotlight "Java Preferences". A: If you want to revert to an older JVM (here, 1.5), you can put the following in your ~/.profile (or paste it into a specific Terminal window): export JAVA_HOME="/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home/" export PATH=$JAVA_HOME/bin/:$PATH
{ "language": "en", "url": "https://stackoverflow.com/questions/151414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Calling a C++ function pointer on a specific object instance I have a function pointer defined by: typedef void (*EventFunction)(int nEvent); Is there a way to handle that function with a specific instance of a C++ object? class A { private: EventFunction handler; public: void SetEvent(EventFunction func) { handler = func; } void EventOne() { handler(1); } }; class B { private: A a; public: B() { a.SetEvent(EventFromA); } // What do I do here? void EventFromA(int nEvent) { // do stuff } }; Edit: Orion pointed out the options that Boost offers such as: boost::function<int (int)> f; X x; f = std::bind1st( std::mem_fun(&X::foo), &x); f(5); // Call x.foo(5) Unfortunately Boost is not an option for me. Is there some sort of "currying" function that can be written in C++ that will do this kind of wrapping of a pointer to a member function into a normal function pointer? A: I highly recommend Don Clugston's excellent FastDelegate library. It provides all the things you'd expect of a real delegate and compiles down to a few ASM instructions in most cases. The accompanying article is a good read on member function pointers as well. http://www.codeproject.com/KB/cpp/FastDelegate.aspx A: You may find the C++ FAQ by Marshall Cline helpful for what you're trying to accomplish. A: Read about pointers to members. To call a method on the derived class, the method has to be declared in the base class as virtual and overridden in the derived class, and your pointer should point to the base class method. More about pointers to virtual members. A: If you're interfacing with a C library, then you can't use a class member function without using something like boost::bind. 
Most C libraries that take a callback function usually also allow you to pass an extra argument of your choosing (usually of type void*), which you can use to bootstrap your class, as so: class C { public: int Method1(void) { return 3; } int Method2(void) { return x; } int x; }; // This structure will hold a thunk to the instance and method struct CCallback { C *obj; // Instance to callback on int (C::*callback)(void); // Class callback method, taking no arguments and returning int }; int CBootstrapper(CCallback *pThunk) { // Call the thunk return ((pThunk->obj) ->* (pThunk->callback))( /* args go here */ ); } void DoIt(C *obj, int (C::*callback)(void)) { // foobar() is some C library function that takes a function which takes no arguments and returns int, and it also takes a void*, and we can't change it struct CCallback thunk = {obj, callback}; foobar(&CBootstrapper, &thunk); } int main(void) { C c; DoIt(&c, &C::Method1); // Essentially calls foobar() with a callback of C::Method1 on c DoIt(&c, &C::Method2); // Ditto for C::Method2 } A: You can use function pointers to index into the vtable of a given object instance. This is called a member function pointer. Your syntax would need to change to use the ".*" and the "&::" operators: class A; class B; typedef void (B::*EventFunction)(int nEvent); and then: class A { private: EventFunction handler; public: void SetEvent(EventFunction func) { handler = func; } void EventOne(B* delegate) { ((*delegate).*handler)(1); } // note: ".*" }; class B { private: A a; public: B() { a.SetEvent(&B::EventFromA); } // note: "&::" void EventFromA(int nEvent) { /* do stuff */ } }; A: Run away from raw C++ function pointers, and use std::function instead. You can use boost::function if you are using an old compiler such as Visual Studio 2008, which has no support for C++11. boost::function and std::function are the same thing - they pulled quite a bit of boost stuff into the std library for C++11. 
Note: you may want to read the boost function documentation instead of the Microsoft one as it's easier to understand A: Unfortunately, the EventFunction type cannot point to a function of B, because it is not the correct type. You could make it the correct type, but that probably isn't really the solution you want: typedef void (B::*EventFunction)(int nEvent); ... and then everything works once you call the callback with an object of B. But you probably want to be able to call functions outside of B, in other classes that do other things. That is sort of the point of a callback. But now this type points to something definitely in B. More attractive solutions are: * *Make B a base class, then override a virtual function for each other class that might be called. A then stores a pointer to B instead of a function pointer. Much cleaner. *If you don't want to bind the function to a specific class type, even a base class (and I wouldn't blame you), then I suggest you make the function that gets called a static function: "static void EventFromA(int nEvent);". Then you can call it directly, without an object of B. But you probably want it to call a specific instance of B (unless B is a singleton). *So if you want to be able to call a specific instance of B, but be able to call non-B's, too, then you need to pass something else to your callback function so that the callback function can call the right object. Make your function static, as above, and add a void* parameter which you will make a pointer to B. In practice you see two solutions to this problem: ad hoc systems where you pass a void* and the event, and hierarchies with virtual functions in a base class, like windowing systems. A: You mention that boost isn't an option for you, but do you have TR1 available to you? TR1 offers function, bind, and mem_fn objects based on the boost library, and you may already have it bundled with your compiler. 
It isn't standard yet, but at least two compilers that I've used recently have had it. http://en.wikipedia.org/wiki/Technical_Report_1 http://msdn.microsoft.com/en-us/library/bb982702.aspx A: It's somewhat unclear what you're trying to accomplish here. What is clear is that function pointers are not the way. Maybe what you're looking for is a pointer to member. A: I have a set of classes for this exact thing that I use in my C++ framework. http://code.google.com/p/kgui/source/browse/trunk/kgui.h How I handle it is each class function that can be used as a callback needs a static function that binds the object type to it. I have a set of macros that do it automatically. It makes a static function with the same name except with a "CB_" prefix and an extra first parameter which is the class object pointer. Check out the class types kGUICallBack and various template versions thereof for handling different parameter combinations. #define CALLBACKGLUE(classname , func) static void CB_ ## func(void *obj) {static_cast< classname *>(obj)->func();} #define CALLBACKGLUEPTR(classname , func, type) static void CB_ ## func(void *obj,type *name) {static_cast< classname *>(obj)->func(name);} #define CALLBACKGLUEPTRPTR(classname , func, type,type2) static void CB_ ## func(void *obj,type *name,type2 *name2) {static_cast< classname *>(obj)->func(name,name2);} #define CALLBACKGLUEPTRPTRPTR(classname , func, type,type2,type3) static void CB_ ## func(void *obj,type *name,type2 *name2,type3 *name3) {static_cast< classname *>(obj)->func(name,name2,name3);} #define CALLBACKGLUEVAL(classname , func, type) static void CB_ ## func(void *obj,type val) {static_cast< classname *>(obj)->func(val);}
{ "language": "en", "url": "https://stackoverflow.com/questions/151418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: returning null values from a web service call I have a web service API. Some calls return objects containing text fields with information provided by the user. From both a design and a security standpoint, what are the downsides to returning null in those fields when no information has been provided? Is there a clear advantage to always returning an empty string instead, other than simplifying the API by not requiring the client code to check for nulls? A: It all depends on whether you treat a null string value as semantically different from an empty string. If null and empty string both mean that there's no data for that field, then I see no reason not to make life simpler for the client by returning an empty string and sparing them the null check. A: Short answer: Don't use null in web-services. In general I advise sticking to empty strings over null when the meaning is zero characters. I would prefer to use null only as an "undefined". For example, with user input, if the user enters the field and types nothing, that would be an empty string. But if the user simply skips the field, that might be null. Up until I define a meaning for null, I favor returning an empty string and using String.IsNullOrEmpty on the processing side, because in lieu of any future knowledge, I should assume null and empty are the same. But web-services have a special twist, which is that there has been more than a fair share of mistakes in tools between the differences in <element/>, <element></element> and the element simply being missing. Enough confusion that unless I control the whole thing I don't trust the interoperability to be acceptable. So if I have a concept like null that I need to represent, I'll create a separate boolean element to indicate present/not present. A: I personally prefer returning nulls instead of empty strings. If the database has a null value, a web service returning that value should ideally return null instead of an empty string. 
Having said that, I have come across problems where web service clients like Adobe Flex cannot really work with nulls, and you might have to pass in an empty string if the client cannot be modified. I don't see any security issues either way. A: I don't think there's a security issue involved with returning null vs. returning an empty string. There's not any real downside to returning null for those fields for which there is no information - that's kind of what nulls are meant to indicate. You can simplify your client code by using string.IsNullOrEmpty() (assuming this is .NET) A: Null just means null. There's no inherent security issue there. As a web service is a public API, you should be rigorously checking all input data and expect badly-formed input to occur. You can never assume that the client code does the right thing and doesn't send you null either (maliciously or otherwise). So Empty strings or other sentinel values don't save you from checking input, and they don't necessarily make life easier either. Semantically empty strings aren't null either, they're empty. For what it's worth, the xml generated by the .net SoapFormatter for a null is plain old nothing, and that's not the same as an empty string. It's a trivial example, but informative. If you send null you're also sending less data, which may be something to consider. { [WebMethod] public MyClass HelloWorld() { MyClass val = new MyClass() { IsValid = false, HelloString = "Hello World", BlankString = "", Nested = new NestedClass { Name = "Bob" } }; return val; } } public class MyClass { public bool IsValid { get; set; } public string HelloString { get; set; } public string BlankString { get; set; } public string OtherString { get; set; } public NestedClass Nested { get; set; } public NestedClass NullNested { get; set; } } public class NestedClass { public string Name { get; set; } } yields the following xml response. 
Note how OtherString and NullNested are missing entirely from the response, which is different to BlankString. <MyClass> <IsValid>false</IsValid> <HelloString>Hello World</HelloString> <BlankString /> <Nested> <Name>Bob</Name> </Nested> </MyClass> A: Just for completeness, assuming you return a row or a set of rows of data as a JSON string, there is always the possibility to completely omit fields with null values. This results in less data transferred over the network and can typically be handled easily by any client (obviously, such a convention should be documented somewhere). See e.g. the following Jackson configuration (it also omits empty collections): private static ObjectMapper configureMapper(ObjectMapper mapper) { mapper.setDefaultPropertyInclusion(JsonInclude.Value.construct(JsonInclude.Include.NON_EMPTY, JsonInclude.Include.NON_NULL)); mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL); return mapper; }
{ "language": "en", "url": "https://stackoverflow.com/questions/151434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Web "frameworks" for Haxe to deploy in a PHP environment? Lately I've been taking a look at Haxe, to build an application to be deployed to Apache running PHP. Well, while it looks like it might suit my needs (deploying to PHP, but not using an awful language), I haven't found anything to make the actual application development easier than building a traditional non-MVC PHP app. Are there any toolkits/frameworks that I'm missing, that would be worthwhile? It'd be nice if it were MVC inspired, and I'd definitely want an easy way to use nice URLs, though I could settle for mod_rewrite rules if necessary. Edit: The idea is to not use something like CakePHP on the PHP end, but to instead use something like CakePHP on the Haxe end. A: There is a port of PureMVC for Haxe: https://github.com/PureMVC/puremvc-haxe-standard-framework/wiki As far as I know this is the only thing for Haxe, but there are discussions on the mailing list about creating their own framework, but this could take a while. A: I'm happy to say that haXigniter has been completely rewritten, to get away from the PHP-framework-style as mentioned by Marek. Now it adheres much more to better OO-principles and is also a standard haXe library, so upgrades are much simpler. Please check it out at http://github.com/ciscoheat/haxigniter. A: I see that someone is starting to develop an MVC framework for Haxe called "Hails", though I don't know if it is usable yet. hails: A minimal Rails-inspired MVC web-framework for Haxe / PHP http://code.google.com/p/hails/ A: Take a look at HaXigniter, a new kid on the block: http://github.com/ciscoheat/haxigniter A: I would recommend you roll your own. The problem with the frameworks above (excluding PureMVC) is that they were designed for a particular language. Haxigniter is a good copy, but it has an architecture that was kind of enforced by PHP4. It's a good exercise! 
It lets you understand the differences and work out the bottom-line mechanics - and this is very important, as your Haxe code will be translated (so you have double abstraction: 1. translation, 2. framework - it's good to know how to work things out ;]) A: I am working on a Haxe-based toolkit/framework for NekoVM/PHP. It is also built around a Zend Framework/Ruby-on-Rails-ish MVC workflow and has various classes for authentication, caching, form validation, session management etc. It can be compiled to both NekoVM and PHP and I have already used this library for a couple of websites. Unfortunately there is not much documentation available right now but I am working on that for the upcoming version 1 release of the library. The project is hosted at http://code.google.com/p/toolkat A: There is also ufront: * *http://lib.haxe.org/p/ufront *http://code.google.com/p/ufront/ It works with PHP and Neko. 
{ "language": "en", "url": "https://stackoverflow.com/questions/151438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Response.Write vs <%= %> Bearing in mind this is for classic ASP: which is better, all HTML contained within Response.Write statements, or inserting variables into HTML via <%= %>? E.g. Response.Write "<table>" & vbCrLf Response.Write "<tr>" & vbCrLf Response.Write "<td class=""someClass"">" & someVariable & "</td>" & vbCrLf Response.Write "</tr>" & vbCrLf Response.Write "</table>" & vbCrLf VS <table> <tr> <td class="someClass"><%= someVariable %></td> </tr> </table> I am mainly asking from a performance point of view: which will have the least server impact when there are multiple variables to insert? If there are no technical differences, what are the arguments for one over the other? A: Leaving aside issues of code readability/maintainability which others have addressed, your question was specifically about performance when you have multiple variables to insert - so I assume you're going to be repeating something like your code fragment multiple times. In that case, you should get the best performance by using a single Response.Write without concatenating all of the newlines: Response.Write "<table><tr><td class=""someClass"">" & someVar & "</td></tr></table>" The browser doesn't need the newlines or the tabs or any other pretty formatting to parse the HTML. If you're going for performance, you can remove all of these. You'll also make your HTML output smaller, which will give you faster page load times. In the case of a single variable, it doesn't really make a lot of difference. But for multiple variables, you want to minimise the context switching between HTML and ASP - you'll take a hit for every jump from one to the other. 
To help with readability when building a longer statement, you can use the VBScript line continuation character and tabs in your source code (but not the output) to represent the structure without hitting your performance: Response.Write "<table>" _ & "<tr>" _ & "<td class=""someClass"">" & someVar & "</td>" _ & "</tr>" _ & "<tr>" _ & "<td class=""anotherClass"">" & anotherVar & "</td>" _ & "</tr>" _ & "<tr>" _ & "<td class=""etc"">" & andSoOn & "</td>" _ & "</tr>" _ & "</table>" It's not as legible as the HTML version but if you're dropping a lot of variables into the output (i.e. a lot of context switching between HTML and ASP), you'll see better performance. Whether the performance gains are worth it or whether you would be better off scaling your hardware is a separate question - and, of course, it's not always an option. Update: see tips 14 and 15 in this MSDN article by Len Cardinal for information on improving performance with Response.Buffer and avoiding context switching: http://msdn.microsoft.com/en-us/library/ms972335.aspx#asptips_topic15. A: First, the most important factor you should be looking at is ease of maintenance. You could buy a server farm with the money and time you would otherwise waste by having to decipher a messy web site to maintain it. In any case, it doesn't matter. At the end of the day, all ASP does is just execute a script! The ASP parser takes the page, and transforms <%= expression %> into direct script calls, and every contiguous block of HTML becomes one giant call to Response.Write. The resulting script is cached and reused unless the page changes on disk, which causes the cached script to be recalculated. Now, too much use of <%= %> leads to the modern version of "spaghetti code": the dreaded "Tag soup". You won't be able to make heads or tails of the logic. On the other hand, too much use of Response.Write means you will never be able to see the page at all until it renders.
Use <%= %> when appropriate to get the best of both worlds. My first rule is to pay attention to the proportion of "variable text" to "static text". If you have just a few places with variable text to replace, the <%= %> syntax is very compact and readable. However, as the <%= %> start to accumulate, they obscure more and more of the HTML and at the same time the HTML obscures more and more of your logic. As a general rule, once you start talking about loops, you need to stop and switch to Response.Write. There aren't many other hard and fast rules. You need to decide for your particular page (or section of the page) which one is more important, or naturally harder to understand, or easier to break: your logic or your HTML? It's usually one or the other (I've seen hundreds of cases of both). If your logic is more critical, you should weigh more towards Response.Write; it will make the logic stand out. If your HTML is more critical, favor <%= %>, which will make the page structure more visible. Sometimes I've had to write both versions and compare them side-by-side to decide which one is more readable; it's a last resort, but do it while the code is fresh in your mind and you will be glad three months later when you have to make changes. A: I prefer the <%= %> method in most situations for several reasons. * *The HTML is exposed to the IDE so that it can be processed giving you tooltips, tag closing, etc. *Maintaining indentation in the output HTML is easier which can be very helpful with reworking layout. *New lines without appending vbCrLf on everything and again for reviewing output source. A: The response format will render HTML like so: <table> <tr> <td class="someClass">variable value</td> </tr> </table> As a result, not only will the means to produce your code be unreadable, but the result will also be tabbed inappropriately. Stick with the other option. A: I prefer <%= %> solely because it makes Javascript development easier.
You can write code that references your controls like this: var myControl = document.getElementById('<%= myControl.ClientID %>'); I can then use that control any way I'd like in my JavaScript code without having to worry about the hard-coded IDs. Response.Write can break some ASP.NET AJAX code on some occasions so I try to avoid it unless using it for rendering specific things in custom controls. A: I try to use the MVC paradigm when doing ASP/PHP. It makes things easier to maintain, re-architect, expand upon. In that regard, I tend to have a page that represents the model. It's mostly VB/PHP and sets vars for later use in the view. It also generates hunks of HTML when looping for later inclusion in the view. Then I have a page that represents the view. That's mostly HTML peppered with <%= %> tags. The model is #include-d in the view and away you go. Controller logic is typically done in JavaScript in a third page or server-side. A: My classic ASP is rusty, but: Response.Write "<table>" & vbCrLf Response.Write "<tr>" & vbCrLf Response.Write "<td class=""someClass"">" & someVariable & "</td>" & vbCrLf Response.Write "</tr>" & vbCrLf Response.Write "</table>" & vbCrLf this is run as-is. This, however: <table> <tr> <td class="someClass"><%= someVariable %></td> </tr> </table> results in: Response.Write "<table>\r\n<tr>\r\n<td class="someClass">" Response.Write someVariable Response.Write "</td>\r\n</tr>\r\n</table>" Where \r\n is a vbCrLf So technically, the second one is quicker. HOWEVER, the difference would be measured in single milliseconds, so I wouldn't worry about it. I'd be more concerned that the top one is pretty much unmaintainable (esp by a HTML-UI developer), whereas the second one is trivial to maintain. props to @Euro Micelli - maintenance is the key (which is also why languages like Ruby, Python, and in the past (tho still....)
C# and Java kicked butt over C, C++ and Assembly - humans could maintain the code, which is way more important than shaving a few ms off a page load. Of course, C/C++ etc have their place.... but this isn't it. :) A: From a personal preference point of view I prefer the <%= %> method as I feel it provides a better separation of variable content from static content. A: Many of the answers here indicate that the two approaches produce the same output and that the choice is one of coding style and performance. It seems it is believed that static content outside of <% %> becomes a single Response.Write. However it would be more accurate to say the code outside <% %> gets sent with BinaryWrite. Response.Write takes a Unicode string and encodes it to the current Response.CodePage before placing it in the buffer. No such encoding takes place for the static content in an ASP file. The characters outside <% %> are dumped verbatim byte for byte into the buffer. Hence where the Response.CodePage is different from the CodePage that was used to save the ASP file, the results of the two approaches may differ. For example, let's say I have this content saved in a standard 1252 code page:- <% Response.CodePage = 65001 Response.CharSet = "UTF-8" %> <p> The British £</p> <%Response.Write("<p> The British £</p>")%> The first paragraph is garbled, since the £ will not be sent using UTF-8 encoding; the second is fine because the Unicode string supplied is encoded to UTF-8. Hence from a performance point of view using static content is preferable since it doesn't need encoding, but care is needed if the saved code page differs from the output code page. For this reason I prefer to save as UTF-8, include <%@ codepage=65001 %> and set Response.Charset = "UTF-8".
A: You should frame this question in terms of code reuse and code maintainability (aka readability). There is really no performance gain either way. A: <%=Bazify()%> is useful when you are generating HTML from a short expression inline with some HTML. Response.Write "foo" is better when you need to do some HTML inline with a lot of code. Technically, they are the same. Speaking about your example, though, the Response.Write code does a lot of string concatenation with &, which is very slow in VBScript. Also, like Russell Myers said, it's not tabbed the same way as your other code, which might be unintentional. A: I agree with Jason, more so now that .NET is offering an MVC framework as an alternative to that awful Web Form/Postback/ViewState .NET started out with. Maybe it's because I was old-school classic ASP/HTML/JavaScript and not a VB desktop application programmer that caused me to just not grok "The Way of the Web Form", but I'm so pleased we seem to be going full circle and back to a methodology like Jason refers to. With that in mind, I'd always choose an included page containing your model/logic and <%= %> tokens within your, effectively, template HTML "view." Your HTML will be more "readable" and your logic separated as much as classic ASP allows; you're killing a couple of birds with that stone. A: If you are required to write and maintain a Classic ASP application you should check out the free KudzuASP template engine. It is capable of 100% code and HTML separation and allows for conditional content, variable substitution, and template-level control of the original asp page. If-Then-Else, Try-Catch-Finally, Switch-Case, and other structural tags are available as well as custom tags based in the asp page or in dynamically loadable libraries (asp code files). Structural tags can be embedded within other structural tags to any desired level.
Custom tags and tag libraries are easy to write and can be included at the asp page's code level or by using an include tag at the template level. Master pages can be written by invoking a template engine at the master page level and leveraging a second child template engine for any internal content. In KudzuASP your asp page contains only code, creates the template engine, sets up initial conditions and invokes the template. The template contains HTML layout. Once the asp page invokes the template it then becomes an event driven resource fully driven by the evaluation of the template and the structure that it contains.
{ "language": "en", "url": "https://stackoverflow.com/questions/151448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What is the difference between String.Empty and "" (empty string)? In .NET, what is the difference between String.Empty and "", and are they interchangeable, or are there some underlying reference or localization issues around equality that String.Empty will ensure are not a problem? A: All instances of "" are the same, interned string literal (or they should be). So you really won't be throwing a new object on the heap every time you use "" but just creating a reference to the same, interned object. Having said that, I prefer string.Empty. I think it makes code more readable. A: String.Empty does not create an object whereas "" does. The difference, as pointed out here, is trivial, however. A: Use String.Empty rather than "". This is more for speed than memory usage but it is a useful tip. The "" is a literal so will act as a literal: on the first use it is created and for the following uses its reference is returned. Only one instance of "" will be stored in memory no matter how many times we use it! I don't see any memory penalties here. The problem is that each time the "" is used, a comparing loop is executed to check if the "" is already in the intern pool. On the other side, String.Empty is a reference to a "" stored in the .NET Framework memory zone. String.Empty is pointing to the same memory address for VB.NET and C# applications. So why search for a reference each time you need "" when you have that reference in String.Empty? Reference: String.Empty vs "" A: It just doesn't matter! Some past discussion of this: http://www.codinghorror.com/blog/archives/000185.html http://blogs.msdn.com/brada/archive/2003/04/22/49997.aspx http://blogs.msdn.com/brada/archive/2003/04/27/50014.aspx A: The previous answers were correct for .NET 1.1 (look at the date of the post they linked: 2003). As of .NET 2.0 and later, there is essentially no difference. The JIT will end up referencing the same object on the heap anyhow.
According to the C# specification, section 2.4.4.5: http://msdn.microsoft.com/en-us/library/aa691090(VS.71).aspx Each string literal does not necessarily result in a new string instance. When two or more string literals that are equivalent according to the string equality operator (Section 7.9.7) appear in the same assembly, these string literals refer to the same string instance. Someone even mentions this in the comments of Brad Abram's post In summary, the practical result of "" vs. String.Empty is nil. The JIT will figure it out in the end. I have found, personally, that the JIT is way smarter than me and so I try not to get too clever with micro-compiler optimizations like that. The JIT will unfold for() loops, remove redundant code, inline methods, etc better and at more appropriate times than either I or the C# compiler could ever anticipate before hand. Let the JIT do its job :) A: String.Empty is a readonly field while "" is a const. This means you can't use String.Empty in a switch statement because it is not a constant. A: string mystring = ""; ldstr "" ldstr pushes a new object reference to a string literal stored in the metadata. string mystring = String.Empty; ldsfld string [mscorlib]System.String::Empty ldsfld pushes the value of a static field onto the evaluation stack I tend to use String.Empty instead of "" because IMHO it's clearer and less VB-ish. A: In .NET prior to version 2.0, "" creates an object while string.Empty creates no objectref, which makes string.Empty more efficient. In version 2.0 and later of .NET, all occurrences of "" refer to the same string literal, which means "" is equivalent to .Empty, but still not as fast as .Length == 0. .Length == 0 is the fastest option, but .Empty makes for slightly cleaner code. See the .NET specification for more information. A: Since String.Empty is not a compile-time constant you cannot use it as a default value in function definition. 
public void test(int i=0,string s="") { // Function Body } A: Eric Lippert wrote (June 17, 2013):"The first algorithm I ever worked on in the C# compiler was the optimizer that handles string concatenations. Unfortunately I did not manage to port these optimizations to the Roslyn codebase before I left; hopefully someone will get to that!" Here are some Roslyn x64 results as of January 2019. Despite the consensus remarks of the other answers on this page, it does not appear to me that the current x64 JIT is treating all of these cases identically, when all is said and done. Note in particular, however, that only one of these examples actually ends up calling String.Concat, and I'm guessing that that's for obscure correctness reasons (as opposed to an optimization oversight). The other differences seem harder to explain. default(String) Β  + Β  { default(String), Β  "", Β  String.Empty } static String s00() => default(String) + default(String); mov rax,[String::Empty] mov rax,qword ptr [rax] add rsp,28h ret static String s01() => default(String) + ""; mov rax,[String::Empty] mov rax,qword ptr [rax] add rsp,28h ret static String s02() => default(String) + String.Empty; mov rax,[String::Empty] mov rax,qword ptr [rax] mov rdx,rax test rdx,rdx jne _L mov rdx,rax _L: mov rax,rdx add rsp,28h ret "" Β  + Β  { default(String), Β  "", Β  String.Empty } static String s03() => "" + default(String); mov rax,[String::Empty] mov rax,qword ptr [rax] add rsp,28h ret static String s04() => "" + ""; mov rax,[String::Empty] mov rax,qword ptr [rax] add rsp,28h ret static String s05() => "" + String.Empty; mov rax,[String::Empty] mov rax,qword ptr [rax] mov rdx,rax test rdx,rdx jne _L mov rdx,rax _L: mov rax,rdx add rsp,28h ret String.Empty Β  + Β  { default(String), Β  "", Β  String.Empty } static String s06() => String.Empty + default(String); mov rax,[String::Empty] mov rax,qword ptr [rax] mov rdx,rax test rdx,rdx jne _L mov rdx,rax _L: mov rax,rdx add rsp,28h ret static String s07() 
=> String.Empty + ""; mov rax,[String::Empty] mov rax,qword ptr [rax] mov rdx,rax test rdx,rdx jne _L mov rdx,rax _L: mov rax,rdx add rsp,28h ret static String s08() => String.Empty + String.Empty; mov rcx,[String::Empty] mov rcx,qword ptr [rcx] mov qword ptr [rsp+20h],rcx mov rcx,qword ptr [rsp+20h] mov rdx,qword ptr [rsp+20h] call F330CF60 ; <-- String.Concat nop add rsp,28h ret Test details Microsoft (R) Visual C# Compiler version 2.10.0.0 (b9fb1610) AMD64 Release [MethodImpl(MethodImplOptions.NoInlining)] 'SuppressJitOptimization' = false A: When you're visually scanning through code, "" appears colorized the way strings are colorized. string.Empty looks like a regular class-member-access. During a quick look, its easier to spot "" or intuit the meaning. Spot the strings (stack overflow colorization isn't exactly helping, but in VS this is more obvious): var i = 30; var f = Math.Pi; var s = ""; var d = 22.2m; var t = "I am some text"; var e = string.Empty; A: I tend to use String.Empty rather than "" for one simple, yet not obvious reason: "ο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώο»Ώ" and "" are NOT the same, the first one actually has 16 zero width characters in it. Obviously no competent developer is going to put and zero width characters into their code, but if they do get in there, it can be a maintenance nightmare. Notes: * *I used U+FEFF in this example. *Not sure if SO is going to eat those characters, but try it yourself with one of the many zero-width characters *I only came upon this thanks to https://codegolf.stackexchange.com/ A: what is the difference between String.Empty and "", and are they interchangable string.Empty is a read-only field whereas "" is a compile time constant. Places where they behave differently are: Default Parameter value in C# 4.0 or higher void SomeMethod(int ID, string value = string.Empty) // Error: Default parameter value for 'value' must be a compile-time constant { //... 
implementation } Case expression in switch statement string str = ""; switch(str) { case string.Empty: // Error: A constant value is expected. break; case "": break; } Attribute arguments [Example(String.Empty)] // Error: An attribute argument must be a constant expression, typeof expression // or array creation expression of an attribute parameter type A: Coming at this from an Entity Framework point of view: EF version 6.1.3 appears to treat String.Empty and "" differently when validating. string.Empty is treated as a null value for validation purposes and will throw a validation error if it's used on a Required (attributed) field; whereas "" will pass validation and not throw the error. This problem may be resolved in EF 7+. Reference: - https://github.com/aspnet/EntityFramework/issues/2610 Edit: [Required(AllowEmptyStrings = true)] will resolve this issue, allowing string.Empty to validate. A: Another difference is that String.Empty generates larger CIL code. While the code for referencing "" and String.Empty is the same length, the compiler doesn't optimize string concatenation (see Eric Lippert's blog post) for String.Empty arguments. The following equivalent functions string foo() { return "foo" + ""; } string bar() { return "bar" + string.Empty; } generate this IL .method private hidebysig instance string foo() cil managed { .maxstack 8 L_0000: ldstr "foo" L_0005: ret } .method private hidebysig instance string bar() cil managed { .maxstack 8 L_0000: ldstr "bar" L_0005: ldsfld string [mscorlib]System.String::Empty L_000a: call string [mscorlib]System.String::Concat(string, string) L_000f: ret } A: The above answers are technically correct, but what you may really want to use, for best code readability and least chance of an exception, is String.IsNullOrEmpty(s) A: Everybody here gave some good theoretical clarification. I had a similar doubt, so I tried some basic code on it. And I found a difference. Here's the difference.
string str=null; Console.WriteLine(str.Length); // Exception (NullReferenceException) for pointing to a null reference. string str = string.Empty; Console.WriteLine(str.Length); // 0 So it seems "Null" means absolutely void & "String.Empty" means it contains some kind of value, but it is empty. A: Thanks for a very informative answer. Forgive my ignorance if I'm wrong. I'm using VB but I think if you test the length of an unassigned string (i.e. IS Nothing), it returns an error. Now, I started programming in 1969, so I've been left well behind, however I have always tested strings by concatenating an empty string (""). E.g. (in whatever language): - if string + "" = ""
{ "language": "en", "url": "https://stackoverflow.com/questions/151472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "339" }
Q: Expressing Excel formula in Java (decimal to time interpretation) I am converting an Excel sheet formula to Java but I can't understand how Excel manages to take the following: 0.22 Applies a formula: =TEXT(R5/14, "h:mm") and somehow arrives at: 0.22 Again if I provide: 2.8 it arrives at 4.48 Can someone please explain to me how it does this? I have read a little regarding decimal and I understand the conversion but this hasn't yet helped to explain the above. A: Excel stores datetime values as: * *The number to the left of the decimal represents the number of days since January 1, 1900 *The number to the right of the decimal represents the fractional portion of a 24-hour day In your example, you are converting a decimal to a textual representation of the hour and minute portions of the datetime value. Working through your first formula, 0.22 divided by 14 (why are you doing this?) equals 0.015714286. If you then apply this fraction against a 24-hour day (multiply by 1440 minutes), it equals 22 minutes and some change (i.e. "0:22"). Working through your second formula, 2.8 divided by 14 equals 0.2. Multiplied by 1440, it equals 288 minutes, which is 4 hours and 48 minutes (i.e. "4:48"). A: Or the Abacus Formula Compiler for Java, which allows you to compile the formulas in Excel sheets right down to Java byte code for fast and easy calling from your Java apps (can compile at run-time without the JDK). http://www.formulacompiler.org/ A: Yeah it is a bit goofy. Take the /14 out and that helps. Basically 1=1 day so R5 is expressed in 14ths of a day. You could probably do int msInADay = 86400000; Time value = new Time(R5/14 * msInADay); but it is untested. A: Thanks Creedence. It works with: double r = 0.22; double driveTime = (r / 14) * 1440; A: A very good documentation of spreadsheet internals can be found in the OpenFormula specification (which documents/defines OpenOffice-Calc).
http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula If you seek a Java library to evaluate Excel formulas, either Apache POI or Pentaho's LibFormula may be helpful: POI: http://poi.apache.org/ LibFormula: http://sourceforge.net/project/showfiles.php?group_id=51669&package_id=213669
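The arithmetic from the accepted answer can be sketched in plain Java. This is a hypothetical helper (the class and method names are my own, not from the thread): it mirrors =TEXT(value/14, "h:mm") by mapping the day fraction to seconds, then truncating to whole minutes the way an h:mm display drops the seconds.

```java
// Hypothetical sketch of Excel's =TEXT(r/14, "h:mm") in Java.
// A fraction of a day times 86400 gives seconds; h:mm display drops the seconds.
public class ExcelTimeSketch {
    static String toHmm(double r) {
        long totalSeconds = Math.round((r / 14.0) * 86400); // day fraction -> seconds
        long totalMinutes = totalSeconds / 60;              // truncate seconds, as h:mm does
        return totalMinutes / 60 + ":" + String.format("%02d", totalMinutes % 60);
    }

    public static void main(String[] args) {
        System.out.println(toHmm(0.22)); // 0:22
        System.out.println(toHmm(2.8));  // 4:48
    }
}
```

Rounding to whole seconds before the integer division avoids floating-point edge cases (2.8/14 * 1440 lands just below 288.0 in double arithmetic).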
{ "language": "en", "url": "https://stackoverflow.com/questions/151496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Difference between a class and a module I came from Java, and now I am working more with Ruby. One language feature I am not familiar with is the module. I am wondering what exactly a module is, when you would use one, and why you would use a module over a class? A: I'm surprised nobody has said this yet. Since the asker came from a Java background (and so did I), here's an analogy that helps. Classes are simply like Java classes. Modules are like Java static classes. Think about the Math class in Java. You don't instantiate it, and you reuse the methods in the static class (e.g. Math.random()). A: Module in Ruby, to a degree, corresponds to a Java abstract class -- it has instance methods, classes can inherit from it (via include; Ruby guys call it a "mixin"), but it has no instances. There are other minor differences, but this much information is enough to get you started. A:
╔═══════════════╦═══════════════════════════╦═════════════════════════════════╗
β•‘               β•‘ class                     β•‘ module                          β•‘
╠═══════════════╬═══════════════════════════╬═════════════════════════════════╣
β•‘ instantiation β•‘ can be instantiated       β•‘ can *not* be instantiated       β•‘
β•Ÿβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•’
β•‘ usage         β•‘ object creation           β•‘ mixin facility. provide         β•‘
β•‘               β•‘                           β•‘ a namespace.                    β•‘
β•Ÿβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•’
β•‘ superclass    β•‘ module                    β•‘ object                          β•‘
β•Ÿβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•’
β•‘ methods       β•‘ class methods and         β•‘ module methods and              β•‘
β•‘               β•‘ instance methods          β•‘ instance methods                β•‘
β•Ÿβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•’
β•‘ inheritance   β•‘ inherits behaviour and canβ•‘ No inheritance                  β•‘
β•‘               β•‘ be base for inheritance   β•‘                                 β•‘
β•Ÿβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•’
β•‘ inclusion     β•‘ cannot be included        β•‘ can be included in classes and  β•‘
β•‘               β•‘                           β•‘ modules by using the include    β•‘
β•‘               β•‘                           β•‘ command (includes all           β•‘
β•‘               β•‘                           β•‘ instance methods as instance    β•‘
β•‘               β•‘                           β•‘ methods in a class/module)      β•‘
β•Ÿβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•«β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β•’
β•‘ extension     β•‘ can not extend with       β•‘ module can extend instance by   β•‘
β•‘               β•‘ extend command            β•‘ using extend command (extends   β•‘
β•‘               β•‘ (only with inheritance)   β•‘ given instance with singleton   β•‘
β•‘               β•‘                           β•‘ methods from module)            β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
A: Bottom line: A module is a cross between a static/utility class and a mixin. Mixins are reusable pieces of "partial" implementation that can be combined (or composed) in a mix & match fashion, to help write new classes. These classes can additionally have their own state and/or code, of course. A: The first answer is good and gives some structural answers, but another approach is to think about what you're doing. Modules are about providing methods that you can use across multiple classes - think about them as "libraries" (as you would see in a Rails app). Classes are about objects; modules are about functions. For example, authentication and authorization systems are good examples of modules. Authentication systems work across multiple app-level classes (users are authenticated, sessions manage authentication, lots of other classes will act differently based on the auth state), so authentication systems act as shared APIs. You might also use a module when you have shared methods across multiple apps (again, the library model is good here). A: Basically, a module cannot be instantiated. When a class includes a module, a proxy superclass is generated that provides access to all the module methods as well as the class methods. A module can be included by multiple classes. Modules cannot be inherited, but this "mixin" model provides a useful type of "multiple inheritance". OO purists will disagree with that statement, but don't let purity get in the way of getting the job done. (This answer originally linked to http://www.rubycentral.com/pickaxe/classes.html, but that link and its domain are no longer active.) A: Class When you define a class, you define a blueprint for a data type.
Classes hold data, have methods that interact with that data, and are used to instantiate objects. Module * *Modules are a way of grouping together methods, classes, and constants. *Modules give you two major benefits: => Modules provide a namespace and prevent name clashes. Namespaces help avoid conflicts with functions and classes with the same name that have been written by someone else. => Modules implement the mixin facility. (Including Module in Klazz gives instances of Klazz access to Module's methods.) (Extending Klazz with Mod gives the class Klazz access to Mod's methods.) A: First, some similarities that have not been mentioned yet. Ruby supports open classes, but modules are open too. After all, Class inherits from Module in the Class inheritance chain and so Class and Module do have some similar behavior. But you need to ask yourself what is the purpose of having both a Class and a Module in a programming language? A class is intended to be a blueprint for creating instances, and each instance is a realized variation of the blueprint (the Class). Naturally, then, classes function as object creation. Furthermore, since we sometimes want one blueprint to derive from another blueprint, classes are designed to support inheritance. Modules cannot be instantiated, do not create objects, and do not support inheritance. So remember one module does NOT inherit from another! So then what is the point of having Modules in a language? One obvious usage of Modules is to create a namespace, and you will notice this with other languages too. Again, what's cool about Ruby is that Modules can be reopened (just as Classes can).
And this is a big usage when you want to reuse a namespace in different Ruby files: module Apple def a puts 'a' end end module Apple def b puts 'b' end end class Fruit include Apple end > f = Fruit.new => #<Fruit:0x007fe90c527c98> > f.a => a > f.b => b But there is no inheritance between modules: module Apple module Green def green puts 'green' end end end class Fruit include Apple end > f = Fruit.new => #<Fruit:0x007fe90c462420> > f.green NoMethodError: undefined method `green' for #<Fruit:0x007fe90c462420> The Apple module did not inherit any methods from the Green module, and when we included Apple in the Fruit class, the methods of the Apple module were added to the ancestor chain of Fruit instances, but not the methods of the Green module, even though the Green module was defined in the Apple module. So how do we gain access to the green method? You have to explicitly include it in your class: class Fruit include Apple::Green end => Fruit > f.green => green But Ruby has another important usage for Modules. This is the Mixin facility, which I describe in another answer on SO. But to summarize, mixins allow you to define methods into the inheritance chain of objects. Through mixins, you can add methods to the inheritance chain of object instances (include) or the singleton_class of self (extend). A: namespace: modules are namespaces...which don't exist in java ;) I also switched from Java and Python to Ruby, and I remember having exactly this same question... So the simplest answer is that a module is a namespace, which doesn't exist in Java. In Java the closest mindset to a namespace is a package. So a module in ruby is like what in java: class? No. interface? No. abstract class? No. package? Yes (maybe). static methods inside classes in java: same as methods inside modules in ruby In Java the minimum unit is a class; you can't have a function outside of a class. However in Ruby this is possible (like Python). So what goes into a module? classes, methods, constants.
The module protects them under that namespace.

No instances: modules can't be used to create instances.

Mixins: sometimes inheritance models are not a good fit for classes, but in terms of functionality you want to group a set of classes/methods/constants together.

Rules about modules in Ruby:
- Module names are UpperCamelCase
- Constants within modules are ALL CAPS (this rule is the same for all Ruby constants, not specific to modules)
- Access methods: use the . operator
- Access constants: use the :: symbol

A simple example of a module:

module MySampleModule
  CONST1 = "some constant"

  def self.method_one(arg1)
    arg1 + 2
  end
end

How to use a method inside a module:

puts MySampleModule.method_one(1) # prints: 3

How to use constants of a module:

puts MySampleModule::CONST1 # prints: some constant

Some other conventions about modules:
Use one module per file (like Ruby classes, one class per Ruby file).
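To make the include/extend distinction described above concrete, here is a minimal sketch. The module and class names (Loud, Speaker, Megaphone) are made up for illustration:

```ruby
# A hypothetical mixin module used to show include vs extend.
module Loud
  def shout(msg)
    msg.upcase + "!"
  end
end

class Speaker
  include Loud    # instances of Speaker get #shout
end

class Megaphone
  extend Loud     # the Megaphone class itself gets .shout
end

puts Speaker.new.shout("hi")   # => HI!
puts Megaphone.shout("hi")     # => HI!
# Speaker.shout("hi")          # NoMethodError: include adds instance methods only
```

Note that include places the module in the ancestor chain of instances, while extend adds the methods to the singleton class of the receiver, which is why Speaker itself does not respond to shout.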
{ "language": "en", "url": "https://stackoverflow.com/questions/151505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "458" }
Q: Should I provide a deep clone when implementing ICloneable? It is unclear to me from the MSDN documentation if I should provide a deep or a shallow clone when implementing ICloneable. What is the preferred option?

A: Short answer: Yes.

Long answer: Don't use ICloneable. That is because .Clone isn't defined as being a shallow or a deep clone. You should implement your own IClone interface, and describe how the clone should work.

A: Clones are deep by default; that's the naming convention, and copy constructors can be shallow if they want, for performance reasons.

Edit: This naming convention goes beyond language boundaries; it's the same for .NET, Java, C++, JavaScript, etc. The actual source is beyond my knowledge, but it's part of the standard object-oriented lexicon, just like objects and classes. Thus MSDN doesn't specify the implementation because it's a given by the word itself (of course lots of newcomers to OO languages don't know this, and they SHOULD specify it, but then again their documentation is quite frugal anyway).

A: Given the way an object is defined, there shouldn't be any question about "deep cloning" versus "shallow cloning". If an object encapsulates the identities of things, a clone of the object should encapsulate the identities of the same things. If an object encapsulates the values of mutable objects, a copy should encapsulate detached mutable objects holding the same values.

Unfortunately, neither .NET nor Java includes in the type system whether references are held to encapsulate identity, mutable value, both, or neither. Instead, they just use a single reference type and figure that code which owns the only copy of a reference, or owns the only reference to a container which holds the only copy of that reference, may use that reference to encapsulate either value or state. Such thinking might be tolerable for individual objects, but poses real problems when it comes to things like copying and equality-testing operations.
If a class has a field Foo which encapsulates the state of a List<Bar> which is to encapsulate the identities of the objects therein, and may in future encapsulate different objects' identities, then a clone of the Foo should hold a reference to a new list which identifies the same objects. If the List<Bar> is used to encapsulate the mutable states of the objects, then a clone should have a reference to a new list which identifies new objects that have the same state.

If objects included separate "equivalent" and "equals" methods, with hashcodes for each, and if for each heap object type there were reference types that were denoted as encapsulating identity, mutable state, both, or neither, then 99% of equality-testing and cloning methods could be handled automatically. Two aggregates are equal if all components which encapsulate identity or mutable state are equivalent (not merely equal) and those that encapsulate neither are at least equal; two aggregates are equivalent only if all corresponding components are and always will be equivalent [this often implies reference equality, but not always].

Copying an aggregation requires making a detached copy of each constituent that encapsulates mutable state, copying the reference to each constituent that encapsulates identity, and doing either of the above for those which encapsulate neither; an aggregation with a constituent which encapsulates both mutable state and identity cannot be cloned simply.

There are a few tricky cases that such rules for cloning, equality, and equivalence wouldn't handle properly, but if there were a convention to distinguish a List<IdentityOfFoo> from a List<MutableStateOfFoo>, and to support both "equivalent" and "equals" tests, 99% of objects could have Clone, Equals, Equivalent, EqualityHash, and EquivalenceHash auto-generated and work correctly.
{ "language": "en", "url": "https://stackoverflow.com/questions/151520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Linq to XML for KML? I'm a LINQ to XML newbie, and a KML newbie as well; so bear with me. My goal is to extract individual Placemarks from a KML file. My KML begins thusly:

<?xml version="1.0" encoding="utf-8"?>
<Document xmlns="http://earth.google.com/kml/2.0">
  <name>Concessions</name>
  <visibility>1</visibility>
  <Folder>
    <visibility>1</visibility>
    <Placemark>
      <name>IN920211</name>
      <Style>
        <PolyStyle>
          <color>80000000</color>
        </PolyStyle>
      </Style>
      <Polygon>
        <altitudeMode>relativeToGround</altitudeMode>
        <outerBoundaryIs>
          <LinearRing>
            <coordinates>11.728374,1.976421,0 11.732967,1.965322,0 11.737225,1.953161,0 11.635858,1.940812,0 11.658102,1.976874,0 11.728374,1.976421,0 </coordinates>
          </LinearRing>
        </outerBoundaryIs>
      </Polygon>
    </Placemark>
    <Placemark>
    ...

This is as far as I've gotten:

Dim Kml As XDocument = XDocument.Load(Server.MapPath("../kmlimport/ga.kml"))
Dim Placemarks = From Placemark In Kml.Descendants("Placemark") _
                 Select Name = Placemark.Element("Name").Value

So far no good - Kml.Descendants("Placemark") gives me an empty enumeration. The document is loaded properly - because KML.Descendants contains every node. For what it's worth, these queries come up empty as well:

Dim foo = Kml.Descendants("Document")
Dim foo = Kml.Descendants("Folder")

Can someone point me in the right direction? Bonus points for links to good LINQ to XML tutorials - the ones I've found online stop at very simple scenarios.

A: This works for me in C#:

XDocument doc = XDocument.Load(@"TheFile.kml");
var q = doc.Descendants().Where(x => x.Name.LocalName == "Placemark");

A: Thanks to spoon16 and Bruce Murdock for pointing me in the right direction. The code that spoon16 posted works, but forces you to concatenate the namespace with every single element name, which isn't as clean as I'd like. I've done a bit more searching and I've figured out how this is supposed to be done - this is super concise, and I love the new <...> bracket syntax for referring to XML elements.
Imports <xmlns:g='http://earth.google.com/kml/2.0'>
Imports System.Xml.Linq
...
Dim Kml As XDocument = XDocument.Load(Server.MapPath("../kmlimport/ga.kml"))
For Each Placemark As XElement In Kml.<g:Document>.<g:Folder>.<g:Placemark>
    Dim Name As String = Placemark.<g:name>.Value
Next

Note the :g following the xmlns in the first line. This gives you a shortcut to refer to this namespace elsewhere. For more about the XNamespace class, see the MSDN documentation.

A: Scott Hanselman has a concise solution for those looking for a C#-based solution: XLINQ to XML support in VB9.

Also, using XNamespace comes in handy, rather than just appending a string. This is a bit more formal.

// This code should get all Placemarks from a KML file
var xdoc = XDocument.Parse(kmlContent);
XNamespace ns = XNamespace.Get("http://earth.google.com/kml/2.0");
var ele = xdoc.Element(ns + "kml").Element(ns + "Document").Elements(ns + "Placemark");

A: You may need to add a namespace to the XElement name:

Dim ns as string = "http://earth.google.com/kml/2.0"
dim foo = Kml.Descendants(ns + "Document")

Ignore any syntax errors; I work in C#. You'll find there can be a difference in XElement.Name vs XElement.Name.LocalName. I usually foreach through all the XElements in the doc as a first step, to make sure I'm using the right naming.

C# - here is an excerpt of my usage (looks like I forgot the {} above):

private string GpNamespace = "{http://schemas.microsoft.com/GroupPolicy/2006/07/PolicyDefinitions}";
var results = admldoc.Descendants(GpNamespace + "presentationTable").Descendants().Select(p => new dcPolicyPresentation(p));

A: Neither of the above fixes did the job; see my comments for details. I believe both spoon16 and Bruce Murdock are on the right track, since the namespace is definitely the issue. After further Googling I came across some code on this page that suggested a workaround: just strip the xmlns attribute from the original XML.
' Read raw XML
Dim RawXml As String = ReadFile("../kmlimport/ga.kml")

' HACK: Linq to XML choking on the namespace, just get rid of it
RawXml = RawXml.Replace("xmlns=""http://earth.google.com/kml/2.0""", "")

' Parse XML
Dim Kml As XDocument = XDocument.Parse(RawXml)

' Loop through placemarks
Dim Placemarks = From Placemark In Kml.<Document>.<Folder>.Descendants("Placemark")
For Each Placemark As XElement In Placemarks
    Dim Name As String = Placemark.<name>.Value
    ...
Next

If anyone can post working code that works with the namespace instead of nuking it, I'll gladly give them the answer.
{ "language": "en", "url": "https://stackoverflow.com/questions/151521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Textarea in IE7 disappears on mouse over. I have this big data-entry sort of page, a table kind of layout using divs. Each row has subrows which can be toggled open/closed. The toggling is triggered using CSS visibility settings. Each "cell" of the table has a little image in its corner; you click on the image, and a popup window opens that allows you to put notes on the entry. This popup window has a textarea and a set of checkboxes, along with a button (input type=submit, I think). The popup is an iframe nested inside a hidden div.

In IE7, once you've popped open this notes iframe and scrolled the page down, mousing over the popup's textarea makes it disappear and show the page content beneath it. The checkboxes also show the page below when you mouse over.

So, I've tried a few different fixes. z-index was what I was hoping could be used to fix this - no such luck. I might try replacing the textarea with a plain input type=text, but since the checkboxes also exhibit this bug, I suspect the one-line text input will also cause the bug.

A: The easiest way to trigger hasLayout (mentioned in another post) is to add zoom: 1. When debugging some of the dumbest IE 6/7 display issues I will sometimes just dump a temporary * { zoom: 1; } into my CSS and see if anything changes. If it does, I start selectively adding it to elements, starting with the element, the element's parent/children, etc. zoom is only supported by IE, so it's pretty "safe" to have in your document. It also saves you from having to do anything crazy like absolutely positioning elements, etc.

A: In addition to block elements, z-index works for all elements that have what IE calls hasLayout - read more.

A: If I remember right, z-index only works on block elements that are absolutely positioned. So try setting the position of the popup box to absolute and then try the z-index. If you need the popup to be in a certain position, set the wrapping element to position: relative.
I have encountered this issue before, and I believe I solved it in just the way I have described.

A: I've had the exact same problem with both input fields and textareas in IE7, but only if I gave them a width. I don't remember where I got it from, but I found this solution; it may not be very elegant, but it solved the issue. Just add:

filter: alpha(opacity=100)

to the CSS or style attribute of the troublesome fields.
{ "language": "en", "url": "https://stackoverflow.com/questions/151528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I detect that the client is scrolled to the top or bottom of a webpage? I'm looking for a cross-browser method of detecting that a client web browser is scrolled all the way to the bottom (or top) of the screen. Really, the top is fairly easy, as

scrY = window.pageYOffset || document.body.scrollTop || document.documentElement.scrollTop

is zero if you're at the top. The problem is that scrY seems to return the top of the scroll bar, and not the bottom, so instead of getting something equivalent to the height of the document (in pixels) I get what is presumably the height of the document less the size of the scroll bar. Is there an easy, cross-browser way to find out if the user has scrolled down to the bottom of the document/window? Most specifically, I understand general scroll-bar manipulation (setting it, moving it, etc.), but how can I get the delta of the bottom of the scrollbar's position relative to the bottom of the window/document?

A: http://www.softcomplex.com/docs/get_window_size_and_scrollbar_position.html
http://www.sitepoint.com/article/preserve-page-scroll-position/
http://codepunk.hardwar.org.uk/ajs02.htm

In order to ensure that an element is visible, you can use the .scrollIntoView method.

A: A sum-up of what works in FF 3.5:

function isTop() {
    return window.pageYOffset == 0;
}

function isBottom() {
    return window.pageYOffset >= window.scrollMaxY;
}
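window.scrollMaxY is Firefox-only, but the same "at the bottom" test can be expressed cross-browser from three measurements: the scroll offset, the viewport height, and the total content height. A minimal sketch (the helper function and tolerance value are my own additions for illustration, not from the answers above):

```javascript
// Pure helper: at the bottom when the lower edge of the visible region
// reaches the content height, give or take a small tolerance.
function atBottom(scrollTop, viewportHeight, contentHeight, tolerance) {
  return scrollTop + viewportHeight >= contentHeight - (tolerance || 0);
}

// Browser-side wrapper gathering the three measurements the old
// cross-browser way (pageYOffset with documentElement/body fallbacks).
function isScrolledToBottom() {
  var scrollTop = window.pageYOffset
    || document.documentElement.scrollTop
    || document.body.scrollTop || 0;
  var viewportHeight = document.documentElement.clientHeight;
  var contentHeight = Math.max(
    document.body.scrollHeight,
    document.documentElement.scrollHeight
  );
  return atBottom(scrollTop, viewportHeight, contentHeight, 2);
}
```

The small tolerance absorbs sub-pixel rounding in some browsers; set it to 0 for an exact comparison.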
{ "language": "en", "url": "https://stackoverflow.com/questions/151544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I get the FQDN of the current host in Ruby? I need to get the fully expanded hostname of the host that my Ruby script is running on. In Perl I've used Sys::Hostname::Long with good results. Google seems to suggest I should use Socket.hostname in Ruby, but that's returning just the nodename, not the full hostname.

A:

hostname = Socket.gethostbyname(Socket.gethostname).first

is not recommended and will only work if your reverse DNS resolution is properly set up. This Facter bug has a longer explanation if needed.

If you read the Facter code, you'll notice that they somewhat sidestep the issue altogether by saying:

fqdn = hostname + domainname

where:

hostname   = %x[hostname]
domainname = %x[hostname -f] # minus the first element

This is a reasonable approach that does not depend on the setup of DNS (which may be external to the box).

A: This seems to work:

hostname = Socket.gethostbyname(Socket.gethostname).first

A: Could be a tad simpler:

hostname = Socket.gethostname
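A pure-Ruby alternative to shelling out is to ask the resolver for the host's canonical name via Addrinfo with the AI_CANONNAME hint. This is a sketch, not from the answers above, and it still depends on the resolver (/etc/hosts or DNS) knowing the machine, so it falls back to the short hostname on failure:

```ruby
require 'socket'

# Resolve the local host's canonical (fully qualified) name, falling
# back to the bare nodename if the lookup fails or yields nothing.
def fqdn
  hostname = Socket.gethostname
  begin
    info = Addrinfo.getaddrinfo(
      hostname, nil, nil, :STREAM, nil, Socket::AI_CANONNAME
    ).first
    info.canonname || hostname
  rescue SocketError
    hostname   # resolver knows nothing about us; best we can do
  end
end

puts fqdn
```

On a box with working forward resolution this returns something like "myhost.example.com"; otherwise it degrades to the same value as Socket.gethostname rather than raising.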
{ "language": "en", "url": "https://stackoverflow.com/questions/151545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I execute a page-defined JavaScript function from a Firefox extension? I'm creating a Firefox extension for demo purposes. I want to call a specific JavaScript function in the document from the extension. I wrote this in my HTML document (not inside the extension, but a page that is loaded by Firefox):

document.funcToBeCalled = function() {
    // function body
};

Then, the extension will run this on some event:

var document = Application.activeWindow.activeTab.document;
document.funcToBeCalled();

However, it raises an error saying that funcToBeCalled is not defined. Note: I could get an element on the document by calling document.getElementById(id);

A: It is for security reasons that you have limited access to the content page from an extension. See XPCNativeWrapper and Safely accessing content DOM from chrome. If you control the page, the best way to do this is to set up an event listener in the page and dispatch an event from your extension (addEventListener in the page, dispatchEvent in the extension). Otherwise, see http://groups.google.com/group/mozilla.dev.extensions/msg/bdf1de5fb305d365

A:

document.wrappedJSObject.funcToBeCalled();

This is not secure and allows a malicious page to elevate its permissions to those of your extension... but it does do what you asked. Read up on the early Greasemonkey vulnerabilities for why this is a bad idea.

A: I have a much simpler way to do it. Suppose you have to call an xyz() function which is written on the page, and you have to call it from your plugin. Create a button (make it invisible, so it won't disturb your page). On onclick of that button, call this xyz() function:

<input type="button" id="testbutton" onclick="xyz()" />

Now in the plugin you have a document object for the page; suppose it's mainDoc. Where you want to call xyz(), just execute this line:

mainDoc.getElementById('testbutton').click();

It will call the xyz() function.
Good luck :)

A: You can do it, but you need to have control over the page and be able to raise the privilege level for the script. Mozilla Documentation gives an example - search for "Privilege" on the page.

A:

var pattern = "the url you want to block";

function onExecuted(result) {
    console.log(`We made it`);
}

function onError(error) {
    console.log(`Error: ${error}`);
}

function redirect(requestDetails) {
    var callbackName = 'callbackFunction'; // a function in content js
    var data = getDictForkey('a url');
    var funcStr = callbackName + '(' + data + ')';
    const scriptStr = 'var header = document.createElement(\'button\');\n' +
        ' header.setAttribute(\'onclick\',\'' + funcStr + '\');' +
        ' var t=document.createTextNode(\'\');\n' +
        ' header.appendChild(t);\n' +
        ' document.body.appendChild(header);' +
        ' header.style.visibility="hidden";' +
        ' header.click();';
    const executing = browser.tabs.executeScript({ code: scriptStr });
    executing.then(onExecuted, onError);
    return { cancel: true }
}

chrome.webRequest.onBeforeRequest.addListener(
    redirect,
    {urls: [pattern]},
    ["blocking"]
);

function getDictForkey(url) {
    xxxx
    return xxxx;
}
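The addEventListener/dispatchEvent approach recommended in the first answer can be sketched as follows. The event name "myExtensionEvent" and both function names are made up for illustration; the wiring is split into two functions only so the page side and the extension side are clearly separated:

```javascript
// Page side: register a listener that forwards to the page-defined
// function. In a real page this would run in the page's own script.
function installBridge(doc, pageFunction) {
  doc.addEventListener('myExtensionEvent', function () {
    pageFunction();
  }, false);
}

// Extension side: instead of calling the page function directly (which
// XPCNativeWrapper blocks), create and dispatch the custom event using
// the classic createEvent/initEvent API available in that era of Firefox.
function triggerBridge(doc) {
  var evt = doc.createEvent('Events');
  evt.initEvent('myExtensionEvent', true, false);
  doc.documentElement.dispatchEvent(evt);
}
```

Because only an event crosses the chrome/content boundary, the page function runs with the page's own privileges, avoiding the privilege-escalation risk of wrappedJSObject.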
{ "language": "en", "url": "https://stackoverflow.com/questions/151555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }