I have this MySQL query: ``` SELECT DAYOFYEAR(`date`) AS d, COUNT(*) FROM `orders` WHERE `hasPaid` > 0 GROUP BY d ORDER BY d ``` Which returns something like this: ``` d | COUNT(*) | 20 | 5 | 21 | 7 | 22 | 12 | 23 | 4 | ``` What I'd really like is another column on the end to show the running total: ``` d | COUNT(*) | ??? | 20 | 5 | 5 | 21 | 7 | 12 | 22 | 12 | 24 | 23 | 4 | 28 | ``` Is this possible?
> MySQL Warning: Setting user variables within expressions is deprecated and will be removed in a future release. Perhaps a simpler solution for you, and one that prevents the database from having to do a ton of queries. This executes just one query, then does a little math on the results in a single pass. ``` SET @runtot:=0; SELECT q1.d, q1.c, (@runtot := @runtot + q1.c) AS rt FROM (SELECT DAYOFYEAR(`date`) AS d, COUNT(*) AS c FROM `orders` WHERE `hasPaid` > 0 GROUP BY d ORDER BY d) AS q1 ``` This will give you an additional RT (running total) column. Don't miss the SET statement at the top that initializes the running-total variable first, or you will just get a column of NULL values.
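The running-total arithmetic that `@runtot` accumulates can be sanity-checked outside MySQL; here is a sketch in Python using the sample counts from the question (purely illustrative):

```python
from itertools import accumulate

# Daily counts from the question's sample output: day-of-year -> COUNT(*)
counts = {20: 5, 21: 7, 22: 12, 23: 4}

# The running total is just a cumulative sum over the days in order,
# which is exactly what @runtot builds up row by row.
days = sorted(counts)
running = list(accumulate(counts[d] for d in days))
print(list(zip(days, running)))  # [(20, 5), (21, 12), (22, 24), (23, 28)]
```

The last pair matches the `28` in the question's expected output.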
``` SELECT DAYOFYEAR(O.`date`) AS d, COUNT(*), (select count(*) from `orders` where DAYOFYEAR(`date`) <= d and `hasPaid` > 0) FROM `orders` as O WHERE O.`hasPaid` > 0 GROUP BY d ORDER BY d ``` This will require some syntactical tuning (I don't have MySQL to test it), but it shows you the idea. The subquery just has to go back and add up everything fresh that you already included in the outer query, and it has to do that for every row. Take a look at [this question](https://stackoverflow.com/questions/439138/running-total-by-grouped-records-in-table) for how to use joins to accomplish the same. To address concerns about performance degradation with growing data: since there are at most 366 days in a year, and I assume that you are not running this query against multiple years, the subquery will get evaluated up to 366 times. With proper indices on the date and the hasPaid flag, you'll be OK.
Calculate a running total in MySQL
[ "mysql", "sql" ]
Someone in this thread [How Much Traffic Can Shared Web Hosting Take?](https://stackoverflow.com/questions/685162/how-much-traffic-can-shared-web-hosting-take) stated that a $5/mo shared hosting account on Reliablesite.net can support 10,000 - 20,000 unique users/day and 100,000 - 200,000 pageviews/day. That seems awfully high for a $5/mo account. And someone else told me it's far less than that. What's your experience? I have a site based on Python, Django, MySQL/Postgresql. It doesn't have any video or other bandwidth-heavy elements, but the whole site is dynamic, and each page takes about 5 to 10 DB queries, 90% reads, 10% writes. Reliablesite.net is an ASP.NET hosting company. Any Python/LAMP hosting firm that can support 100-200,000 pageviews on a shared hosting account? If not, what kind of numbers am I looking at? Any suggestions for good hosting firms? Thanks
If your application is optimized, your shared hosting account can handle 10k unique visitors per day. You can find a great host for your needs at WHT ([WebHostingTalk](http://www.webhostingtalk.com)). One of the biggest hosting providers is GoDaddy (I RECOMMEND IT). Their shared hosting plan with Python starts from $7/month. With them you can host multiple websites on the same account without extra charge. <http://www.godaddy.com/gdshop/hosting/shared.asp?ci=9009> And also take a look at this offer: <http://mediatemple.net/webhosting/gs/features/> (mt) MediaTemple is not as big as GoDaddy but is also in good standing. Reliablesite.net is too small. So, the recommended options here are: * [GoDaddy](http://www.godaddy.com) - [Info](http://www.bizshark.com/company/godaddy.com) - [Alexa Rank: ~410](http://alexa.com/siteinfo/godaddy.com) * [HostGator](http://www.hostgator.com) - [Info](http://www.bizshark.com/company/hostgator.com) - [Alexa Rank: ~670](http://alexa.com/siteinfo/hostgator.com)
100,000 - 200,000 pageviews/day is on average about 2 pageviews/s; at most you'll get 10-20 pageviews/s during busy hours. That's not a lot to handle, especially if you have caching. Anyway, I'd go for a VPS. The problem with a shared server is that you can never know the usage patterns of the other people on it.
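The per-second figures above are simple arithmetic; a quick sketch (Python, purely illustrative):

```python
# 100,000 - 200,000 pageviews spread evenly over one day
seconds_per_day = 24 * 60 * 60  # 86400
low, high = 100_000, 200_000

# Average rate in pageviews per second for each end of the range
print(round(low / seconds_per_day, 1), round(high / seconds_per_day, 1))  # 1.2 2.3
```

So "about 2 pageviews/s" is the upper end of the average; the 10-20/s busy-hour figure assumes a roughly 5-10x peak-to-average ratio.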
How Much Traffic Can Shared Web Hosting (for a Python Django site) support?
[ "python", "hosting", "web-hosting", "shared-hosting" ]
I am trying to understand what the equivalents of servlets and applets are in .NET, but I don't have much experience in Java. I am thinking applets could be compared to the Silverlight stuff, meaning you code independently from the browser, but then it's not like that since (among other things) you can re-use an applet outside the browser. I need to demonstrate web technologies for a Java-based college course and I can use .NET *as long as I can demonstrate the same stuff*. Any help or ideas appreciated!
In .NET, HTTP handlers (.ashx) are probably the closest thing to a servlet. As for applets, there isn't a direct equivalent, but Silverlight is probably the closest (although it's closer to Flash/JavaFX).
I agree with Sandy, ASP.Net is best compared to a JSP (which is really nothing more than a specialized servlet). The .Net servlet analog appears to be the base class System.Web.UI.Page. [This sums up the comparison nicely (examples below blatantly plagiarized)](http://www.stardeveloper.com/articles/display.html?article=2003032802&page=1) ``` import javax.servlet.*; import javax.servlet.http.*; import java.io.*; public class SimpleServlet extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, java.io.IOException { response.setContentType("text/html"); PrintWriter out = response.getWriter(); out.println("<html><body>"); out.println("Simple Servlet Body"); out.println("</body></html>"); out.close(); } } //-----------------// using System; using System.Web; using System.Web.UI; public class SimpleServlet : System.Web.UI.Page { private void Page_Load(object sender, EventArgs args) { Response.ContentType = "text/html"; Response.Write("<html><body>"); Response.Write("Simple Servlet Body"); Response.Write("</body></html>"); } } ```
What are the correspondent of Servlet and Applet in .NET?
[ "java", ".net", "servlets", "applet" ]
I like to use objects which sync themselves using a private object which is locked when the property is changed. Is there any generic way to achieve this? My code always looks like this for each property (with one locker object): ``` private Object locker = new Object(); private int x; public int X { get {return x;} set{ lock(locker){x=value;} } } ``` Is there an easier way to achieve thread-safe modification of properties?
I keep finding myself surprised at what C# can do with the new C# 3.0 features. I was going to throw something out there that I thought would be so-so, but it turns out to be better than I hoped for. Here 'tis. Make an object to hold all the values (let's just call it a "value holder" ... not to be confused with any other usage of that term though). It has nothing but C# automatic properties. Then make another object which gives access to the value holder. Call the second object the "SynchronicityHandler" (sucky term, but will get the concept across). Let the SynchronicityHandler do the locking. It can now be generic. So here's what you get: ``` public class PersonValueHolder { public string FirstName { get; set; } public string LastName { get; set; } public bool HasCollegeDegree { get; set; } } public class SynchronicityHandler<T> { private object locker = new object(); private T valueHolder; public SynchronicityHandler(T theValueHolder) { this.valueHolder = theValueHolder; } public void WorkWithValueHolderSafely(Action<T> yourAction) { lock(locker) { yourAction(valueHolder); } } } ``` Here's an example of how you'd call it: ``` var myPerson = new SynchronicityHandler<PersonValueHolder>(new PersonValueHolder()); // Safely setting values myPerson.WorkWithValueHolderSafely( p => { p.FirstName = "Douglas"; p.LastName = "Adams"; p.HasCollegeDegree = true; }); // Safely getting values (this syntax could be improved with a little effort) string theFirstName = null; myPerson.WorkWithValueHolderSafely( p => theFirstName = p.FirstName); Console.WriteLine("Name is: " + theFirstName); // Outputs "Name is: Douglas". ``` Really, now that I think about it, the "value holder" doesn't have to be just automatic properties. It can be any object.
Your code shouldn't look like that - you should lock on the *get* as well. Otherwise threads fetching data may not get the most recent value, for complicated memory model reasons. But no, I don't know of any ways around this. You could do odd stuff with lambda expressions and extension methods, but it would be overkill IMO. You should also strongly consider whether you *really* want the individual properties to be thread-safe. It may be the right thing to do - but I find that usually I don't need most types to be thread-safe. Only a few types need to directly know about threading, and they then take out appropriate locks while they're using the object, rather than the object doing the locking itself. It does depend on what you're doing though. Another alternative for some cases is to use immutable types - that's nice when you can do it, although if you need one thread to see the changes made in another you'll need *some* sort of volatility or synchronization.
C# Synchronized object - duplicate code with writing accessors
[ "c#", "multithreading", "synchronization" ]
It seems to me that I should be able to do the following to detect a click event on a line on a google map: ``` var line = new GPolyline( ... ); map.addOverlay(line); GEvent.addListener(line, "click", function(latlng){ alert("clicked"); }); ``` The [api reference](http://code.google.com/apis/maps/documentation/reference.html#GPolyline.click) says this is available in version 2.88, which was released in 2007(!?), so I'm assuming that's what I'm using, but I don't know how to confirm that. I also tried setting the `{clickable:true}` option explicitly (it's supposed to be the default.) I've tested in FireFox 3 and Opera 9.6 so doubt it's browser specific. I'm also using jQuery on the page. I have plenty of code detecting clicks on markers that works fine, clicking on lines would be really nice, can anyone enlighten me?
I just did a quick test and the following code worked on my test page: ``` var polyline = new GPolyline([ new GLatLng(37.4419, -122.1419), new GLatLng(37.4519, -122.1519) ], "#ff0000", 10); map.addOverlay(polyline); GEvent.addListener(polyline, 'click', function() { alert('you clicked polyline'); }); ``` The way to tell what version of google maps you have is to look at the v= parameter of the google maps src url you have ``` http://maps.google.com/maps?file=api&v=2&key=MY_API_KEY ``` In this case I have "v=2", which means I am using the latest stable version 2 release, which supports clickable polylines (as of today, 2.101 is the most recent release). "v=2.x" means you are using an edge release. And any "v=2.N", where what follows the period (.) is a number, refers to a specific release.
Update: in version 3 of the API, you want to use `google.maps.event.addListener(object, event, function);` e.g. ``` google.maps.event.addListener(polyline, 'click', function() { alert('you clicked polyline'); }); ``` For more details, see the [events api](http://code.google.com/apis/maps/documentation/javascript/events.html)
How do I detect a click event on a google maps PolyLine overlay?
[ "javascript", "google-maps" ]
What is the best way to calculate the previous week's start and end date in C#? I.e. today 18 March would result in 9 March (Monday last week) and 15 March (Sunday last week). I have seen this done with DayOfWeek and a switch statement to work out an offset but was wondering whether there is a more elegant way.
You can skip the while loop and use ``` DateTime mondayOfLastWeek = date.AddDays( -(int)date.DayOfWeek - 6 ); ``` This assumes you're using Monday as the first day of the week.
``` DayOfWeek weekStart = DayOfWeek.Monday; // or Sunday, or whenever DateTime startingDate = DateTime.Today; while(startingDate.DayOfWeek != weekStart) startingDate = startingDate.AddDays(-1); DateTime previousWeekStart = startingDate.AddDays(-7); DateTime previousWeekEnd = startingDate.AddDays(-1); ``` Read: Backtrack one day at a time until we're at the start of this week, and then subtract seven to get to the start of last week.
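If you want to sanity-check the dates from the question (18 March 2009 was a Wednesday), the same backtrack-and-subtract logic can be sketched in Python; `previous_week` is just an illustrative helper name, not part of either answer:

```python
from datetime import date, timedelta

def previous_week(d):
    # Backtrack to Monday of the current week (weekday(): Monday == 0),
    # then step back one full week for the start, one day for the end.
    monday_this_week = d - timedelta(days=d.weekday())
    start = monday_this_week - timedelta(days=7)
    end = monday_this_week - timedelta(days=1)
    return start, end

start, end = previous_week(date(2009, 3, 18))
print(start, end)  # 2009-03-09 2009-03-15
```

This reproduces the 9 March / 15 March pair from the question for a Monday-first week.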
Calculate previous week's start and end date
[ "c#" ]
Is there any JavaScript library that makes a dictionary out of the query string, `ASP.NET` style? Something which can be used like: ``` var query = window.location.querystring["query"]? ``` Is *"query string"* called something else outside the `.NET` realm? Why isn't `location.search` broken into a *key/value collection*? **EDIT**: I have written my own function, but does any major JavaScript library do this?
Maybe <http://plugins.jquery.com/query-object/>? This is the fork of it <https://github.com/sousk/jquery.parsequery#readme>.
You can extract the key/value pairs from the [location.search](https://developer.mozilla.org/en/DOM/window.location) property, this property has the part of the URL that follows the ? symbol, including the ? symbol. ``` function getQueryString() { var result = {}, queryString = location.search.slice(1), re = /([^&=]+)=([^&]*)/g, m; while (m = re.exec(queryString)) { result[decodeURIComponent(m[1])] = decodeURIComponent(m[2]); } return result; } // ... var myParam = getQueryString()["myParam"]; ```
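For comparison outside the browser, the same key/value extraction can be cross-checked with Python's standard library (purely illustrative; the JavaScript above is what you'd use on the page, and the example URL is made up):

```python
from urllib.parse import urlsplit, parse_qs

url = "http://example.com/page?query=hello&myParam=world"

# parse_qs maps each key to a list of values, since a key may repeat
params = parse_qs(urlsplit(url).query)
print(params["myParam"][0])  # world
```

The list-valued results are the main difference from the regex version above, which keeps only the last value per key.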
JavaScript query string
[ "javascript", "query-string" ]
I need to *temporarily* allow cross-domain XMLHttpRequest. Changing a Firefox security setting seems to be the way to go. But I've tried with [this](http://blog.dirolf.com/2007/06/enabling-cross-domain-ajax-in-firefox.html) and [this](http://www.zachleat.com/web/2007/08/30/cross-domain-xhr-with-firefox/) but they didn't work. Has anyone been able to configure this before? Thanks.
For modern browsers, you may try the following approach: <https://developer.mozilla.org/en/HTTP_access_control> In short, you need to add the following into the `SERVER` response header (the following allows access *from* `foo.example`): ``` Access-Control-Allow-Origin: http://foo.example Access-Control-Allow-Methods: POST, GET, OPTIONS Access-Control-Allow-Headers: X-PINGOTHER Access-Control-Max-Age: 1728000 ``` Note that `X-PINGOTHER` is a custom header that is inserted by JavaScript, and will differ from site to site. If you want any site to be able to access your server via Ajax, use `*` instead. --- Edit: When I first answered the question back in **2009**, I actually hit the same problem, and I worked around it using the server-side config. There was no plugin for FF or Chrome back then. However, now we do have alternatives using browser-side plugins; please check the answer from `tsds`.
If you just don't want to waste your time on cross-domain issues during development and testing of your app you can use addon [Force CORS](https://addons.mozilla.org/en-US/firefox/addon/forcecors/) for FF. UPDATE: It seems that this addon no longer exists. But there is another option - this [Chrome extension](https://chrome.google.com/webstore/detail/allow-control-allow-origi/nlfbmbojpeacfghkpbjhddihlkkiljbi)
Firefox setting to enable cross domain Ajax request
[ "javascript", "http", "xmlhttprequest" ]
I can only assume it's an infinite loop. Can I leave out any of the three expressions in a for loop? Is there a default for each when omitted?
It is indeed an infinite loop. Under the hood the compiler/jitter will optimize this to (effectively) a simple JMP operation. It's also effectively the same as: ``` while (true) { } ``` (except that this is also optimized away, since the (true) part of a while expression usually requires some sort of comparison, however in this case, there's nothing to compare. Just keep on looping!)
Yes, it's an infinite loop. Examples: `for (; ;) { }` (aka: The Crab) `while (true) { }` `do { } while (true)`
What does a C# for loop do when all the expressions are missing, e.g. for(;;) {}?
[ "c#", ".net", "syntax", "for-loop" ]
We have a project which uses gcc and makefiles. The project also consists of one big subproject (an SDK) and a lot of relatively small subprojects which use that SDK and some shared framework. We use precompiled headers, but that only helps make re-compilation faster. Are there any known techniques and tools to help with build-time optimization? Or maybe you know some articles/resources about this or related topics?
You can tackle the problem from two sides: refactor the code to reduce the complexity the compiler is seeing, or speed up the compiler execution. Without touching the code, you can add more compilation power. Use ccache to avoid recompiling files you have already compiled and distcc to distribute the build among more machines. Use `make -jN`, where N is the number of cores + 1 if you compile locally, or a bigger number for distributed builds. That flag will run more than one compiler in parallel. Refactoring the code: prefer forward declarations to includes (simple). Decouple as much as you can to avoid dependencies (use the PIMPL idiom). Template instantiation is expensive; templates are recompiled in every compilation unit that uses them. If you can, refactor your templates so as to forward declare them and then instantiate them in only one compilation unit.
The best I can think of with `make` is the `-j` option. This tells `make` to run as many jobs as possible in parallel: `make -j` If you want to limit the number of concurrent jobs to *n* you can use: `make -j` *n* --- Make sure the dependencies are correct so `make` doesn't run jobs it doesn't have to. --- Another thing to take into account is optimizations that `gcc` does with the `-O` switch. You can specify various levels of optimization. The higher the optimization, the longer the compile and link times. A project I work with takes 2 minutes to link with `-O3`, and half a minute with `-O1`. You should make sure you're not optimizing more than you need to. You could build without optimization for development builds and with optimization for deployment builds. --- Compiling with debug info (`gcc -g`) will probably increase the size of your executable and may impact your build time. If you don't need it, try removing it to see if it affects you. --- The type of linking (static vs. dynamic) should make a difference. As far as I understand static linking takes longer (though I may be wrong here). You should see if this affects your build.
GCC/Make Build Time Optimizations
[ "c++", "optimization", "gcc", "makefile", "build-time" ]
I want to run a function that has 2 different outcomes, but I want each outcome to be truly 50%. I am assuming rand(0,1) is the way to go, but I am curious if that could possibly favor one over the other. What is the best way to get a 50/50 outcome? Thanks. EDIT: thanks guys I don't want it to be random though, I want the outcome to be 101010101 not 111001101. Maybe I should just update a data-base with the last value output and then return the opposite? EDIT2: OK I am sorry my last edit was misleading. I am only calling the function once per user and assigning that value as a cookie to the user. I want each visiting user to receive a 1 or a 0 in the order 1010101.
Depending on the specific context in which you will use this number (e.g. a per-user basis, the lifetime of the application, etc.) you can store it in a number of locations: `$_SESSION`, a database value, or, if the scope only covers the current page, you can save it directly in that page's code. An easy way to toggle the value is: ``` $val = 1 - $val; ``` For a database call: ``` UPDATE YourTable SET `next_value` = 1 - `next_value` ``` etc...
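The `1 - $val` toggle is language-agnostic; here is a quick sketch of the alternation it produces (Python, purely illustrative):

```python
# Start from 0 so the first "visitor" gets a 1, matching the
# 1,0,1,0,... ordering the question asks for.
val = 0
sequence = []
for _ in range(6):
    val = 1 - val  # flip between 0 and 1 on each call
    sequence.append(val)
print(sequence)  # [1, 0, 1, 0, 1, 0]
```

In the real application the current `val` would live in the database (or session) between requests, as the answer describes.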
In PHP, [`mt_rand()`](http://is.php.net/manual/en/function.mt-rand.php) is a better random number generator: ``` mt_rand(0,1); ``` is enough and should generate a pretty good 50/50 value. **Quote about mt_rand:** > Many random number generators of older libcs have dubious or unknown characteristics and are slow. By default, PHP uses the libc random number generator with the rand() function. The mt_rand() function is a drop-in replacement for this. > > It uses a random number generator with known characteristics using the » Mersenne Twister, which will produce random numbers four times faster than what the average libc rand() provides. Bill had a good link with a visual example. I do not know which PHP version the user there had, but since he included the code I have hosted it on my server (Linux with PHP 5.1.6): * [rand() Visual Example](http://cznp.com/rand/?rand) * [mt_rand() Visual Example](http://cznp.com/rand/?mt_rand)
PHP rand() ...get true 50/50 results?
[ "php" ]
``` BASE_URL = 'http://foobar.com?foo=%s' variable = 'bar' final_url = BASE_URL % (variable) ``` I get this `'http://foobar.com?foo=bar'` # It ignores the inside string. But i wanted something like this `'http://foobar.com?foo='bar''` Thanks for the answer. Can you help me out with almost the same problem: ``` lst = ['foo', 'bar', 'foo bar'] [str(l) for l in lst if ' ' in l] ``` I get `['foo bar']` but i wanted it like `[''foo bar'']` Thanks in advance.
If you're working with URL parameters, it's probably safer to use urllib.urlencode: ``` import urllib BASE_URL = 'http://foobar.com/?%s' print BASE_URL % urllib.urlencode({ 'foo': 'bar', }) ``` Regarding the quotes: Why do you explicitly want them? Normally your HTTP-wrapper would handle all that for you. Regarding your 2nd question: If you absolutely also want to have the quotes in there, you still have to either escape them when appending the contained string, or (probably the safer way of doing it) would be using repr(...) ``` lst = ['foo', 'bar', 'foo bar'] lst2 = [] for l in lst: if ' ' in l: lst2.append(repr(l)) ```
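The `repr(...)` call is what adds the literal quotes around the string; a minimal check:

```python
lst = ['foo', 'bar', 'foo bar']

# repr() wraps each matching string in quote characters,
# which is exactly the "'foo bar'" form the question asks for.
quoted = [repr(l) for l in lst if ' ' in l]
print(quoted)  # ["'foo bar'"]
```

Note that `repr` also escapes any embedded quotes, which manual quote concatenation would not.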
Change your `BASE_URL` to either ``` BASE_URL = "http://foobar.com?foo='%s'" ``` or ``` BASE_URL = 'http://foobar.com?foo=\'%s\'' ```
String inside a string
[ "python", "string" ]
I've lately been thinking about the things I'm returning from methods, and I noticed that there are 4 different things I return when the method fails. What bothers me about it is that my code is not very consistent in this regard, so I wanted to ask about your "best practices". So let's imagine a method that takes Foo and returns a list of Bar: ``` public IList<Bar> Method(Foo something); ``` Or to keep it more general: ``` public IBar Method(IFoo something); ``` The question is what you return on what kind of failure. The options would be: 1. an empty return value like: new List<Bar>(); or: new EmptyBar(); 2. null 3. throw an exception 4. a special list value indicating failure like: new List<Bar>{ new FailureBar() } I really hate option 4, so I'm mostly interested to hear when you use the other 3 options and why.
I'd choose between an empty list and an exception depending on the nature of the failure. E.g. If your database failed to connect - exception. If your query didn't return results - empty list.
It depends on what you mean by 'failure'. If it is a failure in the sense that something unexpected happened, then I'd throw an exception. Maybe an argument exception for when the parameter was incorrect, an IOException when you couldn't read from a file, etcetera. If the 'failure' is that no items could be found for the given parameter value, then I'd return an empty List. In case you'd return an object that is not a collection, I'd return null. I never return special result codes, like -1 on error. I really don't like it. People tend to forget about codes, they change over time, and you end up with badly maintainable if statements that check for these result codes, etcetera.
What to return from a failed method and when to throw?
[ "c#", ".net" ]
Given a class `C` in Python, how can I determine which file the class was defined in? I need something that can work from either the class `C`, or from an instance of `C`. I am doing this because I am generally a fan of putting files that belong together in the same folder. I want to create a class that uses a Django template to render itself as HTML. The base implementation should infer the filename for the template based on the filename that the class is defined in. Say I put a class `LocationArtifact` in the file "base/artifacts.py", then I want the default behaviour to be that the template name is "base/LocationArtifact.html".
You can use the [inspect](http://docs.python.org/library/inspect.html#inspect.getfile) module, like this: ``` import inspect inspect.getfile(C) ``` If you are starting from an instance `c` rather than the class itself, use `inspect.getfile(c.__class__)`. (Passing `C.__class__` for a class would hand `inspect.getfile` the metaclass `type`, which is not what you want.)
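A quick sanity check of `inspect.getfile` against a standard-library class (assuming a CPython install where `json` is a plain `.py` package; note that you pass the class object itself):

```python
import inspect
import json

# For a class, pass the class object; for an instance, use type(obj).
path = inspect.getfile(json.JSONDecoder)
print(path)  # e.g. .../lib/python3.x/json/decoder.py
```

From there, a template name like the one the question wants can be built with `os.path` on the returned path plus the class's `__name__`.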
try: ``` import sys, os os.path.abspath(sys.modules[LocationArtifact.__module__].__file__) ```
How do I get the filepath for a class in Python?
[ "python", "class", "introspection" ]
I am not a database person, exactly, and most of my db work has been with MySQL, so forgive me if something in this question is incredibly naïve. I need to delete 5.5 million rows from an Oracle table that has about 100 million rows. I have all the IDs of the rows I need to delete in a temporary table. If it were a just a few thousand rows, I'd do this: ``` DELETE FROM table_name WHERE id IN (SELECT id FROM temp_table); COMMIT; ``` Is there anything I need to be aware of, and/or do differently, because it's 5.5 million rows? I thought about doing a loop, something like this: ``` DECLARE vCT NUMBER(38) := 0; BEGIN FOR t IN (SELECT id FROM temp_table) LOOP DELETE FROM table_name WHERE id = t.id; vCT := vCT + 1; IF MOD(vCT,200000) = 0 THEN COMMIT; END IF; END LOOP; COMMIT; END; ``` First of all - is this doing what I think it is - batching commits of 200,000 at a time? Assuming it is, I'm still not sure if it's better to generate 5.5 million SQL statements, and commit in batches of 200,000, or to have one SQL statement and commit all at once. Ideas? Best practices? **EDIT**: I ran the first option, the single delete statement, and it only took 2 hours to complete in development. Based on that, it's queued to be run in production.
The first approach is better, because you give the query optimizer a clear picture of what you are trying to do, instead of trying to hide it. The database engine might take a different approach to deleting 5.5m (or 5.5% of the table) internally than to deleting 200k (or 0.2%). Here is also an [article](http://www.devx.com/dbzone/10MinuteSolution/22191) about massive DELETE in Oracle which you might want to read.
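The single-statement form is easy to sanity-check on any SQL engine. Here is a small sketch using Python's built-in sqlite3, purely to illustrate the `DELETE ... WHERE id IN (SELECT ...)` semantics (not Oracle performance; the table names follow the question's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_name (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE temp_table (id INTEGER PRIMARY KEY);
""")
con.executemany("INSERT INTO table_name (id, payload) VALUES (?, ?)",
                [(i, "row") for i in range(1, 101)])
# Mark every third row for deletion: 3, 6, ..., 99 (33 ids in total)
con.executemany("INSERT INTO temp_table (id) VALUES (?)",
                [(i,) for i in range(3, 101, 3)])

# One statement, one commit - the shape recommended above
con.execute("DELETE FROM table_name WHERE id IN (SELECT id FROM temp_table)")
con.commit()
remaining = con.execute("SELECT COUNT(*) FROM table_name").fetchone()[0]
print(remaining)  # 67  (100 - 33)
```

The point of the single statement is that the optimizer sees the whole job at once, which is what the paragraph above argues for.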
The fastest way is to create a new one with `CREATE TABLE AS SELECT` using the `NOLOGGING` option. I mean: ``` ALTER TABLE table_to_delete RENAME TO tmp; CREATE TABLE table_to_delete NOLOGGING AS SELECT .... ; ``` Of course you have to recreate constraints with novalidate, indexes with nologging, grants, ... but it is very, very fast. If you have the trouble in production, you can do the following: ``` ALTER TABLE table_to_delete RENAME to tmp; CREATE VIEW table_to_delete AS SELECT * FROM tmp; -- Until there can be instantly CREATE TABLE new_table NOLOGGING AS SELECT .... FROM tmp WHERE ...; <create indexes with nologging> <create constraints with novalidate> <create other things...> -- From here ... DROP VIEW table_to_delete; ALTER TABLE new_table RENAME TO table_to_delete; -- To here, also instantly ``` You have to take care of: * Stored procedures can be invalidated, but they will be recompiled the second time they are called. You have to test it. * `NOLOGGING` means that **minimal** redo is generated. If you have the DBA role, run an `ALTER SYSTEM CHECKPOINT` to ensure no data is lost if the instance crashes. * For `NOLOGGING` the tablespace also has to be in `NOLOGGING`. Another option better than creating millions of inserts is: ``` -- Create table with ids DELETE FROM table_to_delete WHERE ID in (SELECT ID FROM table_with_ids WHERE ROWNUM < 100000); DELETE FROM table_with_ids WHERE ROWNUM < 100000; COMMIT; -- Run this 50 times ;-) ``` The PL/SQL choice is not advisable because it can produce the *Snapshot too old* message, since you are committing (and closing the transaction) with an open cursor (the looped one) that you want to continue using. Oracle allows it, but it's not a good practice. UPDATE: Why can I ensure the last PL/SQL block is going to work? Because I suppose that: * No one else is using this temporary table for any reason (DBA or jobs gathering statistics, DBA tasks like move, inserting records, and so on). That can be ensured because it is an auxiliary table used only for this.
* Then, with the last assertion, the query is going to be executed with *exactly* the same plan and is going to return the rows in the same order.
Deleting a LOT of data in Oracle
[ "sql", "oracle", "plsql" ]
How do I disable the context menu in the IE `WebBrowser` control and instead perform custom handling of the right-click event in C#?
``` WebBrowser browser; browser.Document.ContextMenuShowing += new HtmlElementEventHandler(MyCustomContextMenuMethod); ```
Actually: ``` WebBrowser browser; browser.IsWebBrowserContextMenuEnabled = false; ``` This pretty much tells the WebBrowser that the right-click context menu is not welcome.
Disable context menu in Internet Explorer control
[ "c#", ".net", "events", "webbrowser-control", "contextmenu" ]
I basically have this problem: right now, we have a system where it gets a string as input, and it basically says ACTION:. For each of the actions there is an automatically generated function(Rational Rose GRRR), such as ``` bouncer_comm.chatMessage("data goes here").sendAt(msg->sapIndex0()); bouncer_comm.askforname().sendAt(msg->sapindex0()); ``` `bouncer_comm` returns an `RTOutSignal`, I can't create them manually because of the bizarre structure rose uses. Right now, my only option is to create a hundred or so if statements, where I do: ``` if(action == "CHAT") bouncer_comm.chatMessage("data goes here").sendAt(msg->sapIndex0()); ``` Which is realllllyy annoying. What would be the best way to avoid this? I've looked at / tried countless things, this is a really old version of rational rose (pre 2k) and yeah. If anyone has any ideas that would be amazing.
I like @cobbal's idea of the function pointer hash above, but you could replace this conditional logic with polymorphism. see: <http://c2.com/cgi/wiki?ReplaceConditionalWithPolymorphism>
A hash storing function pointers could work well here
Best way to avoid a thousand if statements?
[ "c++", "boost", "rational-rose" ]
I've seen some code, as well as some errors generated from my compiler, that have a '`**`' token before the variable (e.g. `**variablename` unreferenced -- or something, I can't recall exactly offhand). I'm fairly certain this is related to pointers; if I had to guess, it looks like it's trying to dereference twice. '`**`' is fairly ungoogleable. Can someone point me to a good website/documentation or would someone care to explain it here? Thanks. Great responses. If I can add, what would be some situations where it is useful to have a pointer to a pointer? Shouldn't you just be using the original pointer instead of creating yet another pointer to the original pointer?
`**` is not actually only pointer to pointer (as in declaration), but is also the dereference of a dereference (in a statement). It is used often in C which does not have the & notation for references, e.g. to update a return value which is a pointer type: ``` int alloc_foo(struct foo **foo_ret) { *foo_ret = malloc(sizeof(struct foo)); return 1; /* to indicate success; return value in foo_ret */ } ```
You may recognize the signature for main(): ``` int main(int argc, char* argv[]) ``` The following is equivalent: ``` int main(int argc, char** argv) ``` In this case, argv is a pointer to an array of char\*. In C, the index operator [] is just another way of performing pointer arithmetic. For example, ``` foo[i] ``` produces the same code as ``` *(foo + i) ```
What is ** in C++?
[ "c++", "pointers" ]
I recently wrote a quick-and-dirty proof-of-concept proxy server in C# as part of an effort to get a Java web application to communicate with a legacy VB6 application residing on another server. It's ridiculously simple: The proxy server and clients both use the same message format; in the code I use a `ProxyMessage` class to represent both requests from clients and responses generated by the server: ``` public class ProxyMessage { int Length; // message length (not including the length bytes themselves) string Body; // an XML string containing a request/response // writes this message instance in the proper network format to stream // (helper for response messages) WriteToStream(Stream stream) { ... } } ``` The messages are as simple as could be: the length of the body + the message body. I have a separate `ProxyClient` class that represents a connection to a client. It handles all the interaction between the proxy and a single client. What I'm wondering is: are there design patterns or best practices for simplifying the boilerplate code associated with asynchronous socket programming? For example, you need to take some care to manage the read buffer so that you don't accidentally lose bytes, and you need to keep track of how far along you are in the processing of the current message. In my current code, I do all of this work in my callback function for `TcpClient.BeginRead`, and manage the state of the buffer and the current message processing state with the help of a few instance variables. The code for my callback function that I'm passing to `BeginRead` is below, along with the relevant instance variables for context. The code seems to work fine "as-is", but I'm wondering if it can be refactored a bit to make it clearer (or maybe it already is?). ``` private enum BufferStates { GetMessageLength, GetMessageBody } // The read buffer. 
Initially 4 bytes because we are initially // waiting to receive the message length (a 32-bit int) from the client // on first connecting. By constraining the buffer length to exactly 4 bytes, // we make the buffer management a bit simpler, because // we don't have to worry about cases where the buffer might contain // the message length plus a few bytes of the message body. // Additional bytes will simply be buffered by the OS until we request them. byte[] _buffer = new byte[4]; // A count of how many bytes read so far in a particular BufferState. int _totalBytesRead = 0; // The state of our buffer processing. Initially, we want // to read in the message length, as it's the first thing // a client will send BufferStates _bufferState = BufferStates.GetMessageLength; // ...ADDITIONAL CODE OMITTED FOR BREVITY... // This is called every time we receive data from // the client. private void ReadCallback(IAsyncResult ar) { try { int bytesRead = _tcpClient.GetStream().EndRead(ar); if (bytesRead == 0) { // No more data/socket was closed. this.Dispose(); return; } // The state passed to BeginRead is used to hold a ProxyMessage // instance that we use to build up the message // as it arrives. 
ProxyMessage message = (ProxyMessage)ar.AsyncState; if(message == null) message = new ProxyMessage(); switch (_bufferState) { case BufferStates.GetMessageLength: _totalBytesRead += bytesRead; // if we have the message length (a 32-bit int) // read it in from the buffer, grow the buffer // to fit the incoming message, and change // state so that the next read will start appending // bytes to the message body if (_totalBytesRead == 4) { int length = BitConverter.ToInt32(_buffer, 0); message.Length = length; _totalBytesRead = 0; _buffer = new byte[message.Length]; _bufferState = BufferStates.GetMessageBody; } break; case BufferStates.GetMessageBody: string bodySegment = Encoding.ASCII.GetString(_buffer, _totalBytesRead, bytesRead); _totalBytesRead += bytesRead; message.Body += bodySegment; if (_totalBytesRead >= message.Length) { // Got a complete message. // Notify anyone interested. // Pass a response ProxyMessage object // with the event so that receivers of OnReceiveMessage // can send a response back to the client after processing // the request. ProxyMessage response = new ProxyMessage(); OnReceiveMessage(this, new ProxyMessageEventArgs(message, response)); // Send the response to the client response.WriteToStream(_tcpClient.GetStream()); // Re-initialize our state so that we're // ready to receive additional requests... message = new ProxyMessage(); _totalBytesRead = 0; _buffer = new byte[4]; //message length is 32-bit int (4 bytes) _bufferState = BufferStates.GetMessageLength; } break; } // Wait for more data... _tcpClient.GetStream().BeginRead(_buffer, 0, _buffer.Length, this.ReadCallback, message); } catch { // do nothing } } ``` So far, my only real thought is to extract the buffer-related stuff into a separate `MessageBuffer` class and simply have my read callback append new bytes to it as they arrive. 
The `MessageBuffer` would then worry about things like the current `BufferState` and fire an event when it received a complete message, which the `ProxyClient` could then propagate further up to the main proxy server code, where the request can be processed.
I've had to overcome similar problems. Here's my solution (modified to fit your own example). We create a wrapper around `Stream` (a superclass of `NetworkStream`, which is what `TcpClient.GetStream()` returns). It monitors reads. When some data is read, it is buffered. When we receive a length indicator (4 bytes) we check if we have a full message (4 bytes + message body length). When we do, we raise a `MessageReceived` event with the message body, and remove the message from the buffer. This technique automatically handles fragmented messages and multiple-messages-per-packet situations. ``` public class MessageStream : IMessageStream, IDisposable { public MessageStream(Stream stream) { if(stream == null) throw new ArgumentNullException("stream", "Stream must not be null"); if(!stream.CanWrite || !stream.CanRead) throw new ArgumentException("Stream must be readable and writable", "stream"); this.Stream = stream; this.readBuffer = new byte[512]; messageBuffer = new List<byte>(); stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null); } // These belong to the ReadCallback thread only. private byte[] readBuffer; private List<byte> messageBuffer; private void ReadCallback(IAsyncResult result) { int bytesRead = Stream.EndRead(result); messageBuffer.AddRange(readBuffer.Take(bytesRead)); if(messageBuffer.Count >= 4) { int length = BitConverter.ToInt32(messageBuffer.Take(4).ToArray(), 0); // 4 bytes per int32 // Keep buffering until we get a full message. if(messageBuffer.Count >= length + 4) { // Note: LINQ's Skip() does not mutate the list, so RemoveRange() // is what actually discards the consumed bytes. messageBuffer.RemoveRange(0, 4); OnMessageReceived(new MessageEventArgs(messageBuffer.Take(length))); messageBuffer.RemoveRange(0, length); } } // FIXME below is kinda hacky (I don't know the proper way of doing things...) // Don't bother reading again. We don't have stream access. if(disposed) return; try { Stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null); } catch(ObjectDisposedException) { // DO NOTHING // Ends read loop. 
} } public Stream Stream { get; private set; } public event EventHandler<MessageEventArgs> MessageReceived; protected virtual void OnMessageReceived(MessageEventArgs e) { var messageReceived = MessageReceived; if(messageReceived != null) messageReceived(this, e); } public virtual void SendMessage(Message message) { // Have fun ... } // Dispose stuff here } ```
There's nothing wrong with the way you've done it. For me, though, I like to separate the receiving of the data from the processing of it, which is what you seem to be thinking with your proposed MessageBuffer class. I have discussed that in detail [here](https://stackoverflow.com/questions/582550/c-begin-endreceive-how-do-i-read-large-data/599448#599448).
Are there well-known patterns for asynchronous network code in C#?
[ "c#", "sockets", "c#-2.0" ]
On an Eclipse RCP application, I'm building an Editor in which some fields are mandatory. If the user saves the editor with these fields not filled, what is the best way/practice to handle this? (My first guess is to show an error dialog if these fields are empty in the doSave() method, but I'm not sure it's the "eclipse" way to deal with this kind of problem.)
I agree with VonC and would disable the "Save" button as long as the user hasn't filled all the important fields. Showing a warning about which fields the user still has to fill would help a lot. EDIT: Create a component which adds itself as a change listener. When someone changes the component, you can check whether the input is correct. Create a window with all the self-checking components and add the window as a listener to all the components. When somebody changes a component you can directly check whether the input is valid and whether the user can step to the next page or save the page. In RCP (for example, FieldEditorPreferencePage) a lot of components have the doSave() and isValid() methods. In isValid() you can check all the components you can find in the window, and in doSave() you should check the isValid() state, when it is not done automatically, and do some additional save actions.
If you consider some dialog box like "Create a New Java Project", * you simply cannot do any action (like "Next" or "Finish") until you fill the one mandatory field ("Project name"). So it is different from checking on the `doSave()` event. * If that field is invalid (like if the project name already exists), a warning is displayed in the zone at the top. * If the field is correctly filled, *then* you can save. (Note: I know this is not an Editor, but it can give you an idea about the mandatory fields management in eclipse) [alt text http://ds.informatik.uni-marburg.de/MAGE/gdt/images/tutorial/NewJavaProject.png](http://ds.informatik.uni-marburg.de/MAGE/gdt/images/tutorial/NewJavaProject.png)
In an Eclipse RCP editor, what is the best way to handle missing mandatory fields?
[ "java", "eclipse-rcp" ]
Can anybody tell me the C# equivalent for this C code? ``` static const value_string message_id[] = { {0x0000, "Foo"}, {0x0001, "Bar"}, {0x0002, "Fubar"}, ... ... ... } ```
``` private static readonly IDictionary<int, string> message_id = new Dictionary<int, string> { { 0x0000, "Foo" }, { 0x0001, "Bar" } }; ```
``` public enum MessageID { Foo = 0, Bar = 1, Fubar = 2 } ``` Then you can get the "string" version using `Enum.Format()` or `ToString()`.
C# equivalent for C code needed
[ "c#", "arrays", "initialization" ]
I just read in the C++ standard that `std::for_each` is a non-modifying sequence operation, along with `find`, `search` and so on. Does that mean that the function applied to each element should not modify them? Why is that? What could possibly go wrong? Here is a sample code, where the sequence is modified. Can you see anything wrong with it? ``` void foo(int & i) { i = 12; } int main() { std::vector<int> v; v.push_back(0); std::for_each(v.begin(), v.end(), foo); // v now contains 12 } ``` I suspect this to be just an interpretation issue, but I wanted to have your opinion about that. PS: I know I could use `std::transform` instead of `for_each`, but that's not the point.
See [this defect report](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#475), where they say > The LWG believes that nothing in the standard prohibits function objects that modify the sequence elements. The problem is that for\_each is in a section entitled "nonmutating algorithms", and the title may be confusing. A nonnormative note should clarify that. But also note [this one](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-active.html#969). They seem to call it "non-modifying" because for\_each itself does not explicitly modify the elements of the sequence.
Quite simply, you can't make a change that could modify the structure of the container. That's because in the general case, modifying a container can invalidate the iterators being used. You can modify the element as long as it doesn't change the container's structure (such as the order of elements in the container). [addition] --- Note that there seems to be some confusion about `for_each` being a 'non-modifying' algorithm. This confusing situation is summed up by Stroustrup: the errata for the 4th printing of "The C++ Programming Language, 3rd Ed." (*CPL*) have this to say about whether `for_each` can modify the elements of a sequence (<http://www.research.att.com/~bs/3rd_printing5.html>): > "The `for_each()` algorithm is classified as nonmodifying because it doesn't explicitly modify a sequence. However, if applied to a non-const sequence `for_each()` may change the elements of the sequence. For an example, see the use of `negate()` in 11.9." (recent standards resolution). The CPL originally indicated that the function or function object passed to `for_each` was not permitted to modify the element passed to it. However, the CPL was written and originally published before the standard was finalized, and apparently this restriction on `for_each()` was removed before it was finalized. See also: * <http://www.angelikalanger.com/Articles/Cuj/03.ForeachTransform/ForEachTransform.html> * the C++ LWG defect report mentioned in [litb's answer](https://stackoverflow.com/questions/662845/why-is-stdforeach-a-non-modifying-sequence-operation/662922#662922) (<http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#475>)
Why is std::for_each a non-modifying sequence operation?
[ "c++", "stl", "standards", "foreach" ]
I have worked with JFreeChart in a few Java projects. I wonder if there is something similar I can use in Delphi for Win32, or if you know about a nice free chart library for Delphi. I'm working with Delphi 2006. I know there is already TChart but it looks a bit "old". My research brought me to TMS Advanced Charts, but it's too expensive for me. Thanks.
Maybe you can find something useful here: <http://www.torry.net/pages.php?id=195>
You can always build a Delphi wrapper for the [Google Chart API](http://code.google.com/apis/chart/). Many such wrappers exist in [other languages](http://groups.google.com/group/google-chart-api/web/useful-links-to-api-libraries).
JFreeChart-like in Delphi
[ "java", "delphi", "charts", "jfreechart" ]
I came across the following arithmetic problem, but the result is different from what a normal maths operation would give. Why is that? ``` double d1 = 1.000001; double d2 = 0.000001; Console.WriteLine((d1-d2)==1.0); ```
I presume you found the question on Jon Skeet's Brainteasers page? The answers are listed and explained [here](http://www.yoda.arachsys.com/csharp/teasers-answers.html) on the same website. For a matter of reference, here's the answer copied from that page. --- **3) Silly arithmetic** Computers are meant to be good at arithmetic, aren't they? Why does this print "False"? ``` double d1 = 1.000001; double d2 = 0.000001; Console.WriteLine((d1-d2)==1.0); ``` Answer: All the values here are stored as binary floating point. While 1.0 can be stored exactly, 1.000001 is actually stored as 1.0000009999999999177333620536956004798412322998046875, and 0.000001 is actually stored as 0.000000999999999999999954748111825886258685613938723690807819366455078125. The difference between them isn't exactly 1.0, and in fact the difference can't be stored exactly either. ---
From the MSDN entry for [Double.Equals](https://stackoverflow.com/questions/485175/c-net-is-it-safe-to-check-floating-point-values-for-equality-to-0/485741): > **Precision in Comparisons** > > The Equals method should be used with > caution, because two apparently > equivalent values can be unequal due > to the differing precision of the two > values. The following example reports > that the Double value .3333 and the > Double returned by dividing 1 by 3 are > unequal. > > ... > > Rather than comparing for equality, > one recommended technique involves > defining an acceptable margin of > difference between two values (such as > .01% of one of the values). If the > absolute value of the difference > between the two values is less than or > equal to that margin, the difference > is likely to be due to differences in > precision and, therefore, the values > are likely to be equal. The following > example uses this technique to compare > .33333 and 1/3, the two Double values > that the previous code example found > to be unequal. If you need to do a lot of "equality" comparisons it might be a good idea to write a little helper function or extension method in .NET 3.5 for comparing: ``` public static bool AlmostEquals(this double double1, double double2, double precision) { return (Math.Abs(double1 - double2) <= precision); } ``` This could be used the following way: ``` double d1 = 1.000001; double d2 = 0.000001; bool equals = (d1 - d2).AlmostEquals(1.0, 0.0000001); ``` See this very similar question: [C#.NET: Is it safe to check floating point values for equality to 0?](https://stackoverflow.com/questions/485175/c-net-is-it-safe-to-check-floating-point-values-for-equality-to-0/485741)
Why the result is different for this problem?
[ "c#", "floating-point", "floating-accuracy" ]
I built this class to work with PDO, to make SQL queries 'easier' and less to worry about. Here are my thoughts * Should it be more like class DB extends PDO? * Is the query method too big? Should it be split into private methods which are called... is this what is known as *loose coupling*? * Is my way for detecting a SELECT query too ugly for its own good? * What other problems are evident? As I am sort of learning-as-I-go, I'm sure I could have overlooked a lot of potential problems. Thank you ``` class Db { private static $_instance = NULL; private function __construct() { // can not call me } private function __clone() { // no! } public static function getInstance() { if (!self::$_instance) { try { self::$_instance = new PDO('mysql:host=' . CONFIG_MYSQL_SERVER . ';dbname=' . CONFIG_MYSQL_DATABASE, CONFIG_MYSQL_USERNAME, CONFIG_MYSQL_PASSWORD);; self::$_instance-> setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch(PDOException $e) { trigger_error($e->getMessage()); } } return self::$_instance; } public static function query($query /*string*/, $bindings = NULL) { $queryPortion = substr($query,0, 6); try { if ($bindings) { $prepared = self::getInstance()->prepare($query); foreach($bindings as $binding=>$data) { // defaults to string if (!is_array($data)) { $prepared->bindParam($binding, $data); } else { switch(count($data)) { case 1: $prepared->bindParam($binding, $data['value']); break; case 2: $prepared->bindParam($binding, $data['value'], $data['dataType']); break; case 3: $prepared->bindParam($binding, $data['value'], $data['dataType'], (int)$data['length']); break; default: trigger_error('An error has occured with the prepared statement bindings.'); return false; break; } } } $prepared->execute(); return $prepared->fetchAll(PDO::FETCH_ASSOC); } else if (String::match($queryPortion, 'select')) { // if this is a select query $rows = self::getInstance()->query($query); return $rows->fetchAll(PDO::FETCH_ASSOC); } else { return 
self::getInstance()->exec($query); } } catch(PDOException $e) { trigger_error($e->getMessage()); } } public static function getLastInsertId() { try { self::getInstance()->lastInsertId(); } catch(PDOException $e) { trigger_error($e->getMessage()); } } public static function disconnect() { // kill PDO object self::$_instance = NULL; } } ```
It's not bad, and as has been said it might help for small applications, although it's mostly a very thin abstraction on top of another abstraction. It doesn't bring a lot of other functionality. Something you might want to consider, amongst other things: * As this is PHP5 code, use [exceptions](http://www.php.net/exceptions) instead of `trigger_error` and [`set_exception_handler`](http://www.php.net/manual/en/function.set-exception-handler.php) if necessary until exceptions are more widespread, but it's definitely cleaner and more future-proof. * You are using a singleton; it's not a bad thing necessarily, but in this case, for example, one shortcoming will be that you'll only be able to handle one connection to one database. * I don't know if you make use of stored procedures, but a stored procedure [might return a result set](http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.apdv.php.doc/doc/t0023501.htm) through the `query()` method too. * You have two semi-colons (`;;`) at the end of your `new PDO` line. That being said, I don't think your query method is too big and there's not much that could be recalled from elsewhere in there at the moment. Though as soon as you see two or three lines that could be called from another function, split it. That's a good way to [DRY](http://en.wikipedia.org/wiki/DRY).
Yes and no. It is good code for a simple quick-and-dirty application. The problem comes when you use this in a more complex, structured application, where the error handling will vary depending on which SQL you are executing. Also, any severe errors will show up as "problem at line 999" type errors, where 999 is in your super-duper routine, and you will have difficulty tracing it back to a particular SQL request. Having said that, I do this sort of thing myself all the time on small projects.
Is this wrapper for PDO 'good code' ? Are there any potential problems?
[ "php", "refactoring", "pdo" ]
I have a ``` List<Cat> ``` sorted by the cats' birthdays. Is there an efficient Java Collections way of finding **all the cats** that were born on January 24th, 1983? Or, what is a good approach in general?
[`Collections.binarySearch()`](http://java.sun.com/javase/6/docs/api/java/util/Collections.html). Assuming the cats are sorted by birthday, this will give the index of one of the cats with the correct birthday. From there, you can iterate backwards and forwards until you hit one with a different birthday. If the list is long and/or not many cats share a birthday, this should be a significant win over straight iteration. Here's the sort of code I'm thinking of. Note that I'm assuming a [random-access](http://java.sun.com/javase/6/docs/api/java/util/RandomAccess.html) list; for a linked list, you're pretty much stuck with iteration. (Thanks to fred-o for pointing this out in the comments.) ``` List<Cat> cats = ...; // sorted by birthday List<Cat> catsWithSameBirthday = new ArrayList<Cat>(); Cat key = new Cat(); key.setBirthday(...); final int index = Collections.binarySearch(cats, key); if (index < 0) return catsWithSameBirthday; catsWithSameBirthday.add(cats.get(index)); // go backwards for (int i = index-1; i >= 0; i--) { if (cats.get(i).getBirthday().equals(key.getBirthday())) catsWithSameBirthday.add(cats.get(i)); else break; } // go forwards for (int i = index+1; i < cats.size(); i++) { if (cats.get(i).getBirthday().equals(key.getBirthday())) catsWithSameBirthday.add(cats.get(i)); else break; } return catsWithSameBirthday; ```
Binary search is the classic way to go. Clarification: I said you use binary search. Not a single method specifically. The algorithm is: ``` //pseudocode: index = binarySearchToFindTheIndex(date); if (index < 0) return []; // not found start = index; for (; start >= 0 && cats[start].date == date; --start); // start is now just before the first match end = index; for (; end < cats.length && cats[end].date == date; ++end); // end is now just past the last match return cats[ start+1 .. end-1 ]; ```
Java: What is the best way to find elements in a sorted List?
[ "java", "list", "collections", "findall", "sorting" ]
## Introduction I heard something about writing device drivers in Java (heard as in "with my ears", not from the internet) and was wondering... I always thought device drivers operated on an operating system level and thus must be written in the same language as the OS (thus mostly C I suppose) ## Questions 1. Am I generally wrong with this assumption? (it seems so) 2. How can a driver in an "alien" language be used in the OS? 3. What are the requirements (from a programming language point of view) for a device driver anyway? Thanks for reading
There are a couple of ways this can be done. First, code running at "OS level" does not need to be written in the same language as the OS. It merely has to be able to be linked together with OS code. Virtually all languages can interoperate with C, which is really all that's needed. So language-wise, there is technically no problem. Java functions can call C functions, and C functions can call Java functions. And if the OS isn't written in C (let's say, for the sake of argument, that it's written in C++), then the OS C++ code can call into some intermediate C code, which forwards to your Java, and vice versa. C is pretty much a *lingua franca* of programming. Once a program has been compiled (to native code), its source language is no longer relevant. Assembler looks much the same regardless of which language the source code was written in before compilation. As long as you use the same calling convention as the OS, it's no problem. A bigger problem is runtime support. Not a lot of software services are available in the OS. There usually is no Java virtual machine, for example. (There is no reason why there technically couldn't be, but usually it's safe to assume that it's not present). Unfortunately, in its "default" representation, as Java bytecode, a Java program *requires* a lot of infrastructure. It needs the Java VM to interpret and JIT the bytecode, and it needs the class library and so on. But there are two ways around this: * Support Java in the kernel. This would be an unusual step, but it could be done. * Or compile your Java source code to a native format. A Java program doesn't have to be compiled to Java bytecode. You could compile it to x86 assembler. The same goes for whatever class libraries you use. Those too could be compiled all the way to assembler. Of course, parts of the Java class library require certain OS features that won't be available, but then use of those classes could be avoided. So yes, it can be done. 
But it's not straightforward, and it's unclear what you'd gain. Of course another problem may be that Java won't let you access arbitrary memory locations, which would make a lot of hardware communication pretty tricky. But that could be worked around too, perhaps by calling into very simple C functions which simply return the relevant memory areas as arrays for Java to work on.
[Writing Solaris Device Drivers in Java](http://research.sun.com/techrep/2006/abstract-156.html) covers a RAM disk device written in Java. [Another one for Linux](http://portal.acm.org/citation.cfm?id=1408654.1408657). It goes more in depth on why you might want a device driver in Java as well (since some people were wondering, by the looks of the other posts and comments).
Is it possible to code a device driver in Java?
[ "java", "operating-system", "device-driver" ]
I have an array full of patterns that I need matched. Any way to do that, other than a for() loop? I'm trying to do it in the least CPU intensive way, since I will be doing dozens of these every minute. A real-world example: I'm building a link status checker, which will check links to various online video sites, to ensure that the videos are still live. Each domain has several "dead keywords"; if these are found in the html of a page, that means the file was deleted. These are stored in the array. I need to match the contents of the array against the html output of the page.
First of all, if you literally are only doing **dozens** every **minute**, then I wouldn't worry terribly about the performance in this case. These matches are pretty quick, and I don't think you're going to have a performance problem by iterating through your patterns array and calling preg\_match separately like this: ``` $matches = false; foreach ($pattern_array as $pattern) { if (preg_match($pattern, $page)) { $matches = true; } } ``` You can indeed combine all the patterns into one using the `or` operator like some people are suggesting, but don't just slap them together with a `|`. This will break badly if any of your patterns **contain** the or operator. I would recommend at least grouping your patterns using parenthesis like: ``` foreach ($patterns as $pattern) { $grouped_patterns[] = "(" . $pattern . ")"; } $master_pattern = implode($grouped_patterns, "|"); ``` But... I'm not really sure if this ends up being faster. **Something** has to loop through them, whether it's the preg\_match or PHP. If I had to guess I'd guess that individual matches would be close to as fast and easier to read and maintain. Lastly, if performance is what you're looking for here, I think the most important thing to do is pull out the non regex matches into a simple "string contains" check. I would imagine that some of your checks must be simple string checks like looking to see if "This Site is Closed" is on the page. So doing this: ``` foreach ($strings_to_match as $string_to_match) { if (strpos($page, $string_to_match) !== false)) { // etc. break; } } foreach ($pattern_array as $pattern) { if (preg_match($pattern, $page)) { // etc. break; } } ``` and avoiding as many `preg_match()` as possible is probably going to be your best gain. `strpos()` is a **lot** faster than `preg_match()`.
``` // assuming you have something like this $patterns = array('a','b','\w'); // converts the array into a regex friendly or list $patterns_flattened = implode('|', $patterns); if ( preg_match('/'. $patterns_flattened .'/', $string, $matches) ) { } // PS: that's off the top of my head, I didn't check it in a code editor ```
How do you perform a preg_match where the pattern is an array, in php?
[ "php", "arrays", "preg-match" ]
I have a `div` defined with a style attribute: ``` <div id="div1" style="width:600;height:600;border:solid 1px"></div> ``` How can I change the height of the `div` with JavaScript?
``` <script type="text/javascript"> function changeHeight(height) { document.getElementById("div1").style.height = height + "px"; } </script> ```
Judging by his example code he is using the dojo framework. Changing height in dojo would be done with something similar to the following: ``` dojo.style("div1", "height", 300); ``` <http://api.dojotoolkit.org/jsdoc/dojo/1.2/dojo.style>
How can I change one value in style attribute with JavaScript?
[ "javascript", "dojo" ]
I'm getting *`System.IO.FileNotFoundException: The specified module could not be found`* when running C# code that calls a C++/CLI assembly which in turn calls a pure C DLL. It happens as soon as an object is instantiated that calls the pure C DLL functions. BackingStore is pure C. CPPDemoViewModel is C++/CLI calling BackingStore; it has a reference to BackingStore. I tried the simplest possible case - add a new C# unit test project that just tries to create an object defined in CPPDemoViewModel. I added a reference from the C# project to CPPDemoViewModel. A C++/CLI test project works fine with just the added ref to CPPDemoViewModel, so it's something about going between the languages. I'm using Visual Studio 2008 SP1 with .Net 3.5 SP1. I'm building on Vista x64 but have been careful to make sure my Platform target is set to x86. This feels like something stupid and obvious I'm missing but it would be even more stupid of me to waste time trying to solve it in private so I'm out here embarrassing myself! This is a test for a project porting a huge amount of legacy C code which I'm keeping in a DLL with a ViewModel implemented in C++/CLI. **edit** After checking directories, I can confirm that the BackingStore.dll has not been copied. I have the standard unique project folders created with a typical multi-project solution. ``` WPFViewModelInCPP BackingStore CPPViewModel CPPViewModelTestInCS bin Debug Debug ``` The higher-level Debug appears to be a common folder used by the C and C++/CLI projects, to my surprise. WPFViewModelInCPP\Debug contains BackingStore.dll, CPPDemoViewModel.dll, CPPViewModelTest.dll and their associated .ilk and .pdb files. WPFViewModelInCPP\CPPViewModelTestInCS\bin\Debug contains the CPPDemoViewModel and CPPViewModelTestInCS .dll and .pdb files but **not** BackingStore. 
However, manually copying BackingStore into that directory **did not fix the error.** CPPDemoViewModel has the property *Copy Local* set which I assume is responsible for copying its DLL when if is referenced. I can't add a reference from a C# project to a pure C DLL - it just says *A Reference to Backing Store could not be added.* I'm not sure if I have just one problem or two. I can use an old-fashioned copying build step to copy the BackingStore.dll into any given C# project's directories, although I'd hoped the new .net model didn't require that. DependencyWalker is telling me that the missing file is GPSVC.dll which [has been suggested](http://forums.guru3d.com/showthread.php?t=212244) indicates security setting issues. I suspect this is a red herring. **edit2** With a manual copy of BackingStore.dll to be adjacent to the executable, the GUI now works fine. The C# Test Project still has problems which I suspect is due to the runtime environment of a test project but I can live without that for now.
The answer for the GUI, other than changing output settings, was the addition of a Pre-Build Step ``` copy $(ProjectDir)..\Debug\BackingStore.* $(TargetDir) ``` The answer for the Test projects was to add the missing DLL to the Deployment tab of the testrunconfig. You can either do so by directly editing the default *LocalTestRun.testrunconfig* (appears in Solution under Solution Items) or right-click the Solution and Add a new test run config, which will then appear under the main Test menu. Thanks for the answers on [this SO question](https://stackoverflow.com/questions/198097/creating-test-run-configurations-in-vs-2008) on test configurations for leading me to the answer.
Are the C and C++ DLLs in the same directory as the C# assembly that's executing? You may have to change your project output settings so that the C# assembly and the other DLLs all end up in the same folder. I've often used the [Dependency Walker](http://www.dependencywalker.com/) in cases like this; it's a sanity check that shows that all the dependencies can actually be found. Once your app is running, you may also want to try out [Process Monitor](http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) on the code you are running, to see which DLLs are being referenced, and where they are located.
C# to C++/CLI to C DLL System.IO.FileNotFoundException
[ "", "c#", "build-process", "c++-cli", "unmanaged", "" ]
I have a string with double quotes like this ``` $count = 5; $str = "result: $count"; echo $str; //result: 5 ``` Variable parsing works well, but my problem is that the $count variable must be defined later than $str ``` $str = "result: $count"; $count = 5; echo $str; //result: ``` So I will use single quotes, and I'm asking here to find a way to parse the variable whenever I want ``` $str = 'result: $count'; $count = 5; //TODO: parse var process echo $str; //result: 5 ``` I will not be using a regex replace.
For this type of thing, I'd probably use string formatting. In PHP, that'd be [printf](http://us.php.net/sprintf). ``` <?php $str = "result: %d"; // ....dostuff.....define $count..... printf($str, $count); ?> ``` edit: although, the best way to do this probably depends partly on why you have to define $string before $count. If it's a string that's repeated a lot, and you wanted to put it in a global variable or something, printf would be my choice, or putting it in a function as other answers have suggested. If the string is only used once or twice, are you sure you can't refactor the code to make $count be defined before $string? finally, a bit of bland theory: when you write `$string = "result: $count"`, PHP immediately takes the value of $count and puts it into the string. After that, it isn't worried about $count anymore for purposes of $string, and even if $count changes, $string won't, because it contains a literal copy of the value. There isn't, as far as I'm aware, a way of binding a position in a string to a yet-to-be-defined variable. The 'printf' method leaves placeholders in the string, which the function printf replaces with values when you decide what should go in the placeholders. So, if you wanted to only write `$string = "result: $count"; $count = 5; echo $string;` (and not have to call another function) and get "result: 5", there's no way to do that. The closest method would be using placeholders and printf, but that requires an explicit call to a function, not an implicit substitution.
Why don't you use a function? ``` function result_str($count) { return "result: $count"; } ```
String with Double quoted and Single quoted
[ "", "php", "" ]
I work on the localization of Java software, and my projects have both .properties files and XML resources. We currently use comments to instruct translators to not translate certain strings, but the problem with comments is that they are not machine-readable. The only solution I can think of is to prefix each do-not-translate key with something like `_DNT_` and train our translation tools to ignore these entries. Does anyone out there have a better idea?
Could you break the files up into ones to be translated and ones not to be translated, and then only send them the ones that are to be translated? (Don't know the structure, so it's hard to know when answering if that is practical...)
The Eclipse JDT also uses comments to prevent the translation of certain Strings: [How to write Eclipse plug-ins for the international market](http://www.ibm.com/developerworks/library/os-i18n/) I think your translation tool should work in a similar way?
How to designate resources as do-not-translate?
[ "", "java", "localization", "internationalization", "" ]
I feel that while I love J2ME and Java it's hypocritical of them to have two APIs for Java. Java was designed with "One code, many platforms" in mind, and now it's more like "One API for every OS, and one API for everything smaller than a netbook." I see a lot of J2ME emulators and such being ported to things like the PSP, and other consoles for homebrew, and I wonder why no one is doing this with normal Java. I'd love to write a game to play on my PC, then fire up a simple emulator and play the same game on the PSP, or the Dreamcast, but I can't. J2ME can't even run on a PC, you need an emulator for it, which reduces your market greatly. Plus most emulators are bulky, and not good. With super-phones like the iPhone coming out, people are going to want more than little J2ME games, so if Java can't port their standard JRE to it they might find themselves missing the boat like Microsoft did with the netbook boom. It just feels like Sun needs to either work on making the standard JRE smaller and more portable, or make J2ME available on the PC easily.
I think this should be a community wiki. But to the point, my view is that J2ME is going to die a horrible death and leave us with normal Java. The current netbook trend combined with the more powerful smartphone trend means that your average cellphone today is much stronger than the machines that ran J2SE when it first came out. Hence, we can do away with J2ME, which was designed for ancient Nokias, and enjoy standard Java on a smart doorknob (or a smartphone). The only problem that Java faces is that the biggest player in smartphone applications - Apple - isn't going to allow a JVM anytime in the foreseeable future.
Even if your monitor had an accelerometer in it, you probably wouldn't want to use it for an iPhone app - so I'd say there are limits to portability after all. If "write once, run anywhere" is misleading, that's because it was conceived before cell phones became prevalent. As far as the API goes, I agree a common subset would be preferable, but once again, the entire J2ME niche is completely new. The JVM is still useful: a web browser can run on Windows, Linux, and OS X, and a game can run on both Nokia and Samsung phones. **Is the original Java ideal dead?** It still meets the original demands of portable code from workstation to workstation, so no. But it sounds like you've set an even higher bar for future platforms.
Is the original Java ideal dead?
[ "", "java", "api", "java-me", "sun", "" ]
I work on a team of Java programmers. One of my co-workers suggests from time-to-time that I do something like "just add a type field" (usu. "String type"). Or code will be committed laden with "`if (foo instanceof Foo){...} else if( foo instanceof Bar){...}`". Josh Bloch's admonition that "tagged classes are a wan imitation of a proper class hierarchy" notwithstanding, what is my one-line response to this sort of thing? And then how do I elaborate the concept more seriously? It's clear to me that - the context being Java - the type of Object under consideration is right in front of our collective faces - IOW: The word right after the "class", "enum" or "interface", etc. But aside from the difficult-to-demonstrate or quantify (on the spot) "it makes your code more complicated", how do I say that "duck-typing in a (more or less) strongly-typed language is a stupid idea that suggests a much deeper design pathology?
When you say "duck typing in strongly-typed languages" you actually mean "imitating (subtype) polymorphism in statically-typed languages". It's not that bad when you have data objects (DTOs) that don't contain any behaviour. When you do have a full-blown OO model (ask yourself if this is really the case) then you should use the polymorphism offered by the language where appropriate.
Actually, you said it reasonably well right there. The truth is that the "instance of" comb is almost always a bad idea (the exception happening for example when you're marshaling or serializing, when for a short interval you may not have all the type information at hand.) As Josh says, that's a sign of a bad class hierarchy otherwise. The way that you *know* it's a bad idea is that it makes the code brittle: if you use that, and the type hierarchy *changes*, then it probably breaks that instance-of comb everywhere it occurs. What's more, you then lose the benefit of strong typing; the compiler can't help you by catching errors ahead of time. (This is somewhat analogous to the problems caused by typecasts in C.) ### Update Let me extend this a bit, since from a comment it appears I wasn't quite clear. The reason you use a typecast in C, or `instanceof`, is that you want to say "as if": use this `foo` as if it were a `bar`. Now, in C, there is no run time type information around at all, so you're just working without a net: if you typecast something, the generated code is going to treat that address as if it contained a particular type no matter what, and you should only *hope* that it will cause a run-time error instead of silently corrupting something. Duck typing just raises that to a norm; in a dynamic, weakly typed language like Ruby or Python or Smalltalk, everything is an untyped reference; you shoot messages at it at runtime and see what happens. If it understands a particular message, it "walks like a duck" -- it handles it. This can be very handy and useful, because it allows marvelous hacks like assigning a generator expression to a variable in Python, or a block to a variable in Smalltalk. But it does mean you're vulnerable to errors at runtime that a strongly typed language can catch at compile time.
In a strongly-typed language like Java, you can't really, strictly, have duck typing at all: you *must* tell the compiler what type you're going to treat something as. You can get something like duck typing by using type casts, so that you can do something like ``` Object x; // A reference to an Object, analogous to a void * in C // Some code that assigns something to x ((FoodDispenser)x).dropPellet(); // [1] // Some more code ((MissleController)x).launchAt("Moon"); // [2] ``` Now at run time, you're fine as long as x is a kind of `FoodDispenser` at [1] or `MissleController` at [2]; otherwise *boom*. Or unexpectedly, no *boom*. In your description, you protect yourself by using a comb of `else if` and `instanceof` ``` Object x ; // code code code if(x instanceof FoodDispenser) ((FoodDispenser)x).dropPellet(); else if (x instanceof MissleController ) ((MissleController)x).launchAt("Moon"); else if ( /* something else...*/ ) // ... else // error ``` Now, you're protected against the run-time error, but you've got the responsibility of doing something sensible later, at the `else`. But now imagine you make a change to the code, so that 'x' can take the types 'FloorWax' and 'DessertTopping'. You now must go through all the code and find all the instances of that comb and modify them. Now the code is "brittle" -- changes in the requirements mean lots of code changes. In OO, you're striving to make the code less brittle. The OO solution is to use polymorphism instead, which you can think of as a kind of limited duck typing: you're defining all the operations that something can be trusted to perform. You do this by defining a superior class, probably abstract, that has all the methods of the inferior classes. In Java, a class like that is best expressed an "interface", but it has all the type properties of a class. In fact, you can see an interface as being a promise that a particular class can be trusted to act "as if" it were another class. 
``` public interface VeebleFeetzer { /* ... */ }; public class FoodDispenser implements VeebleFeetzer { /* ... */ } public class MissleController implements VeebleFeetzer { /* ... */ } public class FloorWax implements VeebleFeetzer { /* ... */ } public class DessertTopping implements VeebleFeetzer { /* ... */ } ``` All you have to do now is use a reference to a VeebleFeetzer, and the *compiler* figures it out for you. If you happen to add another class that's a subtype of VeebleFeetzer, the compiler will select the method and check the arguments in the bargain ``` VeebleFeetzer x; // A reference to anything // that implements VeebleFeetzer // Some code that assigns something to x x.dropPellet(); // Some more code x.launchAt("Moon"); ```
How do I argue against Duck-typing in a strongly typed language like Java?
[ "", "java", "duck-typing", "" ]
Is there such a thing as a small, self-contained library that will render an equation written in a text-based format (e.g. LaTeX or MathML) to an image (either vector or raster)? It would be preferable if it's in Python or Python-friendly. (One possibility that I've found: [Matplotlib](http://matplotlib.sourceforge.net/) has Python code to parse and display LaTeX equations, using gl2ps. If I don't find anything else it seems possible to extract all the relevant bits into a separate library.) **EDIT:** by "self-contained" I mean they can't use TeX/LaTeX itself since unfortunately I can't rely on it being installed
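The matplotlib route mentioned above can be sketched roughly like this (assuming matplotlib is installed; it relies on matplotlib's built-in mathtext parser, so no TeX installation is needed, and the function and file names here are mine):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render straight to a file
import matplotlib.pyplot as plt

def render_equation(tex, path, fontsize=20):
    # mathtext parses a TeX-like subset itself, without calling out to TeX
    fig = plt.figure(figsize=(0.01, 0.01))
    fig.text(0, 0, "$%s$" % tex, fontsize=fontsize)
    fig.savefig(path, dpi=200, bbox_inches="tight", pad_inches=0.05)
    plt.close(fig)

render_equation(r"\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}", "quadratic.png")
```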
You can try to [convert MathML to SVG](http://www.grigoriev.ru/svgmath/).
There's mimetex, a stand-alone program for rendering LaTeX math... it's not a library, however <http://www.forkosh.com/mimetex.html>
Looking for a self-contained equation rendering library
[ "", "python", "parsing", "latex", "equation", "mathml", "" ]
I need to locate the index position of a record in a large database table in order to preset a pager to that item's page. I need an efficient SQL query that can give me this number. Sadly SQL doesn't offer something like: ``` SELECT INDEX(*) FROM users WHERE userid='123' ``` Any bright ideas? EDIT: Let's assume there is an ORDER BY clause appended to this. The point is I do not want to have to load all records to locate the position of a specific one. I am trying to open a pager to the page holding an existing item that had previously been chosen - because I want to provide information about that already chosen item within a context that allows a user to choose a different one.
You might use something like (pseudo-code): * counting query: $n = select count(uid) from {users} where ... (your paging condition including userid 123 as the limit) * $page = floor($n / $pager\_size); * display query: select what,you,want from {users} where (your paging condition without the limit), passed to db\_query\_range(thequery, $page, $pager\_size) You should really look at [pager\_query](http://api.drupal.org/api/function/pager_query/6), though, because that's what it's all about, and it basically works like this: a counting query and a display query, except it tries to build the counting query automatically.
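A runnable version of that counting approach, sketched with SQLite rather than Drupal's db layer. One detail: the pseudo-code's `floor($n / $pager_size)` lands one page too far whenever the target is the last row of a page, so this sketch uses `(n - 1) // page_size` for a zero-based page number:

```python
import sqlite3

PAGE_SIZE = 10

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (uid INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (uid, name) VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(1, 101)])

def page_of(uid, page_size=PAGE_SIZE):
    # n = how many rows sort at or before the target under the pager's
    # ORDER BY (plain uid order in this toy example)
    n = conn.execute("SELECT COUNT(uid) FROM users WHERE uid <= ?",
                     (uid,)).fetchone()[0]
    return (n - 1) // page_size  # n is 1-based, pages are 0-based

print(page_of(57))  # rows 51-60 sit on zero-based page 5
```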
Assuming you are really asking how to page records in SQL Server 2005 onwards, have a look at this code from [David Hayden](http://www.davidhayden.com/blog/dave/archive/2005/12/30/2652.aspx): (you will need to change Date, Description to be your columns) ``` CREATE PROCEDURE dbo.ShowUsers @PageIndex INT, @PageSize INT AS BEGIN WITH UserEntries AS ( SELECT ROW_NUMBER() OVER (ORDER BY Date DESC) AS Row, Date, Description FROM users) SELECT Date, Description FROM UserEntries WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize END ```
Index of a record in a table
[ "", "sql", "drupal", "pagination", "" ]
I'd like to clear out and reuse an ostringstream (and the underlying buffer) so that my app doesn't have to do as many allocations. How do I reset the object to its initial state?
I've used a sequence of clear and str in the past: ``` // clear, because eof or other bits may be still set. s.clear(); s.str(""); ``` That has done the trick for both input and output stringstreams. Alternatively, you can manually clear, then seek the appropriate sequence to the begin: ``` s.clear(); s.seekp(0); // for outputs: seek put ptr to start s.seekg(0); // for inputs: seek get ptr to start ``` That will prevent some reallocations done by `str` by overwriting whatever is in the output buffer currently instead. Results are like this: ``` std::ostringstream s; s << "hello"; s.seekp(0); s << "b"; assert(s.str() == "bello"); ``` If you want to use the string for c-functions, you can use `std::ends`, putting a terminating null like this: ``` std::ostringstream s; s << "hello"; s.seekp(0); s << "b" << std::ends; assert(s.str().size() == 5 && std::strlen(s.str().data()) == 1); ``` `std::ends` is a relic of the deprecated `std::strstream`, which was able to write directly to a char array you allocated on the stack. You had to insert a terminating null manually. However, `std::ends` is not deprecated, I think because it's still useful as in the above cases.
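A sketch of the reuse pattern in a loop, one buffer serving every iteration instead of a fresh stream each time (how much allocation this actually saves depends on the library implementation):

```cpp
#include <sstream>
#include <string>

// Builds "item0 item1 ..." while reusing a single ostringstream buffer.
std::string join_reusing_stream(int n) {
    std::ostringstream oss;
    std::string out;
    for (int i = 0; i < n; ++i) {
        oss.clear();  // reset any error/eof flags first
        oss.str("");  // then drop the previous contents
        oss << "item" << i;
        out += oss.str();
        out += ' ';
    }
    return out;
}
```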
It seems that the `ostr.str("")` call does the trick.
How to reuse an ostringstream?
[ "", "c++", "stl", "reset", "ostringstream", "" ]
I'm having trouble with Django templates and CharField models. So I have a model with a CharField that creates a slug that replaces spaces with underscores. If I create an object, Somename Somesurname, this creates slug *Somename\_Somesurname* and gets displayed as expected on the template. However, if I create an object, *Somename Somesurname* (notice the second space), slug *Somename\_\_Somesurname* is created, and although on the Django console I see this as `<Object: Somename Somesurname>`, on the template it is displayed as *Somename Somesurname*. So do Django templates somehow strip spaces? Is there a filter I can use to get the name with its spaces?
Let me preface this by saying @DNS's answer is correct as to why the spaces are not showing. With that in mind, this template filter will replace any spaces in the string with `&nbsp;` Usage: ``` {{ "hey there  world"|spacify }} ``` Output would be `hey&nbsp;there&nbsp;&nbsp;world` Here is the code: ``` from django.template import Library from django.template.defaultfilters import stringfilter from django.utils.html import conditional_escape from django.utils.safestring import mark_safe import re register = Library() @stringfilter def spacify(value, autoescape=None): if autoescape: esc = conditional_escape else: esc = lambda x: x return mark_safe(re.sub(r'\s', '&nbsp;', esc(value))) spacify.needs_autoescape = True register.filter(spacify) ``` For notes on how template filters work and how to install them, [check out the docs](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/#code-layout).
Django sees the object internally as having two spaces (judging by the two underscores and two spaces in the `repr` output). The fact that it only shows up with one space in the template is just how HTML works. Notice how, in the question you just asked, most of the places where you entered two spaces, only one is showing up? From the [HTML4 Spec](http://www.w3.org/TR/html401/struct/text.html): > In particular, user agents should collapse input white space sequences when producing output inter-word space. As S.Lott suggested, you can verify that my guess is correct by adding debug logging, or using the Firebug plugin for Firefox or something similar, to see exactly what's getting sent to the browser. Then you'll know for sure on which end the problem lies. If multiple spaces are really important to you, you'll need to use the `&nbsp;` entity, though I don't know offhand how you'd get Django to encode the output of that specific object using them.
Django templates stripping spaces?
[ "", "python", "django", "django-templates", "" ]
I finally got a Silverlight MVVM example to work so that when I change the values of first name and last name text boxes, the full names change automatically. However, and strangely, my model which inherits from **INotifyPropertyChanged is only notified if I change** ***at least 2 characters*** of either the first or last name. * if I change "Smith" to "Smith1", then **no event is fired** * if I change "Smith" to "Smith12" then the event is fired, as expected Has anyone run in to this before in Silverlight/XAML/INotifyPropertyChanged? What could it be? **Is there a setting somewhere that indicates how much of a textbox needs to change before it notifies itself as "changed"?** Here are the main parts of the code I'm using: **Customer.cs:** ``` using System; using System.Collections.Generic; using System.ComponentModel; namespace TestMvvm345.Model { public class Customer : INotifyPropertyChanged { public int ID { get; set; } public int NumberOfContracts { get; set; } private string firstName; private string lastName; public string FirstName { get { return firstName; } set { firstName = value; RaisePropertyChanged("FirstName"); RaisePropertyChanged("FullName"); } } public string LastName { get { return lastName; } set { lastName = value; RaisePropertyChanged("LastName"); RaisePropertyChanged("FullName"); } } public string FullName { get { return firstName + " " + lastName; } } #region INotify public event PropertyChangedEventHandler PropertyChanged; private void RaisePropertyChanged(string property) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(property)); } } #endregion } } ``` **CustomerHeaderView.xaml:** ``` <UserControl x:Class="TestMvvm345.Views.CustomerHeaderView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="400" Height="300"> <Grid x:Name="LayoutRoot" Background="White"> <StackPanel HorizontalAlignment="Left"> <ItemsControl 
ItemsSource="{Binding}"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBox x:Name="FirstName" Text="{Binding Path=FirstName, Mode=TwoWay}" Width="150" Margin="3 5 3 5"/> <TextBox x:Name="LastName" Text="{Binding Path=LastName, Mode=TwoWay}" Width="150" Margin="0 5 3 5"/> <TextBlock x:Name="FullName" Text="{Binding Path=FullName, Mode=TwoWay}" Margin="0 5 3 5"/> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> </Grid> </UserControl> ``` **CustomerViewModel.cs:** ``` using System.ComponentModel; using System.Collections.ObjectModel; using TestMvvm345.Model; namespace TestMvvm345 { public class CustomerViewModel { public ObservableCollection<Customer> Customers { get; set; } public void LoadCustomers() { ObservableCollection<Customer> customers = new ObservableCollection<Customer>(); //this is where you would actually call your service customers.Add(new Customer { FirstName = "Jim", LastName = "Smith", NumberOfContracts = 23 }); customers.Add(new Customer { FirstName = "Jane", LastName = "Smith", NumberOfContracts = 22 }); customers.Add(new Customer { FirstName = "John", LastName = "Tester", NumberOfContracts = 33 }); customers.Add(new Customer { FirstName = "Robert", LastName = "Smith", NumberOfContracts = 2 }); customers.Add(new Customer { FirstName = "Hank", LastName = "Jobs", NumberOfContracts = 5 }); Customers = customers; } } } ``` **MainPage.xaml.cs:** ``` void MainPage_Loaded(object sender, RoutedEventArgs e) { CustomerViewModel customerViewModel = new CustomerViewModel(); customerViewModel.LoadCustomers(); CustomerHeaderView.DataContext = customerViewModel.Customers; } ``` # UPDATE: I remade this project in WPF and it works fine. Perhaps it's a Silverlight 3 issue.
Using your example code it works perfectly for me. I make a single-character change, move focus, and the FullName update happens correctly. As an aside, your usage of NotifyPropertyChanged has a flaw ``` public string FirstName { get { return firstName; } set { firstName = value; RaisePropertyChanged("FirstName"); RaisePropertyChanged("FullName"); } } ``` should be: ``` public string FirstName { get { return firstName; } set { if (firstName != value) { firstName = value; RaisePropertyChanged("FirstName"); RaisePropertyChanged("FullName"); } } } ``` You want to avoid causing the event to fire and the associated rebinding from happening on non-changes.
I've notice the same problem in Silverlight 2. I think this is a bug. Here's my bug report (currently unanswered): <http://silverlight.net/forums/t/59207.aspx>
Why would INotifyPropertyChanged only fire when at least TWO characters are changed?
[ "", "c#", "wpf", "silverlight", "xaml", "inotifypropertychanged", "" ]
Why do some C++ projects require a runtime package to be installed, while others do not? EDIT: How do I make a project work without the runtime?
Some will have been statically linked, while others will depend on a dynamic library, loaded at run-time. To link your own project statically, you need to change your project configuration - how you do this depends on the compiler/linker and/or IDE you are using.
Some applications are built by linking with the system libraries; these are *dynamically linked* programs. Other programs contain the code of the libraries inside the executable file; these are *statically linked* programs. Pros of dynamic linking: * Smaller program executable. * Less memory consumption if the dynamically linked library is shared. Cons of dynamic linking: * Dependency on the library. * Deployment is more difficult. Pros of static linking: * No dependencies. * Easier deployment of the application. Cons of static linking: * The executable file is bigger. To get a static build you need to set up the option in the project properties.
C++ runtime required?
[ "", "c++", "deployment", "runtime", "" ]
Here is the question: How would you trim a block of text to the nearest word when a certain amount of characters have passed? I'm not trying to limit a certain number of words or letters, but limit the letters and cut it off at the nearest word. Say I had two strings: ``` "This is a block of text, blah blah blah" "this is another block of txt 2 work with" ``` Say I wanted to limit it to 27 characters, the first line would end at "blah" and the second one would end at "txt" even though the character limits are reached within those words. Is there any clean solution to this problem?
See the [wordwrap](http://php.net/manual/en/function.wordwrap.php) function. I would probably do something like: ``` function wrap($string) { $wstring = explode("\n", wordwrap($string, 27, "\n") ); return $wstring[0]; } ``` (If your strings already span several lines, use another char - or pattern - for the split instead of "\n")
I wrote a [max-string-length](http://aleembawany.com/projects/code-snippets/#toc-max-string-length) function that does just this and is very clean.
Trimming a block of text to the nearest word when a certain character limit is reached?
[ "", "php", "string", "" ]
I have often wondered what exactly casting does at the compiler or machine level. What does it do with the 0s and 1s in memory? Can anyone point me at some good literature?
Casting doesn't modify the individual bits when casting between reference types, it just instructs the compiler/runtime to interpret the bits in a specific way if possible. If the cast is not possible at compile time due to incompatible types an error is issued. If the cast is not possible at runtime an exception is thrown. The [wiki page on type conversion](http://en.wikipedia.org/wiki/Type_conversion) has additional information.
Others have covered the basics, but I'd like to say a few words about how a compiler is implemented that may be enlightening in this case. The compiler maintains a list (called the *symbol table*) of variable *names* in use at any particular point in the program and some information about the variables. The list of information includes: * their assigned storage (in this register, at that memory location, etc...) * what type they are (i.e. integer or string or `SubWhatsitObj`) including any restriction (like for instance constantness) * any linkage information the compiler needs The compiler uses this information to decide how to treat expressions involving the variables. The kind of meta-information that is stored in the symbol table can also be derived for any expression from its components. Except in the special case of numeric type conversion, a cast just tells the compiler to use *different* meta-information for a variable or expression than would usually be the case. No bits in memory are affected at all, but the outcome of a calculation may be.
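The two flavors distinguished above, a real numeric conversion versus a reinterpretation of the same bits, can be shown concretely in C (chosen for illustration, since Java won't let you reinterpret reference bits; `memcpy` keeps the bit reinterpretation well-defined):

```c
#include <stdint.h>
#include <string.h>

/* Value conversion: the compiler emits an actual float-to-int instruction,
   so the bit pattern changes along with the meaning. */
int float_to_int_value(float f) {
    return (int)f;
}

/* Reinterpretation: the same 32 bits, simply read back as an unsigned int. */
uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}
```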
What does casting do at compiler/machine level?
[ "", "java", "compiler-construction", "casting", "" ]
I work on a project where we have to create unit tests for all of our simple beans (POJOs). Is there any point to creating a unit test for POJOs if all they consist of is getters and setters? Is it a safe assumption to assume POJOs will work about 100% of the time? --- Duplicate of - [Should @Entity Pojos be tested?](https://stackoverflow.com/questions/337241/should-entity-pojos-be-tested) **See also** [Is it bad practice to run tests on a DB instead of on fake repositories?](https://stackoverflow.com/questions/607620/is-it-bad-practice-to-run-tests-on-a-db-instead-of-on-fake-repositories/616537#616537) [Is there a Java unit-test framework that auto-tests getters and setters?](https://stackoverflow.com/questions/108692/is-there-a-java-unit-test-framework-that-auto-tests-getters-and-setters)
The rule in TDD is ["Test everything that could possibly break"](http://www.xprogramming.com/xpmag/PracticesForaReason.htm) Can a getter break? Generally not, so I don't bother to test it. Besides, the code I *do* test will certainly call the getter so it *will* be tested. My personal rule is that I'll write a test for any function that makes a decision, or makes more than a trivial calculation. I won't write a test for `i+1`, but I probably will for `if (i<0)...` and definitely will for `(-b + Math.sqrt(b*b - 4*a*c))/(2*a)`. BTW, the emphasis on POJO has a different reason behind it. We want the vast quantity of our code written into POJOs that *don't depend on the environment they run in*. For example, it's hard to test servlets, because they depend upon executing within a container. So we want the servlets to call POJOs that don't depend on their environment and are therefore easy to test.
POJOs may also contain other functions, such as equals(), hashCode(), compareTo(), and various other functions. It may be useful to know that those functions are working correctly.
JUnit tests for POJOs
[ "", "java", "junit", "pojo", "" ]
I want to change the `-webkit-transform: rotate()` property using JavaScript dynamically, but the commonly used `setAttribute` is not working: ``` img.setAttribute('-webkit-transform', 'rotate(60deg)'); ``` The `.style` is not working either... How can I set this dynamically in JavaScript?
The JavaScript style names are `WebkitTransformOrigin` and [`WebkitTransform`](http://developer.apple.com/safari/library/documentation/AppleApplications/Reference/SafariJSRef/WebKitCSSTransformValue/WebKitCSSTransformValue.html) ``` element.style.webkitTransform = "rotate(-2deg)"; ``` Check the DOM extension reference for WebKit [here](https://developer.apple.com/documentation/webkitjs).
Here are the JavaScript notations for most common vendors: ``` webkitProperty MozProperty msProperty OProperty property ``` I reset inline transform styles like: ``` element.style.webkitTransform = ""; element.style.MozTransform = ""; element.style.msTransform = ""; element.style.OTransform = ""; element.style.transform = ""; ``` And like this using jQuery: ``` $(element).css({ "webkitTransform":"", "MozTransform":"", "msTransform":"", "OTransform":"", "transform":"" }); ``` See blog post *[Coding Vendor Prefixes with JavaScript](http://www.developerdrive.com/2012/03/coding-vendor-prefixes-with-javascript/)* (2012-03-21).
How to set the style -webkit-transform dynamically using JavaScript?
[ "", "javascript", "webkit", "" ]
I have the following very large table in SQL Server 2005: ``` create table Blah ( FirstName varchar(30), Rank int, Position int, ... ) ``` I will run the following query on it: ``` declare @PassedInFirstName varchar(30) set @PassedInFirstName = 'SomeName' select TOP 1 Position from Blah where FirstName = @PassedInFirstName order by Rank DESC ``` I am setting up the following index on it: ``` CREATE INDEX IX_Blah ON Blah (FirstName, Rank) ``` Given that I order it by Rank DESC, should I change the index to order Rank in a descending way: ``` CREATE INDEX IX_Blah ON Blah (FirstName ASC, Rank DESC) ``` Or it does not matter? Thanks.
It should benefit if the WHERE returns many rows.

I've seen results where logical IO was reduced by 50% by using DESC in the INDEX to match an ORDER BY.

Also, change the query to covering:

SQL 2005 +:

```
CREATE INDEX IX_Blah ON Blah (FirstName, Rank DESC) INCLUDE (Position)
```

SQL 2000, SQL 7:

```
CREATE INDEX IX_Blah ON Blah (FirstName, Rank DESC, Position)
```
Adding Rank as a descending value in your index is only a minor change. Sql Server can *probably* reverse the sorting used, or in this case, easily iterate to the last item in the list.

Is Position your primary key? An index is built of the indexed columns, the primary key, and optionally included columns. If Position is not your primary key then you're currently looking up your primary key in your index, then going through the result with a primary index seek to find the Position value.

Try adding the Position value as an included column and you should be able to execute your query based on only one index; no other indexes will be used.

```
CREATE INDEX IX_Blah ON Blah (FirstName, Rank DESC) INCLUDE (Position)
```

Don't forget to check out your query plans; they can tell you if you lack any indexes (assuming Sql Server 2008), what indexes are used, etc.
How to properly create an index
[ "", "sql", "sql-server", "indexing", "" ]
I have an application that implements an interactive shell, similar to how the Python console / irb works. The problem now is that if the user accidentally hits `^D`, EOF is issued and my `getline()` call returns an empty string, which I treat as "no input" and display the prompt again. This then results in an endless loop that prints the prompt.

Now in Python I would solve that problem by catching `EOFError`, but in C++ no exception is raised that I could catch, and there doesn't seem to be a setting on `cin` to ignore EOF. Any hints?
Correct solution thanks to litb: ``` if (!getline(std::cin, str)) { std::cin.clear(); std::cout << std::endl; } ```
If it could not read anything, it sets the `failbit`. Just test the stream in an if condition, and clear the bit: ``` if(!getline(std::cin, myline)) { std::cin.clear(); std::cout << "you should enter something" << std::endl; } ``` Internally, the sequence is this way in your case: * Wait on the terminal for a string. Terminal will block until the user emits a newline. Two probable error cases possible 1. User presses immediately EOF. This will make `getline` read nothing at all, and it will set the `failbit` and the `eofbit`. 2. User inputs something and then presses EOF. This will make `getline` consume something and then it hits EOF while trying to get the next character. This cause `eofbit` to be set. * You will try to read something again. The extraction function will create an object of type `istream::sentry` which checks in what state the stream is. If any of the errorbits are set, it will cause the extraction function to immediately return. That caused the endless loop before. A call to `clear()` clears all error bits, and you can go on reading your stuff again.
Ignoring EOF on std::cin in C++
[ "", "c++", "console", "stdin", "" ]
I am having problems [using Gzip compression and JQuery together](https://stackoverflow.com/questions/706412/use-gzip-json-responses-and-jquery). It seems that it may be caused by the way I am sending JSON responses in my Struts Actions. I use the next code to send my JSON objects back. ``` public ActionForward get(ActionMapping mapping, ActionForm form, HttpServletRequest request, HttpServletResponse response) { JSONObject json = // Do some logic here RequestUtils.populateWithJSON(response, json); return null; } public static void populateWithJSON(HttpServletResponse response,JSONObject json) { if(json!=null) { response.setContentType("text/x-json;charset=UTF-8"); response.setHeader("Cache-Control", "no-cache"); try { response.getWriter().write(json.toString()); } catch (IOException e) { throw new ApplicationException("IOException in populateWithJSON", e); } } } ``` Is there a better way of sending JSON in a Java web application?
Instead of ``` try { response.getWriter().write(json.toString()); } catch (IOException e) { throw new ApplicationException("IOException in populateWithJSON", e); } ``` try this ``` try { json.write(response.getWriter()); } catch (IOException e) { throw new ApplicationException("IOException in populateWithJSON", e); } ``` because this will avoid creating a string and the JSONObject will directly write the bytes to the Writer object
In our project we are doing pretty much the same except that we use application/json as the content type. Wikipedia says that the [official Internet media type for JSON is application/json](http://en.wikipedia.org/wiki/JSON).
How to send JSON back with JAVA?
[ "", "java", "json", "" ]
> **Possible Duplicate:** > [C# - Is there a better alternative than this to ‘switch on type’?](https://stackoverflow.com/questions/298976/c-sharp-is-there-a-better-alternative-than-this-to-switch-on-type) If you want to `switch` on a type of object, what is the best way to do this? ### Code snippet ``` private int GetNodeType(NodeDTO node) { switch (node.GetType()) { case typeof(CasusNodeDTO): return 1; case typeof(BucketNodeDTO): return 3; case typeof(BranchNodeDTO): return 0; case typeof(LeafNodeDTO): return 2; default: return -1; } } ``` I know this doesn't work that way, but I was wondering how you could solve this. Is an `if/else` statement appropriate in this case? Or do you use the switch and add `.ToString()` to the type?
If I *really* had to `switch` on type of object, I'd use `.ToString()`. However, I would avoid it at all costs: `IDictionary<Type, int>` will do much better, [visitor](http://en.wikipedia.org/wiki/Visitor_pattern) might be an overkill but otherwise it is still a perfectly fine solution.
This won't directly solve your problem as you want to switch on your own user-defined types, but for the benefit of others who only want to switch on built-in types, you can use the [TypeCode](http://msdn.microsoft.com/en-us/library/system.typecode.aspx) enumeration: ``` switch (Type.GetTypeCode(node.GetType())) { case TypeCode.Decimal: // Handle Decimal break; case TypeCode.Int32: // Handle Int32 break; ... } ```
Using Case/Switch and GetType to determine the object
[ "", "c#", ".net", "reflection", "switch-statement", "case", "" ]
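The `IDictionary<Type, int>` alternative the accepted answer above alludes to can be sketched as follows — in Java rather than C#, and with hypothetical marker classes standing in for the DTO types from the question:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical marker types standing in for the NodeDTO classes in the question.
class CasusNode {}
class BucketNode {}
class BranchNode {}
class LeafNode {}

public class Main {
    // Map each concrete type to its code once, instead of switching on type names.
    private static final Map<Class<?>, Integer> NODE_TYPES = new HashMap<>();
    static {
        NODE_TYPES.put(CasusNode.class, 1);
        NODE_TYPES.put(BucketNode.class, 3);
        NODE_TYPES.put(BranchNode.class, 0);
        NODE_TYPES.put(LeafNode.class, 2);
    }

    static int getNodeType(Object node) {
        // Unknown types fall through to -1, mirroring the question's default case.
        return NODE_TYPES.getOrDefault(node.getClass(), -1);
    }

    public static void main(String[] args) {
        System.out.println(getNodeType(new BucketNode())); // 3
        System.out.println(getNodeType("not a node"));     // -1
    }
}
```

Unlike a chain of if/else type checks, adding a new node type is then a one-line change to the map.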
I have a string in C# initialised as follows:

```
string strVal = "£2000";
```

However whenever I write this string out the following is written:

Â£2000

It does not do this with dollars. An example bit of code I am using to write out the value:

```
System.IO.File.AppendAllText(HttpContext.Current.Server.MapPath("/logging.txt"), strVal);
```

I'm guessing it's something to do with localization, but if C# strings are just Unicode surely this should just work?

**CLARIFICATION:**

Just a bit more info, Jon Skeet's answer is correct, however I also get the issue when I URLEncode the string. Is there a way of preventing this?

So the URL encoded string looks like this: "%c2%a32000"

%c2 = Â
%a3 = £

If I encode as ASCII the £ comes out as ?

Any more ideas?
The default character set of URLs when used in HTML pages and in HTTP headers is called [ISO-8859-1](http://en.wikipedia.org/wiki/ISO_8859-1) or ISO Latin-1. It's not the same as UTF-8, and it's not the same as ASCII, but it does fit into one-byte-per-character. The range 0 to 127 is a lot like ASCII, and the whole range 0 to 255 is the same as the range 0000-00FF of Unicode. So you can generate it from a C# string by casting each character to a byte, or you can use `Encoding.GetEncoding("iso-8859-1")` to get an object to do the conversion for you. (In this character set, the UK pound symbol is 163.) **Background** [The RFC says](http://www.rfc-editor.org/rfc/rfc1738.txt) that unencoded text must be limited to the traditional 7-bit US ASCII range, and anything else (plus the special URL delimiter characters) must be encoded. But it leaves open the question of what character set to use for the upper half of the 8-bit range, making it dependent on the context in which the URL appears. And that context is defined by two other standards, HTTP and HTML, which do specify the default character set, and which together create a practically irresistable force on implementers to assume that the address bar contains percent-encodings that refer to ISO-8859-1. ISO-8859-1 [is the character set of text-based content sent via HTTP](http://www.rfc-editor.org/rfc/rfc2616.txt) except where otherwise specified. So by the time a URL string appears in the HTTP GET header, it ought to be in ISO-8859-1. The other factor is that HTML also uses ISO-8859-1 as its default, and URLs typically originate as links in HTML pages. So when you craft a simple minimal HTML page in Notepad, the URLs you type into that file are in ISO-8859-1. It's sometimes described as "hole" in the standards, but it's not really; it's just that HTML/HTTP fill in the blank left by the RFC for URLs. 
Hence, for example, the advice on [this page](http://www.blooberry.com/indexdot/html/topics/urlencoding.htm#how): > URL encoding of a character consists > of a "%" symbol, followed by the > two-digit hexadecimal representation > (case-insensitive) of the ISO-Latin > code point for the character. (ISO-Latin is another name for IS-8859-1). So much for the theory. Paste this into notepad, save it as an .html file, and open it in a few browsers. Click the link and Google should search for UK pound. ``` <HTML> <BODY> <A href="http://www.google.com/search?q=%a3">Test</A> </BODY> </HTML> ``` It works in IE, Firefox, Apple Safari, Google Chrome - I don't have any others available right now.
[`AppendAllText`](http://msdn.microsoft.com/en-us/library/ms143356.aspx) is writing out the text in UTF-8. What are you using to look at it? Chances are it's something that doesn't understand UTF-8, or doesn't try UTF-8 first. Tell your editor/viewer that it's a UTF-8 file and all should be well. Alternatively, use the overload of `AppendAllText` which allows you to specify the encoding and use whichever encoding is going to be most convenient for you. EDIT: In response to your edited question, the reason it fails when you encode with ASCII is that £ is not in the ASCII character set (which is Unicode 0-127). URL encoding is also using UTF-8, by the looks of it. Again, if you want to use a different encoding, specify it to the [`HttpUtility.UrlEncode`](http://msdn.microsoft.com/en-us/library/h10z5byc.aspx) overload which accepts an encoding.
Why is this appearing in my C# strings: Â£
[ "", "c#", ".net", "asp.net", "localization", "string", "" ]
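The behaviour discussed in this question — £ becoming Â£ (bytes 0xC2 0xA3) under UTF-8 and ? under ASCII — can be reproduced in a few lines. Sketched in Java rather than C#, since the encodings behave the same way:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        String pound = "\u00A3"; // the UK pound sign
        byte[] utf8 = pound.getBytes(StandardCharsets.UTF_8);
        byte[] latin1 = pound.getBytes(StandardCharsets.ISO_8859_1);
        byte[] ascii = pound.getBytes(StandardCharsets.US_ASCII);
        // 0xC2 0xA3 -- reads as "Â£" if the two bytes are decoded as Latin-1
        System.out.println(Arrays.toString(utf8));
        // 0xA3 -- a single byte in ISO-8859-1
        System.out.println(Arrays.toString(latin1));
        // '?' -- the replacement character, since £ is outside ASCII
        System.out.println(Arrays.toString(ascii));
    }
}
```

Decoding those two UTF-8 bytes as ISO-8859-1 is exactly what produces the stray Â.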
Is it possible (and supported cross-browser) to embed an image into the XML of an AJAX response, and then load that image using JavaScript? I have a system that does some calculations based on the number of sessions running through it. The results are then graphed, and returned in two parts: 1) XML containing information about the graph, totals, and Image map data allowing the user to click on relevant areas. 2) The graph image. As the data can change between the two requests (and could be expensive to calculate), I'd prefer to do it in a single request (return the image with the XML). The current implementation caches the stats for a small period so that the results of multiple requests will still match. As the amount of data that needs to be cached is going to be increasing (from ~2.5K to ~1.2MB), I would like to try an alternative method. NOTE: I do not want to use inline b64 PNG images as they are not supported in IE.
As this seems like more work that it's worth, I've decided that a simpler solution would be: 1) Send XML data to the client with the details of what is to be graphed. 2) Client sends a request for the image, including the data to graph (similar to the Google Chart API). This decouples the chart rendering from the data, and then it can be used in the future to generate generic charts for other data sets. The other benefit is that it doesn't require any caching server-side since only 1 request is used.
Can you not store the image on the server and send the URL to the client?
Embedding an image in an AJAX response
[ "", "javascript", "ajax", "" ]
I have to implement Java.Polynomial as a school assignment. Part of the methods are *add(polynomial)*, *multiply(polynomial)* etc.

In the case of something like

```
p.add(q); // computes p + q
```

is it better to return void and keep the sum of the polynomials in p? Or is it better to return a polynomial and keep the former value of p intact?

My instinct tells me that I should do the following:

* implement p.add(q) as a 'destructive' method... it adds q to the value of p and stores the sum in p
* also implement a static method Polynomial.add(p,q) which returns the sum of the polynomials.

What do you think?
Personally I really like immutable types. I would write `p.plus(q)` instead of `p.add(q)` and make it return a new polynomial. Immutability has a lot going for it, not just in terms of thread-safety (which sounds like it won't be an issue in this case) but also in terms of reasoning about your code. It's a lot easier to predict what's going to happen if you know that nothing can change the contents of an object under your feet once you've stashed a reference to it.
It's a matter of style. It's usually better (and "neater") when objects are immutable so methods are not "destructive" (they aren't mutators, they have no side effects) but return a new instance with the new value. The most well known immutable class is String.
Should methods be destructive in custom objects?
[ "", "java", "oop", "polynomial-math", "" ]
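As a sketch of the immutable `plus()` suggested above — the dense coefficient-array representation here is an assumption for illustration, not part of the assignment:

```java
// A minimal immutable polynomial sketch: coeffs[i] is the coefficient of x^i.
final class Polynomial {
    private final int[] coeffs;

    Polynomial(int... coeffs) {
        this.coeffs = coeffs.clone(); // defensive copy keeps the instance immutable
    }

    Polynomial plus(Polynomial other) {
        int[] sum = new int[Math.max(coeffs.length, other.coeffs.length)];
        for (int i = 0; i < coeffs.length; i++) sum[i] += coeffs[i];
        for (int i = 0; i < other.coeffs.length; i++) sum[i] += other.coeffs[i];
        return new Polynomial(sum); // neither operand is modified
    }

    int coefficient(int power) {
        return power < coeffs.length ? coeffs[power] : 0;
    }
}

public class Main {
    public static void main(String[] args) {
        Polynomial p = new Polynomial(1, 2);    // 1 + 2x
        Polynomial q = new Polynomial(3, 0, 5); // 3 + 5x^2
        Polynomial r = p.plus(q);
        System.out.println(r.coefficient(0));   // 4
        System.out.println(r.coefficient(1));   // 2
        System.out.println(r.coefficient(2));   // 5
        System.out.println(p.coefficient(0));   // still 1: p is unchanged
    }
}
```

Because plus() returns a fresh instance, p and q can be freely shared after the call — which is exactly the reasoning-about-code benefit the accepted answer describes.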
I have an asp button. It's server-side so I can only show it for logged-in users, but I want it to run a JavaScript function, and it seems when it's runat="server" it always calls the postback event.

I also have a regular button (`<input...`>) not running at server and it works fine...

How can I make this button only run the JavaScript and not postback?
Have your javascript return false when it's done. ``` <asp:button runat="server".... OnClientClick="myfunction(); return false;" /> ```
``` YourButton.Attributes.Add("onclick", "return false"); ``` or ``` <asp:button runat="server" ... OnClientClick="return false" /> ```
How to disable postback on an asp Button (System.Web.UI.WebControls.Button)
[ "", ".net", "asp.net", "javascript", "ajax", "" ]
Got a question. I have images hosted on my server. I already know of the method when an image is uploaded to resize it and save, but I have another thought in mind. 1. I was wondering if there is a way to resize when the image is requested from the user. Not when it was uploaded by the user. So for example a user goes to upload an image and I DO NOT RESIZE it and save another copy of the resized image. Instead, when the image is requested by the user via an ASP.NET img control/tag it would resize the image on the fly to display it and display it via the img tag/control. Why would I want to do this? To save on disk space. Most servers have a disk space limit, but not a server processing limit. So I would like to save on disk space and use the processing space instead. **EDIT:** As a startup website its currently better that I save disk than saving processing time. I don't have much money for large amount of space at this moment. Hopefully it will change when the site launches. Any ideas? Thanks guys and girls.
I assume you can 'control' the urls to the resized images, so for example the full-sized image might be referenced as `<img src="uploads/myphoto.jpg"/>` the thumbnail could be to an ASPX or ASHX like `<img src="uploads/myphoto.jpg.ashx"/>`? This **[article on CodeProject - Dynamic Image Resize](http://www.codeproject.com/KB/aspnet/ImageResizeWithHttpModule.aspx)** seems to have exactly the source code you are looking for (and although it's in VB, it shouldn't be hard to port if you're a C# person). Hope that helps. Finally, I'd encourage you consider the various forms of caching (both using Http-Headers, to ensure the images are cached at the client or proxy whenever possible; and using built-in ASP.NET features to avoid unnecessary processing of the same images over-and-over). Although you'll be saving disk-quota, you're effectively slowing down every other page/request... just a thought.
Dynamic image resizing has numerous advantages, the least of which is reduced disk space usage. However, it does need to be combined with a form of persistent caching, such as either Amazon CloudFront or a disk cache. Dynamic image resizing gives you great agility on your web site, whereas pre-generating image variants locks you in, preventing the eventual changes you will have to make. When combined with caching, there is no run-time performance difference between the two. The [ImageResizer library offers disk caching, CloudFront caching, and correct memory and cache management](http://imageresizing.net). It's been constantly improved and maintained since 2007, and is quite bulletproof. It's running a few social networking sites as well, some having over a million images. It's a time-tested, traffic-tested, and unit-tested library :) It's also extremely simple to use - you just add `?width=x&height=y` to the query string. Functionality can be added via 20+ plugins, so you won't be weighed down by unused code and features. The article mentioned by CraigD is inherently limited in its performance by the fact that it uses an HttpHandler instead of using an HttpModule - an HttpHandler cannot pass a request back to IIS native code for execution after the resized image is written to disk. It also doesn't adjust jpeg encoding properly or play well with the ASP.NET cache or URL authorization system. Although, I do have to admit - compared to most of the sample code I've seen, it [violates far fewer of the image resizing pitfalls I've compiled](http://nathanaeljones.com/163/20-image-resizing-pitfalls/). I strongly suggest [using the ImageResizer library](http://imageresizing.net). It's good code, I wrote it :) If you do end up using sample code or writing your own, [please avoid these pitfalls](http://nathanaeljones.com/163/20-image-resizing-pitfalls/)!
Resize and Display image from server with ASP.NET
[ "", "c#", "asp.net", "image-processing", "" ]
Can anyone please tell me why this method won't compile?

```
void Statistics::readFromFile(string filename)
{
  string line;
  ifstream myfile (filename);
  if (myfile.is_open())
  {
    while (! myfile.eof() )
    {
      getline (myfile,line);
      cout << line << endl;
    }
    myfile.close();
  }
  else cout << "Unable to open file";
}
```

Should work, right? Yet, I always get the following error message:

```
Line Location Statistics.cpp:15: error: no matching function for call to
'std::basic_ifstream<char, std::char_traits<char> >::basic_ifstream(std::string*)'
```

Any help would be greatly appreciated.
```
ifstream myfile (filename);
```

should be:

```
ifstream myfile (filename.c_str() );
```

Also, your read-loop logic is wrong. It should be:

```
while ( getline( myfile,line ) ){
     cout << line << endl;
}
```

The eof() function that you are using is only meaningful **after** you have tried to read something. To see why this makes a difference, consider the simple code:

```
int main() {
    string s;
    while( ! cin.eof() ) {
        getline( cin, s );
        cout << "line is "<< s << endl;
    }
}
```

If you run this and type ctrl-Z or ctrl-D to indicate EOF **immediately**, the cout will be performed even though no line has actually been input (because of the EOF). In general, the eof() function is not very useful, and you should instead test the return value of functions like getline() or the stream extraction operators.
Read the compiler error:

```
no matching function for call to 'std::basic_ifstream<char, std::char_traits<char> >::basic_ifstream(std::string*)
```

**`No matching function for call to:`** It can't find the function you're trying to call

**`std::basic_ifstream<char, std::char_traits<char> >::`** - a member function of ifstream

**`basic_ifstream(std::string*)`** - the constructor which takes a string pointer as its argument

So you try to create an ifstream by passing a string pointer to its constructor. And it can't find a constructor that accepts such an argument. Since you're not passing a string pointer in the above, the code you've posted must be different from your actual code. Always copy/paste when asking about code. Typos make it impossible to figure out the problem.

In any case, as I recall, the constructor does not accept a string argument, but only a const char\*. So filename.c\_str() should do the trick.

Apart from that, you can do this a lot more simply:

```
ifstream myfile (filename.c_str());
std::copy(std::istream_iterator<std::string>(myfile),
          std::istream_iterator<std::string>(),
          std::ostream_iterator<std::string>(std::cout, "\n"));
```
Spot the error in this file reading code (C++)
[ "", "c++", "file-io", "" ]
Here's a bit of a different question for you computer history buffs. Help me settle a bet between myself and a buddy of mine: Q: When were stored procedures introduced into the SQL Server product? I need the year and/or version. Thanks. **EDIT:** Ok, going back beyond SQL Server to Sybase
As a side note, [read this for the history of SQL Server](http://blogs.msdn.com/euanga/archive/2006/01/19/514479.aspx) by an insider.
If you are asking about MS SQL Server, they have always been there, as they were there when Microsoft licensed the product from Sybase.
When were stored procedures introduced into the SQL Server product?
[ "", "sql", "sql-server", "" ]
In C# you can refer to values in a class using the 'this' keyword.

```
class MyClass
{
    private string foo;

    public string MyMethod()
    {
        return this.foo;
    }
}
```

While I presume the answer will likely be user preference, is it best practice to use the this keyword within a class for local values?
In the spirit of [DRY](http://en.wikipedia.org/wiki/Don%27t_repeat_yourself), I would say this is not a particularly useful practice in general. Almost any use of `this` can be shortened to an equivalent expression by just removing the `this`. One exception is if you have a local parameter which happens to have the same name as another class member; in that case you must distinguish between the two with `this`. But this is a situation you can easily avoid, by simply renaming the parameter.
I use the `this` keyword almost only when some member is hiding another, and when I need to pass the current class instance to a method for example: ``` class Employee { private string name; private string address; // Pass the current object instance to another class: public decimal Salary { get { return SalaryInfo.CalculateSalary(this); } } public Employee(string name, string address) { // Inside this constructor, the name and address private fields // are hidden by the paramters... this.name = name; this.address = address; } } ```
Should you always refer to local class variables with "this"
[ "", "c#", "" ]
I've never done anything with payment gateways, can anybody give me suggestions?
Payment gateways differ from country to country and they are agencies who can enable payment processing in your applications.

For example, in the UK, BT has its payment gateway. They call it [BTBuynet](http://www.epayments.bt.com/). In India, ICICI is one of the major payment gateway providers. You can also check out [paypal](http://www.paypal.com/).

Each of them has their own way of implementation. Usually every payment gateway has 2 modes for payment processing.

1. **Payment page**: This is the most commonly used. Sites usually redirect the user to the pay page hosted by the payment gateway. This type has some advantages and disadvantages.

   Advantages: We don't have to worry about validating or storing card details, as everything is taken care of by their pay page.

   Disadvantages: Since we redirect the user to a different site, the look and feel of the site will be different.

2. **Payment service**: This is the second type, in which usually a component or API is used. Card details are collected by the merchant (the application owner) and sent for payment processing.

Hope this gives you some idea about payment gateways. You can also check out some related questions in the RHS.
Well, decide on a payment gateway you want to work with. If you're just looking to learn about this online, I'd recommend going with PayPal's payment gateway. They really don't give good rates (they take a big bite out of the money you charge users), but they have a nice "sandbox" system which you can switch on and off, and it's free to develop with. This lets you run your system as if everything was "live": * Money will get credited to your sandbox account from the fake credit cards that you create * Transactions will act just like the live system: you'll get confirmations and notifications just like it was real. So a good place to start for that would be [PayPal's developer site](https://developer.paypal.com/). Oh, don't be afraid to use some other gateway - all the ones I've used have a way to test their system before it goes live. But many of them do not allow switching back to a "fake" simulation mode after it's gone live. So adding and testing new features after you got a production system is trickier.
I need some guidance on payment gateways
[ "", "php", "html", "ajax", "api", "payment-gateway", "" ]
```
#include<iostream>
using namespace std;

int main()
{
    char again;
    do
    {
        cout<<"you are in the while loop";
        cout<<"do you want to continue looping?";
        cin>>again;
    }
    while (again != 'n' || again != 'N');

    system("pause");
    return 0;
}
```

I know something is wrong with the test condition in the 'while', but I can't figure it out. When the user's input is neither 'n' nor 'N', the loop should keep on printing "you are in the while loop". Once 'n' or 'N' is pressed, the program will be terminated.

However, for my code, the program keeps on looping regardless of what character I enter. But when I change the '||' to '&&', the program runs as desired. Can anyone tell me what is going on?
This is a common boolean logic question. `||` means "or," which means "as long as one side of this is true, then the expression is true." So when you pass an uppercase `'N'` to `c != 'n' || c != 'N'` the program says "well, `'N'` is not equal to `'n'`, therefore one side of the expression is true, therefore the whole expression is true and there is no need to check the rest of the expression." Even when you press lowercase `'n'`, the program says "well, `'n'` is equal to `'n'`, but it's not equal to `'N'`, therefore one side of the expression is true, therefore the whole expression is true." This is what is happening in your while loop. On the other hand, `&&` means "and" which means "both sides of the expression must be true"; when you pass an uppercase `'N'` to `c != 'n' && c != 'N'` the program thinks "`'N'` is not equal to `'n'`, but it is equal to `'N'`, therefore only one side of the expression is true, therefore the expression is false." This gets confusing because if you were testing to see if the characters entered were equal to particular values you would use `||` (e.g., "I want to know if `'a'` **or** `'b'` **or** `'c'` was entered"). Basically, when you would use `||` for a particular expression, and you want the opposite of that expression then you need to change to `&&` (e.g., I want none of `'a'`, `'b'` or `'c'`; or to put it another way, the value cannot be `'a'` **and** it cannot be `'b'`, **and** it cannot be `'c'`"). Likewise, if you would use `&&` for a particular expression, and you want the opposite of that expression then you need to use `||`. This is one of De Morgan's laws, which I would recommend you read up on so you can avoid having to rediscover each of them on your own.
Yes: || is "or" and && is "and". *Every* character is *either* "not 'n'" or "not 'N'" because it can't possibly be *both* 'n' and 'N' simultaneously. Another (probably simpler to read) way of writing the condition would be: ``` !(again == 'n' || again == 'N') ``` which means "the opposite of (again is *either* 'n' or 'N')".
what is the difference in using && and || in the do...while loop?
[ "", "c++", "boolean-logic", "" ]
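The fix described above is an instance of De Morgan's laws: !(A || B) is equivalent to (!A && !B). A small mechanical check of the loop condition — in Java rather than the question's C++, since the boolean logic is identical:

```java
public class Main {
    public static void main(String[] args) {
        for (char again : new char[] {'n', 'N', 'y', 'x'}) {
            // The buggy condition: true for every character, so the loop never exits.
            boolean buggy = again != 'n' || again != 'N';
            // The intended condition: keep looping only while the input is neither.
            boolean intended = again != 'n' && again != 'N';
            // De Morgan: !(again == 'n' || again == 'N') matches the intended form.
            boolean deMorgan = !(again == 'n' || again == 'N');
            System.out.println(again + " buggy=" + buggy + " intended=" + intended
                    + " equal=" + (intended == deMorgan));
        }
    }
}
```

For 'n' and 'N' the buggy condition is still true, which is why the original loop never terminates.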
I am having a problem when trying to programmatically print a directory of word documents. In this example, I am trying to print only the files with the "3\_" prefix. The problem is that the file does not print unless there are two files with the 3\_ prefix. I have been looking around forever to figure this problem out. Is there something wrong with the way I am opening the file? It works only when there are two files in the directory, in which case it will print out only one of the two files.

Edit: I did try a messagebox and the path is correct. The filename is correct. Also, if I am watching the printer in the printers folder, a document will flash up for a brief second and then disappear (I have printing paused so that I can see the output). If word is giving me an error, why doesn't it show? And why does this work if there are two files in the directory with the 3\_ prefix?

Edit: I think it is a problem with the printout() method. When I set the app to visible and run it, the document opens fine, but nothing is printed. I can open the document manually and print (which works fine).

Edit: Thank you all for the answers. The background parameter in the printout() method was the issue. The program would quit before printing could fully spool (which is why I would see a document flash in the print queue and disappear). Turning background printing off required the document to stay open and print, which was key.
Thank you ``` string[] filesToCheck = Directory.GetFiles(clientDirectoryPath); Object filename = null; for (int i = 0; i < filesToCheck.Count();i++ ) { if(filesToCheck[i].Contains("3_")) { filename = filesToCheck[i]; wrdDoc = wrdApp.Documents.Open(ref filename, ref oMissing, ref oTrue, ref oFalse, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing); wrdDoc.PageSetup.FirstPageTray = letterHeadTray; wrdDoc.PageSetup.OtherPagesTray = defaultTray; wrdDoc.PrintOut(ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing); wrdDoc.Close(ref oFalse, ref oMissing, ref oMissing); wrdDoc = null; } } ```
Try setting the Background parameter (1st param) of the [PrintOut](http://msdn.microsoft.com/en-us/library/bb237242.aspx)() call to False. Probably the last print job is not completely spooled and gets canceled because the Word COM object is released too early.
Try using ``` string[] files = Directory.GetFiles(dir, "3_*.doc"); foreach(string file in files) { } ``` instead of ``` for (int i = 0; i < filesToCheck.Count(); i++ ) { } ```
Programmatically printing directory of word documents... last file always skipped
[ "", "c#", ".net", "printing", "word-automation", "page-setup", "" ]
``` System.Console.WriteLine(int.MaxValue); ``` This line gives me the answer of `2,147,483,647` as I have a 32-bit PC. Will the answer be same on a 64-bit PC?
Yes. > int.MaxValue: 2,147,483,647 Source: <https://www.dotnetperls.com/int-maxvalue>
**Yes, the answer will be the same on a 64-bit machine.** In .NET, an `int` is [a signed 32-bit integer](http://msdn.microsoft.com/en-us/library/5kzh1b5w.aspx), regardless of the processor. Its .NET framework type is `System.Int32`. The [C# Language specification](http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx) states: > The `int` type represents signed 32-bit > integers with values between > `–2,147,483,648` and `2,147,483,647`.
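Since the range follows from the fixed 32-bit width of the type rather than from the machine, it can be computed directly. A quick sketch (Python is used here only because integer width is language-defined, not hardware-defined):

```python
# A signed 32-bit integer ranges from -2**31 to 2**31 - 1,
# regardless of whether the host CPU is 32-bit or 64-bit.
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

print(INT32_MAX)  # 2147483647, the same value .NET reports for int.MaxValue
print(INT32_MIN)  # -2147483648
```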
What is the int.MaxValue on a 64-bit PC?
[ "", "c#", ".net", "int", "int64", "int32", "" ]
I'm trying to write to the windows registry at HKEY\_CURRENT\_USER\Software\appname, however I keep getting a permissions error when I attempt to write to the key. I have added the following to my assembly: ``` [assembly: RegistryPermissionAttribute(SecurityAction.RequestMinimum, Write = @"HKEY_CURRENT_USER\\Software")] ``` but this has not resolved the issue. Is there something else that I should be doing?
The [RegistryPermissionAttribute](http://msdn.microsoft.com/en-us/library/system.security.permissions.registrypermissionattribute.aspx) is part of [Code Access Security aka CAS](http://msdn.microsoft.com/en-us/library/930b76w0.aspx). This is a permission system that checks the permissions you have inside the .NET framework; these permissions are defined by the security policy. [There are 4 security policies](http://en.wikipedia.org/wiki/Code_Access_Security#Policy): * Enterprise - policy for a family of machines that are part of an Active Directory installation. * Machine - policy for the current machine. * User - policy for the logged on user. * AppDomain - policy for the executing application domain. The first 3 are configured in the configuration screen of the .NET Configuration tool, and the last is configured at runtime. The reason I explain this first is that the RegistryPermissionAttribute only checks your .NET permissions; it does not check the **Operating System** permissions. You could use the [System.Security.AccessControl](http://msdn.microsoft.com/en-us/library/system.security.accesscontrol.aspx) namespace to check the operating system permissions, but to get those permissions you'll probably need to either elevate or impersonate.
I don't suppose it's something as simple as you having opened the key without specifying that you want write access? The `OpenSubKey(string)` method only gives read-only access.
How to get registry write permissions in C#
[ "", "c#", "" ]
Somewhat inspired by [this question](https://stackoverflow.com/questions/659166/write-c-in-a-graphical-scratch-like-way/659203) about a graphical programming environment. I don't think that C++ or C# are really conducive to this type of environment, but perhaps there's something halfway there. Lots of IDEs that I've used will use syntax highlighting to change the foreground (or even the background) colour of text for keywords, strings, comments, etc... Are there IDEs out there that will highlight larger syntactic structures? Here's an example of what I'm thinking of. [Example code structure http://img256.imageshack.us/img256/9441/codestructure.png](http://img256.imageshack.us/img256/9441/codestructure.png) (Please don't comment on my poor choice of colours... I'm not a graphic designer for a reason.) While it's not a graphical drag-and-drop environment, the highlighting would still give an overall view of the structure of the code. Personally, learning C# after years of C++, I still catch myself tripping over the fact that at the end of a file you usually have the end of a class and the end of a namespace, so the end of a function is two levels in. (In a C++ code file, the end of a function is usually at the top level of indentation.) I help myself out there by throwing in little comments at the close brackets: ``` } // end class } // end namespace ``` But it seems to me that some automatic colouring would make that completely unnecessary. Anyway, has this been done already? Bonus if it's an add-on to Visual Studio.
The closest thing that I've seen is [Codekana](http://www.codekana.com/), although doesn't have "background syntax-highlighting" it colorizes the different flow-control structures: [![](https://i.stack.imgur.com/jWYEk.gif)](https://i.stack.imgur.com/jWYEk.gif) (source: [codekana.com](http://www.codekana.com/img/post-hls.gif)) --- [![](https://i.stack.imgur.com/GTzxj.gif)](https://i.stack.imgur.com/GTzxj.gif) (source: [codekana.com](http://www.codekana.com/img/nested-hdr.gif)) * Red for loops * Green for if-blocks * Brown for else-blocks * Aquamarine for switch-blocks * Olive for exception blocks * Orange for 'return'
I'm the author of [Codekana](http://www.codekana.com). Indeed, what you describe above was the main goal for the product. BTW, I'm about to publish an article about the "making of" and the underlying technology, which is pretty nifty. It will probably be available next week (March 26, '09 or so). Recommended reading, if I may say so myself. The reason Codekana only provides outlines, instead of a colored background, are limitations in VS's text rendering extensibility. I will hopefully be able to implement a solid-background version at some point in the future, although it will definitely require serious hacking and "rocket surgery". I would have commented above, instead of providing another answer, but my reputation doesn't allow commenting. :( [UPDATE: Thanks for the upvotes, now I can comment!]
Is there an IDE out there that does structural syntax highlighting?
[ "", "c#", "visual-studio", "ide", "syntax-highlighting", "" ]
I currently have code that calls `matplotlib.pylab.plot` multiple times to display multiple sets of data on the same screen, and Matplotlib scales each to the global min and max, considering all plots. Is there a way to ask it to scale each plot independently, to the min and max of that particular plot?
There's no direct support for this, but here's some code from a [mailing list posting](http://article.gmane.org/gmane.comp.python.matplotlib.general/1987) that illustrates two independent vertical axes: ``` x=arange(10) y1=sin(x) y2=10*cos(x) rect=[0.1,0.1,0.8,0.8] a1=axes(rect) a1.yaxis.tick_left() plot(x,y1) ylabel('axis 1') xlabel('x') a2=axes(rect,frameon=False) a2.yaxis.tick_right() plot(x,y2) a2.yaxis.set_label_position('right') ylabel('axis 2') a2.set_xticks([]) ```
This is how you create a single plot (add\_subplot(1,1,1)) and limit the scale on the y-axis. ``` myFig = figure() myPlot = myFig.add_subplot(1,1,1) myPlot.plot([1,2,3,4,5], [5,4,3,2,1], '+r') myPlot.set_ylim(1,5) # Limit y-axis: min 1, max 5 ```
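On newer Matplotlib versions, the cleanest way to get two independently scaled y-axes in one figure is `Axes.twinx()`; a minimal sketch (the data is made up for illustration, and the `Agg` backend is selected only so the example runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(10)
fig, ax1 = plt.subplots()
ax1.plot(x, np.sin(x), "b-")      # autoscales roughly to [-1, 1]
ax1.set_ylabel("axis 1")

ax2 = ax1.twinx()                 # second axes sharing x, independent y scale
ax2.plot(x, 10 * np.cos(x), "r-")  # autoscales roughly to [-10, 10]
ax2.set_ylabel("axis 2")
```

Each call to `plot` then scales against its own axes' data only, rather than the global min and max.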
Multiple overlapping plots with independent scaling in Matplotlib
[ "", "python", "matplotlib", "" ]
If .NET 2.0's nullable types had been there from version 1, would DBNull.Value have been needed in the first place? Or does an RDBMS null have no bearing on .NET's null? That is, would DBNull.Value still be needed regardless of whether .NET version 1 already had nullable types?
If [System.Data.DataSetExtensions.dll](http://msdn.microsoft.com/en-us/library/bb360891.aspx) is anything to go by, I'm guessing that DBNull.Value probably wouldn't exist if nullable value types had been available at the time. Using that library's extension methods on a DataRow that has a "foo" column set to DBNull.Value, you get... ``` row.Field<int?>("foo"); // returns a nullable int set to null row.Field<int>("foo"); // throws an InvalidCastException ``` This is, of course, incredibly convenient when combined with the C# ?? operator, when you want to provide a default value in place of a database null. Personally, I wasted no time implementing similar extension methods on IDataReader/IDataRecord.
Nullable types in .NET allow the variable in question to actually be null. DBNull is a way to say "in another environment, this value was considered to be null". Since we need a way to differentiate between *actually* null - or "natively" null, as in native to our current runtime - and null in another system we communicate with, nullable types and DBNull serve completely different purposes.
.NET 2.0 nullable types and database null musings
[ "", "c#", ".net", "database", "null", "" ]
I am currently being offered a job by an IT development business (note that I already do freelance work and it pays well enough). This job sounds good and it would let me keep my current work while doing the other job. The thing is, they want to outsource to me without giving me any credit whatsoever for the development, but with full payment. On the payment side it sounds good, but on the side of only helping another company build its reputation using MY work I don't quite like it (ego mostly, yes I know), so I started thinking and came up with the idea of possibly encrypting my JavaScript, PHP, etc. I've found some tools online to do that; however, I'm not sure how good an idea it actually is. I mean, would it affect the functionality or loading speed? Also, is it really advisable for me to do this? I don't know what to think right now... would you mind telling me your opinions on the matter? --- **Edit to Clarify** The thing here is not that I'm being contracted by a business so that I develop for them specifically; they just want to outsource what I do. That is, this IT business will pay me for doing it while they tell the client they did it themselves... which then you can think "well, they won't be able to duplicate it themselves" or "if they try to duplicate it, at least they'll take some time to understand the code", but let's face it: in the end, if they want to, they will be able to understand and replicate my code (or something like it)... Now the issue here is that I am interested in working with them since it'd get me jobs abroad (note that I'm in Mexico and the jobs would be for another country) and I'd really like to get known there, but this IT business won't agree to give me any credit... and I currently don't have many other options for getting known there.
The more you code, the better you get (I'd hope). Do the work, get better, and don't be greedy about your code. A lot of people would be stoked to be getting any work at all. If we were all encrypting our code to hide it from others, communities like this wouldn't exist.
You have to come to some kind of agreement beforehand about who owns the copyright to the code. Are they hiring you to create a product, or are they hiring you to create a code base that they will own and maintain as they desire? If they assume that they are the owners of the code, encrypting it may cause all kinds of trouble for you, such as not getting paid.
Is it recommended to encrypt javascript?
[ "", "javascript", "" ]
I have a varbinary(max) field in one of my tables but I don't need it every time and I'm looking for a way to fetch it from the database only when necessary. I'm using ADO.NET Entity Framework. How to do that?
The solution was to create a separate table with the varbinary field and make a 1-to-1 relationship between the tables.
It is not necessary to create a separate table. You should do several steps. Let's assume that we have a table 'Documents' (Id, Name, Data (varbinary)). 1. Open the EF designer, copy and paste the 'Document' entity. 2. Rename it to 'DocumentData'. Add a mapping to the 'Documents' table. 3. Delete the 'Data' property in the 'Document' entity. 4. Delete the 'Name' property in the 'DocumentData' entity. 5. Right-click the 'DocumentData' entity and add a new Association. Select a 1-to-1 association with the 'Document' entity. 6. Select the new association, go to Properties, click '...' on the 'Referential Constraint' field, select the 'Document' entity as principal, leave everything at the defaults (Id -> Id) and click OK. Now build the project. NOTE: when creating a new 'Document' entity you should also create a new 'DocumentData' entity, even if you don't want to place any data in it yet: ``` Document doc = new Document(); doc.Name = "My document"; doc.DocumentData = new DocumentData(); context.Documents.Add(doc); context.SaveChanges(); ```
How to load varbinary(max) fields only when necessary with ADO.NET Entity Framework?
[ "", "c#", "linq", "entity-framework", "ado.net", "linq-to-entities", "" ]
I need to write a compiler. It's a homework assignment at university. The teacher told us that we can use any API we want to do the parsing of the code, as long as it is a good one. That way we can focus more on the JVM code we will generate. So yes, I'll write a compiler in Java to generate Java. Do you know any good API for this? Should I use regex? I normally write my parsers by hand, though that is not advisable in this scenario. Any help would be appreciated.
Regex is good to use in a compiler, but only for recognizing tokens (i.e. no recursive structures). The classic way of writing a compiler is having a **lexical analyzer** for recognizing tokens, a **syntax analyzer** for recognizing structure, a **semantic analyzer** for recognizing meaning, an **intermediate code generator**, an **optimizer**, and last a **target code generator**. Any of those steps can be merged, or skipped entirely, if makes the compiler easier to write. There have been many tools developed to help with this process. For Java, you can look at * ANTLR - <http://www.antlr.org/> * Coco/R - <http://ssw.jku.at/Coco/> * JavaCC - <https://javacc.dev.java.net/> * SableCC - <http://sablecc.org/>
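The split described above — regular expressions for the lexical analyzer, a real parser for recursive structure — can be sketched in a few lines. A language-agnostic illustration (Python here; the token classes are invented for the example):

```python
import re

# Token specification: one regex per token class, tried in order.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs; whitespace is skipped."""
    for m in MASTER_RE.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 40 + 2")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

The syntax analyzer then consumes this flat token stream; tools like ANTLR or JavaCC generate both stages for you from a grammar file.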
I would recommend [ANTLR](http://antlr.org/), primarily because of its output generation capabilities via StringTemplate. What is better is that [Terence Parr's book](http://www.pragmaticprogrammer.com/titles/tpantlr/index.html) on the same is by far one of the better books oriented towards writing compilers with a parser generator. Then you have [ANTLRWorks](http://antlr.org/works/index.html) which enables you to study and debug your grammar on the fly. To top it all, the [ANTLR wiki + documentation](http://www.antlr.org/wiki/display/ANTLR3/ANTLR+v3+documentation), (although not comprehensive enough to my liking), is a good place to start off for any beginner. It helped me refresh knowledge on compiler writing in a week.
How can I parse code to build a compiler in Java?
[ "", "java", "parsing", "compiler-construction", "parser-generator", "" ]
I am building a web application which will generate XML from a database based on a specific schema and depending on certain input parameters (such as dates or search terms) carried on a querystring. Is the best approach to write a web service which returns the required XML or to write an ashx handler that outputs the XML? Am I able to specify the XML exactly as I want it in each instance?
Consider writing a REST-style WCF service using [WebHttpBinding](http://msdn.microsoft.com/en-us/library/system.servicemodel.webhttpbinding.aspx).
I would do ashx in a situation like this. From your description, it sounds like a single search facility, a single table (or limited group of tables in a single database) and a relatively limited scope. All the MVC stuff is probably overkill. Don't over-engineer it. Just create a class that implements IHttpHandler (or just do a generic handler if you're in Visual Studio 2005 or higher), parse the query string, either create a dynamic SQL string or pass the parameters to a stored procedure, and then use an XmlTextWriter to create the output. You can point the XmlTextWriter either to the output stream of the response or to a memory stream. I use the memory stream approach because it gives me better error handling options. Then set the content type to application/xml and stream the results. I like this approach because it's easy to understand, easy to implement, easy to maintain, and gives you complete control. The downsides are that it's tightly bound to the database and may be less flexible than one of the web service framework based approaches if your application's scope increases over time.
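The moving parts of such a handler — parse the query string, fetch matching rows, serialize XML into the response — look the same in any stack. A minimal sketch in Python's standard library, with a hypothetical in-memory table standing in for the database query:

```python
import xml.etree.ElementTree as ET
from urllib.parse import parse_qs

# Hypothetical data standing in for the database query results.
ORDERS = [
    {"id": "1", "date": "2009-03-20", "total": "19.99"},
    {"id": "2", "date": "2009-03-21", "total": "5.00"},
]

def handle(query_string):
    """Filter by the 'date' parameter and serialize the matches as XML."""
    params = parse_qs(query_string)
    wanted = params.get("date", [None])[0]
    root = ET.Element("orders")
    for row in ORDERS:
        if wanted is None or row["date"] == wanted:
            ET.SubElement(root, "order", id=row["id"], total=row["total"])
    return ET.tostring(root, encoding="unicode")

print(handle("date=2009-03-20"))
```

In the ashx version, the string returned here would instead be written to the response stream with content type application/xml.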
Best approach for web application returning only XML?
[ "", "c#", ".net", "web-services", "" ]
We've gotten a homework assignment in Java, which relates to inheritance. I don't have a problem with the programming in itself, but I'm a bit unsure about some of the math, and would like confirmation/corrections from someone a bit more knowledgeable. The assignment starts with an abstract class, GeometricObject, which is extended into three two-dimensional objects: a rectangle, a circle and a triangle. Those three are then extended into a cuboid for the rectangle, a cylinder and a sphere for the circle, and a triangular prism for the triangle. Each of these three-dimensional objects is hollow, has a defined thickness, and is made of a special metal, so we are to calculate their weight. And herein lies the problem, since I'm a bit unsure as to how to find the "inner volume" of some of them. * Cuboid: Here I assume that I can just subtract 2 \* thickness from the width, height and depth, and then everything looks fine. * Cylinder: Subtract thickness from the radius making up the base, and 2\*thickness from the height * Sphere: Subtract thickness from the radius * Prism: This is where I'm a bit stuck. Each object gets passed a baseline, height of the triangle, and the height for the entire prism. How can I use this in order to find the "inner prism" representing the inner volume? *Update*: Forgot to mention that when creating the object, we specify the outermost sizes, and the hollow part is inside of this. The other way around is not allowed. *Update again*: The triangle is an isosceles triangle. *Update yet again*: Mixed up radius and diameter for the circular objects. Corrected now.
I think you cannot get this result from the data you have (baseline length & triangle height). You have to get other information, like location of the points or the angles at the baseline. Edit: since the triangle is isosceles: As AnthonyWJones already pointed out, the inner triangle is similar to the outer triangle. Therefore, the only thing you need is find the ratio between the two. [sketch http://img76.imageshack.us/img76/4164/g2654.png](http://img76.imageshack.us/img76/4164/g2654.png) You can find it easily from the height. Since triangles CQP and ACS are similar ``` h2 : |PQ| = |AC| : |AS| ``` where ``` |PQ| = h1 (= the thickness of the metal) |AC| = sqrt(base^2/4+height^2) |AS| = base/2 ``` Then, you compute `h2` and the ratio `r = (height - h1 - h2)/height` is the ratio between the two triangles. The area of the inner triangle is then `r^2 * area of the outer triangle`.
One thing you know about the inner prism is that it will have the same ratios as the outer prism. In other words, given the height of the inner prism you can calculate the inner base length and from there the volume. You know the base will have 1 unit thickness. So that leaves calculating the distance from the pinnacle of the inner prism to the pinnacle of the outer prism. That distance is the hypotenuse of a right-angled triangle. The angles in the triangle are known, since they are a function of the base length and height. One side of the triangle is of `thickness` length, being the perpendicular from the inner edge at the inner pinnacle to the outer edge. (The final side of the triangle is where that perpendicular intersects the outer edge, up to the outer pinnacle.) This is enough info to use standard trig to calculate the hypotenuse length. This length plus 1 thickness (for the base) subtracted from the original height gives you the inner height. The ratio between the inner and outer heights can be applied to the base length. There are probably cleverer ways to do this, but this would be my common bloke approach.
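The similar-triangles reasoning in these answers is easy to sanity-check numerically. A sketch (Python) assuming an isosceles cross-section given by base and height and a uniform wall thickness, following the ratio derivation above (the 1-unit base thickness is left out for simplicity):

```python
import math

def inner_outer_areas(base, height, thickness):
    """Cross-section areas of an isosceles triangle and its inner
    offset triangle, using the similar-triangles ratio: the inner
    triangle is the outer one scaled by r, so its area scales by r**2."""
    slant = math.sqrt(base**2 / 4 + height**2)  # |AC|, the slanted side
    h2 = thickness * slant / (base / 2)         # extra height lost at the apex
    r = (height - thickness - h2) / height      # linear scale factor
    outer = base * height / 2
    return outer, r**2 * outer

# Example: base 6, height 4 gives slant 5, h2 = 5/6, r = 2/3.
outer, inner = inner_outer_areas(base=6.0, height=4.0, thickness=0.5)
print(outer, inner)  # 12.0 and 16/3 ≈ 5.333
```

Multiplying the inner area by the inner prism height (outer height minus two wall thicknesses) then gives the hollow volume to subtract.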
Calculating volumes of hollow three dimensional geometric objects
[ "", "java", "math", "geometry", "" ]
Lets say I have the following Django model: ``` class StandardLabel(models.Model): id = models.AutoField(primary_key=True) label = models.CharField(max_length=255) abbreviation = models.CharField(max_length=255) ``` Each label has an ID number, the label text, and an abbreviation. Now, I want to have these labels translatable into other languages. What is the best way to do this? As I see it, I have a few options: 1: Add the translations as fields on the model: ``` class StandardLabel(models.Model): id = models.AutoField(primary_key=True) label_english = models.CharField(max_length=255) abbreviation_english = models.CharField(max_length=255) label_spanish = models.CharField(max_length=255) abbreviation_spanish = models.CharField(max_length=255) ``` This is obviously not ideal - adding languages requires editing the model, the correct field name depends on the language. 2: Add the language as a foreign key: ``` class StandardLabel(models.Model): id = models.AutoField(primary_key=True) label = models.CharField(max_length=255) abbreviation = models.CharField(max_length=255) language = models.ForeignKey('languages.Language') ``` This is much better, now I can ask for all labels with a certain language, and throw them into a dict: ``` labels = StandardLabel.objects.filter(language=1) labels = dict((x.pk, x) for x in labels) ``` But the problem here is that the labels dict is meant to be a lookup table, like so: ``` x = OtherObjectWithAReferenceToTheseLabels.object.get(pk=3) thelabel = labels[x.labelIdNumber].label ``` Which doesn't work if there is a row per label, possibly with multiple languages for a single label. 
To solve that one, I need another field: ``` class StandardLabel(models.Model): id = models.AutoField(primary_key=True) group_id = models.IntegerField(db_index=True) label = models.CharField(max_length=255) abbreviation = models.CharField(max_length=255) language = models.ForeignKey('languages.Language') class Meta: unique_together=(("group_id", "language"),) #and I need to group them differently: labels = StandardLabel.objects.filter(language=1) labels = dict((x.group_id, x) for x in labels) ``` 3: Throw label text out into a new model: ``` class StandardLabel(models.Model): id = models.AutoField(primary_key=True) text = models.ManyToManyField('LabelText') class LabelText(models.Model): id = models.AutoField(primary_key=True) label = models.CharField(max_length=255) abbreviation = models.CharField(max_length=255) language = models.ForeignKey('languages.Language') labels = StandardLabel.objects.filter(text__language=1) labels = dict((x.pk, x) for x in labels) ``` But then this doesn't work, and causes a database hit every time I reference the label's text: ``` x = OtherObjectWithAReferenceToTheseLabels.object.get(pk=3) thelabel = labels[x.labelIdNumber].text.get(language=1) ``` I've implemented option 2, but I find it very ugly - I don't like the group\_id field, and I can't think of anything better to name it. In addition, StandardLabel as I'm using it is an abstract model, which I subclass to get different label sets for different fields. I suppose that if option 3 /didn't/ hit the database, it's what I'd choose. I believe the real problem is that the filter `text__language=1` doesn't cache the `LabelText` instances, and so the DB is hit when I `text.get(language=1)`. What are your thoughts on this? Can anyone recommend a cleaner solution? **Edit**: Just to make it clear, these are not form labels, so the Django Internationalization system doesn't help.
I'd much prefer to add a field per language than a new model instance per language. It does require schema alteration when you add a new language, but that isn't hard, and how often do you expect to add languages? In the meantime, it'll give you better database performance (no added joins or indexes) and you don't have to muck up your query logic with translation stuff; keep it all in the templates where it belongs. Even better, use a reusable app like [django-transmeta](http://code.google.com/p/django-transmeta/) or [django-modeltranslation](http://code.google.com/p/django-modeltranslation/) that makes this stupid simple and almost completely transparent.
Another option you might consider, depending on your application design of course, is to make use of Django's internationalization features. The approach they use is quite similar to the approach found in desktop software. I see the question was edited to add a reference to Django internationalization, so you do know about it, but the intl features in Django apply to much more than just Forms; they touch quite a lot, and need only a few tweaks to your app design. Their docs are here: <http://docs.djangoproject.com/en/dev/topics/i18n/#topics-i18n> The idea is that you define your model as if there was only one language. In other words, make no reference to language at all, and put only, say, English in the model. So: ``` class StandardLabel(models.Model): abbreviation = models.CharField(max_length=255) label = models.CharField(max_length=255) ``` I know this looks like you've totally thrown out the language issue, but you've actually just relocated it. Instead of the language being in your data model, you've pushed it to the view. The Django internationalization features allow you to generate text translation files, and provide a number of features for pulling text out of the system into files. This is actually quite useful because it allows you to send plain files to your translator, which makes their job easier. Adding a new language is as easy as getting the file translated into a new language. The translation files define the label from the database, and a translation for that language. There are functions for handling the language translation dynamically at run time for models, admin views, javascript, and templates. 
For example, in a template, you might do something like: ``` <b>Hello {% trans "Here's the string in english" %}</b> ``` Or in view code, you could do: ``` # See docs on setting language, or getting Django to auto-set language s = StandardLabel.objects.get(id=1) lang_specific_label = ugettext(s.label) ``` Of course, if your app is all about entering new languages `on the fly`, then this approach may not work for you. Still, have a look at the Internationalization project as you may either be able to use it "as is", or be inspired to a django-appropriate solution that does work for your domain.
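The division of labour described here — a language-free model, translation at render time — can be sketched without Django at all. The dict below is a made-up stand-in for gettext's message catalogs:

```python
# Hypothetical catalogs: one dict per language, keyed by the source string.
CATALOGS = {
    "es": {"Label": "Etiqueta", "Abbreviation": "Abreviatura"},
    "en": {},  # the source language needs no entries
}

def ugettext(message, language):
    """Return the translation if the catalog has one, else the source text."""
    return CATALOGS.get(language, {}).get(message, message)

# The model stores only the source-language text...
label = "Label"
# ...and the view/template layer picks the display string at render time.
print(ugettext(label, "es"))  # Etiqueta
print(ugettext(label, "fr"))  # Label (no catalog -> fall back to source)
```

Adding a language then means adding a catalog, not altering the schema — which is exactly the property the schema-based options in the question struggle to get.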
Django labels and translations - Model Design
[ "", "python", "django", "django-models", "localization", "" ]
I would like to write a function `GetHashCodeOfList()` which returns a hash-code of a list of strings regardless of order. Given 2 lists with the same strings should return the same hash-code. ``` ArrayList list1 = new ArrayList() list1.Add("String1"); list1.Add("String2"); list1.Add("String3"); ArrayList list2 = new ArrayList() list2.Add("String3"); list2.Add("String2"); list2.Add("String1"); GetHashCodeOfList(list1) = GetHashCodeOfList(list2) //this should be equal. ``` I had a few thoughts: 1. I can first sort the list, then combine the sorted list into 1 long string and then call `GetHashCode()`. However sorting is a slow operation. 2. I can get the hash of each individual string (by calling `string.GetHashCode()`) in the list, then multiplying all hashes and calling Mod `UInt32.MaxValue`. For example: `"String1".GetHashCode() * "String2".GetHashCode * … MOD UInt32.MaxValue`. But this results in a number overflow. Does anyone have any thoughts? Thanks in advance for your help.
There are various different approaches here under two main categories, each typically with their own benefits and disadvantages in terms of effectiveness and performance. It is probably best to choose the simplest algorithm for your application and only use the more complex variants if necessary for your situation. Note that these examples use `EqualityComparer<T>.Default` since that will deal with null elements cleanly. You could do better than zero for null if desired. If T is constrained to struct it is also unnecessary. You can hoist the `EqualityComparer<T>.Default` lookup out of the function if so desired. ## Commutative Operations If you use operations on the hashcodes of the individual entries which are [commutative](http://en.wikipedia.org/wiki/Commutativity) then this will lead to the same end result regardless of order. There are several obvious options on numbers: ### XOR ``` public static int GetOrderIndependentHashCode<T>(IEnumerable<T> source) { int hash = 0; foreach (T element in source) { hash = hash ^ EqualityComparer<T>.Default.GetHashCode(element); } return hash; } ``` One downside of that is that the hash for { "x", "x" } is the same as the hash for { "y", "y" }. If that's not a problem for your situation though, it's probably the simplest solution. ### Addition ``` public static int GetOrderIndependentHashCode<T>(IEnumerable<T> source) { int hash = 0; foreach (T element in source) { hash = unchecked (hash + EqualityComparer<T>.Default.GetHashCode(element)); } return hash; } ``` Overflow is fine here, hence the explicit `unchecked` context. There are still some nasty cases (e.g. {1, -1} and {2, -2}), but it's more likely to be okay, particularly with strings. In the case of lists that may contain such integers, you could always implement a custom hashing function (perhaps one that takes the index of recurrence of the specific value as a parameter and returns a unique hash code accordingly). 
Here is an example of such an algorithm that gets around the aforementioned problem in a fairly efficient manner. It also has the benefit of greatly increasing the distribution of the hash codes generated (see the article linked at the end for some explanation). A mathematical/statistical analysis of exactly how this algorithm produces "better" hash codes would be quite advanced, but testing it across a large range of input values and plotting the results should verify it well enough. ``` public static int GetOrderIndependentHashCode<T>(IEnumerable<T> source) { int hash = 0; int curHash; int bitOffset = 0; // Stores number of occurrences so far of each value. var valueCounts = new Dictionary<T, int>(); foreach (T element in source) { curHash = EqualityComparer<T>.Default.GetHashCode(element); if (valueCounts.TryGetValue(element, out bitOffset)) valueCounts[element] = bitOffset + 1; else valueCounts.Add(element, 1); // The current hash code is shifted (with wrapping) one bit // further left on each successive recurrence of a certain // value to widen the distribution. // 37 is an arbitrary low prime number that helps the // algorithm to smooth out the distribution. hash = unchecked(hash + ((curHash << bitOffset) | (curHash >> (32 - bitOffset))) * 37); } return hash; } ``` ### Multiplication This has few benefits, if any, over addition: small numbers and a mix of positive and negative numbers may lead to a better distribution of hash bits. On the negative side, "1" becomes a useless entry contributing nothing, and any zero element results in a zero. You can special-case zero to avoid this major flaw. 
``` public static int GetOrderIndependentHashCode<T>(IEnumerable<T> source) { int hash = 17; foreach (T element in source) { int h = EqualityComparer<T>.Default.GetHashCode(element); if (h != 0) hash = unchecked (hash * h); } return hash; } ``` ## Order first The other core approach is to enforce some ordering first, then use any hash combination function you like. The ordering itself is immaterial so long as it is consistent. ``` public static int GetOrderIndependentHashCode<T>(IEnumerable<T> source) { int hash = 0; foreach (T element in source.OrderBy(x => x, Comparer<T>.Default)) { // f is any function/code you like returning int hash = f(hash, element); } return hash; } ``` This has some significant benefits in that the combining operations possible in `f` can have significantly better hashing properties (distribution of bits for example) but this comes at significantly higher cost. The sort is `O(n log n)` and the required copy of the collection is a memory allocation you can't avoid given the desire to avoid modifying the original. `GetHashCode` implementations should normally avoid allocations entirely. One possible implementation of `f` would be similar to that given in the last example under the Addition section (e.g. any constant number of bit shifts left followed by a multiplication by a prime - you could even use successive primes on each iteration at no extra cost, since they only need be generated once). That said, if you were dealing with cases where you could calculate and cache the hash and amortize the cost over many calls to `GetHashCode` this approach may yield superior behaviour. Also the latter approach is even more flexible since it can avoid the need to use the `GetHashCode` on the elements if it knows their type and instead use per byte operations on them to yield even better hash distribution. Such an approach would likely be of use only in cases where the performance was identified as being a significant bottleneck. 
Finally, if you want a reasonably comprehensive and fairly non-mathematical overview of the subject of hash codes and their effectiveness in general, [these blog posts](http://blog.roblevine.co.uk/category/hash-codes/) would be worthwhile reads, in particular the *Implementing a simple hashing algorithm (pt II)* post.
An alternative to sorting the string lists would be to get the hash codes of the strings and then sort the hash codes. (Comparing ints is less expensive than comparing strings.) You can then use an algorithm to merge the hash codes that (hopefully) gives a better distribution.

Example:

```
public static int GetHashCodeOfList<T>(IEnumerable<T> list)
{
    List<int> codes = new List<int>();
    foreach (T item in list)
    {
        codes.Add(item.GetHashCode());
    }
    codes.Sort();
    int hash = 0;
    foreach (int code in codes)
    {
        unchecked
        {
            hash *= 251; // multiply by a prime number
            hash += code; // add next hash code
        }
    }
    return hash;
}
```
Getting hash of a list of strings regardless of order
[ "", "c#", ".net", "vb.net", "string", "hash", "" ]
I have a program which is like this:

```
list<int>::iterator n = alist.begin();
while(n != (list<int>::iterator)0)
{
    printf("Element is %d\n", *n);
    n = alist.erase(n);
}
```

So here I am comparing the iterator with zero, but after deleting the last element the program aborts with this error:

```
*** glibc detected *** ./new: free(): invalid pointer: 0xbf99cb10 ***
======= Backtrace: =========
/lib/libc.so.6[0xb7d956e1]
/lib/libc.so.6(cfree+0x89)[0xb7d96d79]
/usr/lib/libstdc++.so.6(_ZdlPv+0x21)[0xb7f3ff81]
./new[0x8048c81]
./new[0x8048ca6]
./new[0x8048d07]
./new[0x8048d39]
./new(__gxx_personality_v0+0x216)[0x804888e]
/lib/libc.so.6(__libc_start_main+0xdc)[0xb7d46f9c]
./new(__gxx_personality_v0+0x49)[0x80486c1]
======= Memory map: ========
08048000-0804a000 r-xp 00000000 08:09 3704751    /home/sathya/chaithra/archivesthrash/new
```

If I want the iterator to be zero when the queue/list is empty, what should I do? Because in my project I need to compare this iterator with zero only, and not with alist.end(). What might a solution be?
Why do you think the iterator will ever "be zero"? Iterators are not pointers or indexes. If you need to check if a container is empty, use the empty() member function.
Change this to ``` list<int>:: iterator n = alist.begin(); while(n!= alist.end()) { printf("Element is %d\n",*n); n = alist.erase(n); } ``` or ``` list<int>:: iterator n = alist.begin(); while(alist.size() > 0) { printf("Element is %d\n",*n); n = alist.erase(n); } ``` because you must not compare an iterator to NULL - that's not a defined state for an iterator.
When will STL iterator be equal to zero?
[ "", "c++", "stl", "iterator", "" ]
I definitely remember seeing somewhere an example of doing so using reflection or something. It was something that had to do with `SqlParameterCollection`, which is not creatable by a user (if I'm not mistaken). Unfortunately I cannot find it any longer.

Can anyone please share this trick here? Not that I consider it a valid approach in development; I'm just very interested in the possibility of doing this.
``` // the types of the constructor parameters, in order // use an empty Type[] array if the constructor takes no parameters Type[] paramTypes = new Type[] { typeof(string), typeof(int) }; // the values of the constructor parameters, in order // use an empty object[] array if the constructor takes no parameters object[] paramValues = new object[] { "test", 42 }; TheTypeYouWantToInstantiate instance = Construct<TheTypeYouWantToInstantiate>(paramTypes, paramValues); // ... public static T Construct<T>(Type[] paramTypes, object[] paramValues) { Type t = typeof(T); ConstructorInfo ci = t.GetConstructor( BindingFlags.Instance | BindingFlags.NonPublic, null, paramTypes, null); return (T)ci.Invoke(paramValues); } ```
You can use one of the overloads of **Activator.CreateInstance** to do this: [`Activator.CreateInstance(Type type, bool nonPublic)`](https://msdn.microsoft.com/en-us/library/he47tyc4(v=vs.110).aspx)

Use `true` for the `nonPublic` argument, because `true` matches a public or non-public default constructor, while `false` matches only a public default constructor.

For example:

```
class Program
{
    public static void Main(string[] args)
    {
        Type type = typeof(Foo);
        Foo f = (Foo)Activator.CreateInstance(type, true);
    }
}

class Foo
{
    private Foo()
    {
    }
}
```
How to instantiate an object with a private constructor in C#?
[ "", "c#", "constructor", "instantiation", "private-constructor", "" ]
What is the best way to grab n items from an IEnumerable<T> in random order? I'm writing a store API and need to provide a small set of random items from a sometimes huge enumeration of items. The underlying enumerable is sometimes an array, and sometimes a lazily evaluated filter of said array.

Since I'm just grabbing a proportionally small number of items from the enumeration, it is better to use some sort of repeated random index into the enumeration with a dupe check every time, rather than randomly sorting the entire list using an existing algorithm and grabbing the top x, right? Any better ideas?
If you know the number of items in advance, it's fairly trivial to generate n distinct random numbers within that range, and then grab the items at those indexes.
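To make the idea above concrete, here is a minimal sketch (language-agnostic in spirit, shown in JavaScript; `pickRandom` is a hypothetical helper name): keep drawing random indexes, rejecting duplicates, until n distinct ones are collected.

```javascript
// Sketch: pick n distinct items from an array by repeated random
// indexing with a duplicate check. Assumes n <= items.length.
function pickRandom(items, n) {
  const chosen = new Set(); // indexes picked so far; Set rejects duplicates
  while (chosen.size < n) {
    chosen.add(Math.floor(Math.random() * items.length));
  }
  return Array.from(chosen, i => items[i]);
}
```

Because duplicates are rejected, the expected number of draws grows sharply as n approaches the collection size; for large n a full shuffle (or reservoir sampling, as the other answer suggests) is the better tool.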
In another answer I provided a way of returning [a single random element](https://stackoverflow.com/questions/648196/random-row-from-linq-to-sql/648240#648240) from a sequence, using just a single pass. I *suspect* this could be adjusted reasonably easily to use a circular buffer and select a random sequence of a given size, but you'd have to be fairly careful to get the probabilities balanced.
What is the best way to efficiently extract a small random subset of a large enumerable?
[ "", "c#", ".net", "sorting", "random", "" ]
In SQL Server/MSSQL, here's the problem:

```
SELECT * from [Translation Color] order by [Language Code]
```

I want records in alphabetical order, starting from the letter 'I'. Example of result:

'Ioren'
'Iumen'
'Tart'
'Arfen'
'Coldry'

I don't want to use a union or more SQL statements; I'm just trying to achieve it with a special ORDER BY clause. I've tried:

```
ORDER BY <field> REGEXP '^I' DESC
```

but it didn't work. Any ideas?
This should do it:

```
ORDER BY CASE WHEN SUBSTRING([Language Code],1,1) = 'I' THEN 1 ELSE 0 END DESC
```

EDIT: Full answer for ordering completely starting at I, then looping back round to H, is:

```
ORDER BY
    CASE WHEN ASCII(UPPER(SUBSTRING([Language Code],1,1))) < 73
        THEN ASCII(UPPER(SUBSTRING([Language Code],1,1))) + 26
        ELSE ASCII(UPPER(SUBSTRING([Language Code],1,1)))
    END ASC,
    [Language Code] ASC
```

Please note that this will affect performance on large tables.
``` SELECT * FROM [Translation Color] ORDER BY CASE WHEN [Language Code] LIKE '[I-Zi-z]%' THEN 0 ELSE 1 END, [Language Code] ```
Get records ordered alphabetically starting by a certain letter in Sql Server
[ "", "sql", "sql-server", "regex", "sql-order-by", "" ]
My Code:

```
// Read in Xml-file
XmlDocument doc = new XmlDocument();
doc.Load("C:/Web.config");
XmlNode d = doc.SelectSingleNode("/configuration");

XmlNode MYNODE = doc.CreateNode("element", "connectionStrings", "");
//newParent.(childNode);
d.AppendChild(MYNODE);

//Saving the document
doc.Save("C:/Web.config");
```

My output in my Web.config:

```
<connectionStrings />
```

The output that I actually want in my Web.config:

```
<connectionStrings>
</connectionStrings>
```

What must I change in my code to get the correct output? Also, what must I do if I want my tags to appear just above another tag... say my --SharePoint-- tag.

Regards
Etienne
Both of them are well-formed XML. But if you add new children to your appended child node, you'll get what you want. For example, just add a space into the connectionStrings node:

```
XmlNode MYNODE = doc.CreateNode("element", "connectionStrings", "");
MYNODE.InnerText = " ";
```

This will have no effect on the actual use of the connectionStrings element, but the output will be as you want.
The output is correct. Since you do not have any child elements within the `connectionStrings` tag, it renders as an empty tag. `<connectionStrings />` means the same thing as `<connectionStrings></connectionStrings>`.

If you want to insert a tag before a particular node, use the `InsertBefore` method on that node's parent:

```
XmlNode sharePoint = doc.SelectSingleNode("//SharePoint");
XmlNode MYNODE = doc.CreateNode("element", "connectionStrings", "");
sharePoint.ParentNode.InsertBefore(MYNODE, sharePoint);
```
Creating a Parent Node for XML document in C#
[ "", "c#", "asp.net", "xml", "" ]
I would like to know during execution of a program whether it is being executed using the Mono runtime or the Microsoft runtime. I'm currently using the following code to determine whether I'm on a MS CLR: ``` static bool IsMicrosoftCLR() { return RuntimeEnvironment.GetRuntimeDirectory().Contains("Microsoft"); } ``` However, this is somewhat dependent on the installation folder of the runtime and I'm not sure whether this will work on all installations. Is there a better way to check for the current runtime?
From the Mono Project's [Guide to Porting Winforms Applications](http://www.mono-project.com/Guide:_Porting_Winforms_Applications): ``` public static bool IsRunningOnMono () { return Type.GetType ("Mono.Runtime") != null; } ``` I'm sure you'll have a lot more questions, so worth checking this guide and the [mono-forums](http://www.go-mono.com/forums/)
You can check for the Mono runtime like this:

```
bool IsRunningOnMono = (Type.GetType ("Mono.Runtime") != null);
```
How to detect which .NET runtime is being used (MS vs. Mono)?
[ "", "c#", "mono", "clr", "" ]
I have a small form that displays some progress information. Very rarely I have to show a rather long message and I want to be able to resize this form when needed so that this message fits in the form. So how do I find out how wide string `S` will be rendered in font `F`?
It depends on the rendering engine being used. You can basically switch between GDI and GDI+. Switching can be done by setting the `UseCompatibleTextRendering` property accordingly.

When using GDI+ you should use `MeasureString`:

```
string s = "A sample string";
SizeF size = e.Graphics.MeasureString(s, new Font("Arial", 24));
```

When using GDI (i.e. the native Win32 rendering) you should use the `TextRenderer` class:

```
SizeF size = TextRenderer.MeasureText(s, new Font("Arial", 24));
```

See this article: [Text Rendering: Build World-Ready Apps Using Complex Scripts In Windows Forms Controls](http://msdn.microsoft.com/en-us/magazine/cc751527.aspx)
How about this:

```
SizeF stringsize = graphics.MeasureString("hello", myFont);
```

([Here](http://msdn.microsoft.com/en-us/library/ms535829(VS.85).aspx) is the MSDN link.)
How to determine the size of a string given a font
[ "", "c#", ".net", "winforms", "fonts", "" ]
I am not interested in the contents of a row, I just want to know if a row exists. The `Name` column is a primary key, so there will either be 0 or 1 matching rows. Currently, I am using: ``` if ((from u in dc.Users where u.Name == name select u).Count() > 0) // row exists else // row doesn't exist ``` While the above works, it does a lot of unnecessary work by selecting all the contents of the row (if it exists). Does the following create a faster query: ``` if (dc.Users.Where(u => u.Name == name).Any()) ``` ...or is there an even faster query?
The `Count()` approach may do extra work, as (in TSQL) `EXISTS` or `TOP 1` are often much quicker; the db can optimise "is there at least one row". Personally, I would use the any/predicate overload: ``` if (dc.Users.Any(u => u.Name == name)) {...} ``` Of course, you can compare what each one does by watching the TSQL: ``` dc.Log = Console.Out; ```
Of course,

```
if (dc.Users.Where(u => u.Name == name).Any())
```

is best. And if there are multiple conditions to check, it is very simple to extend. Say you want to check the user's company as well:

```
if (dc.Users.Where(u => u.ID == Id && u.Company == company).Any())
```
What is the fastest way to determine if a row exists using Linq to SQL?
[ "", "c#", "linq-to-sql", "lambda", "" ]
I am looking for a regex for JavaScript to search for text ("span" for example) in HTML.

Example:

> `<div>Lorem span Ipsum dor<a href="blabla">la`**bla**`la</a> dsad <span>2</span> ... </div>`

BUT only the "span" after "Lorem" should be matched, not the `<span>` tag. For a second example, if we search for "bla", only the bold text should be matched.

EDIT: The HTML is obtained via innerHTML; the matches will be wrapped with `<span class="x">$text</span>` and then written back to the innerHTML of the node, and all of this without *killing* the other tags.

EDIT2 and My Solution: I wrote my own search. It searches char by char, with cache and flags. Thanks for your help, guys!
You could use DOM methods to process every text node. This method takes a parent node as the first argument and loops through all of its child nodes, processing the text nodes with the function passed as the second argument. The function is where you would operate on the text node's data, to find or replace or delete or wrap the found text in a 'highlighted' span, for example.

You can call the function with only the first argument, and it will return an array of text nodes, and you can then use that array to manipulate the text; the array items in that case are each nodes, and have data, parents and siblings.

```
document.deepText= function(hoo, fun){
    var A= [], tem;
    if(hoo){
        hoo= hoo.firstChild;
        while(hoo!= null){
            if(hoo.nodeType== 3){
                if(fun){
                    if((tem= fun(hoo))!== undefined){
                        A[A.length]= tem;
                    }
                }
                else A[A.length]= hoo;
            }
            else A= A.concat(arguments.callee(hoo, fun));
            hoo= hoo.nextSibling;
        }
    }
    return A;
}
```

//test case

```
function ucwords(pa, rx){
    var f= function(node){
        var t= node.data;
        if(t && t.search(rx)!=-1){
            node.data= t.replace(rx,function(w){return w.toUpperCase()});
            return node;
        }
        return undefined;
    }
    return document.deepText(pa, f);
}
```

**ucwords(document.body,/\bspan\b/ig)**
If you've got the HTML in a DOM element, you may use textContent/innerText to grab the text (without any HTML tags): ``` var getText = function(el) { return el.textContent || el.innerText; }; // usage: // <div id="myElement"><span>Lorem</span> ipsum <em>dolor<em></div> alert(getText(document.getElementById('myElement'))); // "Lorem ipsum dolor" ```
Matching text in HTML without contents of the tag
[ "", "javascript", "html", "regex", "" ]
I am trying to take one step towards optimizing a 90GB+ table: **Old Table** Every day the table grabs approx. 750,000 records from an external source and adds them to the table with the new date. This has been going on for three years from what I understand. 97% of the records don't change from one day to the next. **New Table** I am trying to go through old table (millions and millions of records) and eliminate redundancy which will likely reduce the table size quite dramatically. *old\_table* * date * record\_id * data\_field (really many fields, but for the sake of the example) *new\_table\_index* * date * index\_id *new\_table* * index\_id * record\_id * data\_field **Logic as we go through each record in old\_table** if (record\_id is not in new\_table) or (record\_id is in new\_table, but the latest entry of it has a different data\_field) > insert it into the new\_table and get the index\_id else > get the latest entry index\_id for that record\_id from the new\_table\_index always > insert the index\_id and date into the new\_table\_index Any thoughts on optimal ways to do this? I am not advanced enough with MySQL to put this all together. When I tried writing a script in PHP it used up 3GB of memory and then failed. Other suggestions or queries??? Thanks so much!
I ended up using a hybrid of PHP and MySQL (after swinging too far each way at first): * INSERT LINK TO PREVIOUS DAY FOR ALL PREVIOUS DAY PRs (using INSERT - SELECT) * COMPARE PRs AGAINST PREVIOUS DAY, INSERT IF CHANGED (using INSERT - SELECT) * INSERT LINK FOR NEWLY UPDATED PRs (using SELECT - php foreach - UPDATE) * ADD NEW PRs ON EACH DAY (using INSERT - SELECT) * INSERT LINK FOR NEW PRs (using INSERT - SELECT) Still need to perfect the one with the php foreach loop, but for the most part this did the trick! Thanks for all your help!
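For anyone wanting a concrete starting point, one of the INSERT - SELECT steps above might look roughly like this. This is a sketch only: it assumes the old_table/new_table schema from the question, ignores the index_id bookkeeping, and uses a placeholder @today variable.

```
-- Insert records from today's batch whose data differs from anything
-- already stored for that record_id (simplified: does not distinguish
-- the "latest entry" from older stored versions).
INSERT INTO new_table (record_id, data_field)
SELECT o.record_id, o.data_field
FROM old_table o
LEFT JOIN new_table n
    ON n.record_id = o.record_id
   AND n.data_field = o.data_field
WHERE o.date = @today
  AND n.record_id IS NULL;
```

The real logic needs to compare against the *latest* stored version of each record rather than any version (a record whose data reverts to an old value should still get a new row), which is what the per-step queries above have to handle.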
You could use this:

```
new_table
 * date
 * record_id (pk)
 * data_field

INSERT INTO new_table (date, record_id, data_field)
SELECT date, record_id, data_field
FROM old_table
ON DUPLICATE KEY UPDATE
    date = old_table.date,
    data_field = old_table.data_field;
```

record_id is the primary key, and this same insert could be added below the insert into the old_table. See the [MySQL docs](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
Need MySQL INSERT - SELECT query for tables with millions of records
[ "", "php", "mysql", "" ]
I am writing an installer class for my web service. In many cases when I use WMI (e.g. when creating virtual directories) I have to know the siteId to provide the correct metabasePath to the site, e.g.: ``` metabasePath is of the form "IIS://<servername>/<service>/<siteID>/Root[/<vdir>]" for example "IIS://localhost/W3SVC/1/Root" ``` How can I look it up programmatically in C#, based on the name of the site (e.g. for "Default Web Site")?
Here is how to get it by name. You can modify as needed. ``` public int GetWebSiteId(string serverName, string websiteName) { int result = -1; DirectoryEntry w3svc = new DirectoryEntry( string.Format("IIS://{0}/w3svc", serverName)); foreach (DirectoryEntry site in w3svc.Children) { if (site.Properties["ServerComment"] != null) { if (site.Properties["ServerComment"].Value != null) { if (string.Compare(site.Properties["ServerComment"].Value.ToString(), websiteName, false) == 0) { result = int.Parse(site.Name); break; } } } } return result; } ```
You can search for a site by inspecting the `ServerComment` property belonging to children of the metabase path `IIS://Localhost/W3SVC` that have a `SchemaClassName` of `IIsWebServer`. The following code demonstrates two approaches: ``` string siteToFind = "Default Web Site"; // The Linq way using (DirectoryEntry w3svc1 = new DirectoryEntry("IIS://Localhost/W3SVC")) { IEnumerable<DirectoryEntry> children = w3svc1.Children.Cast<DirectoryEntry>(); var sites = (from de in children where de.SchemaClassName == "IIsWebServer" && de.Properties["ServerComment"].Value.ToString() == siteToFind select de).ToList(); if(sites.Count() > 0) { // Found matches...assuming ServerComment is unique: Console.WriteLine(sites[0].Name); } } // The old way using (DirectoryEntry w3svc2 = new DirectoryEntry("IIS://Localhost/W3SVC")) { foreach (DirectoryEntry de in w3svc2.Children) { if (de.SchemaClassName == "IIsWebServer" && de.Properties["ServerComment"].Value.ToString() == siteToFind) { // Found match Console.WriteLine(de.Name); } } } ``` This assumes that the `ServerComment` property has been used (IIS MMC forces its used) and is unique.
How can I look up IIS site id in C#?
[ "", "c#", ".net", "iis", "installation", "wmi", "" ]
I have read some approaches to storing viewstate on the server: [Here is one](http://aspalliance.com/72_Server_Side_Viewstate) [Here is another](http://www.clanmonroe.com/Blog/archive/2008/08/05/efficient-server-side-view-state-persistence.aspx) But they are sort of complicated. I am looking for a way to persist an object without having to serialize it. I could use session state, but if a user opens more than one window, there could be overwrites of the object. Is there a simple solution to this?
In this situation I would store the object in the session using a unique key and tie the key to the page. All this can be abstracted into properties on the page class.

```
public string PersistanceKey
{
    get
    {
        if(ViewState["PersistanceKey"] == null)
            ViewState["PersistanceKey"] = "Object" + Guid.NewGuid().ToString();

        return (string)ViewState["PersistanceKey"];
    }
}

public PersistanceObject Persistance
{
    get
    {
        if(Session[this.PersistanceKey] == null)
            Session[this.PersistanceKey] = new PersistanceObject();

        return (PersistanceObject)Session[this.PersistanceKey];
    }
}
```

The different session keys would allow different objects on a per-page basis. Alternately, instead of using the Session object, you could consider using the application cache (the Cache object) to automatically remove stale entries from memory, but this has its own caveats.

It should be noted that Joel's warnings in his answer about memory usage are entirely accurate. This might not be the best idea for low-memory, high-usage, or large-persistence-object scenarios.
> I am looking for a way to persist an object without having to serialize it. Be careful with that. This will have a dramatic impact on the memory use of your site, and memory use is often the biggest impediment to scalability.
ASP.NET Server Side Viewstate
[ "", "c#", "asp.net", "session", "viewstate", "session-state", "" ]
VB has a couple of native functions for converting a char to an ASCII value and vice versa - Asc() and Chr(). Now I need to get the equivalent functionality in C#. What's the best way?
You could always add a reference to Microsoft.VisualBasic and then use the exact same methods: [Strings.Chr](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.strings.chr.aspx) and [Strings.Asc](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.strings.asc.aspx). That's the easiest way to get the exact same functionality.
For `Asc()` you can cast the `char` to an `int` like this: ``` int i = (int)your_char; ``` and for `Chr()` you can cast back to a `char` from an `int` like this: ``` char c = (char)your_int; ``` Here is a small program that demonstrates the entire thing: ``` using System; class Program { static void Main() { char c = 'A'; int i = 65; // both print "True" Console.WriteLine(i == (int)c); Console.WriteLine(c == (char)i); } } ```
What's the equivalent of VB's Asc() and Chr() functions in C#?
[ "", "c#", "vb.net", "ascii", "" ]
I'm currently trying to write a thread-safe logger class. I'm not very familiar with correct design and best practices in this area. Is there a flaw in my code? ``` public class WriteStuff { private readonly StreamWriter m_Writer; private readonly object m_WriteLock = new object (); public WriteStuff(String path) { m_Writer = File.CreateText (path); m_Writer.WriteLine ("x"); m_Writer.Flush (); } public void ListenTo(Foo foo) { foo.SomeEvent += new EventHandler<SomeArgs> (Foo_Update); } private void Foo_Update(object sender, SomeArgs args) { lock (m_WriteLock) { m_Writer.WriteLine (args); m_Writer.Flush (); } } } ```
What you've posted looks fine from a multi-threading perspective. Although I could be wrong, it would appear that any other code that does some multi-threading (even using the foo object) should be safe. Certainly, I can't see any [deadlocks](http://msdn.microsoft.com/en-us/magazine/cc163618.aspx) in that section of code.

A few things worth noting anyway (apart from being very careful with deadlocks and testing rigorously to ensure they won't occur):

* It's best to put a lock around the code within the constructor, as I believe it's *possible* in certain circumstances that methods can be called before the constructor block has finished executing. (Someone please correct me if I'm wrong on this one.)
* The `StreamWriter` object in this case is private, which is good. If it were protected or internal you would certainly have to be cautious about how other code utilised the object (in fact I think it would be best to almost *always* declare such objects as private).
* You've done locking the right way! It's always safest to lock on a separate private instance object because you *know* that object can't be locked by any other code than your own (which isn't the case if you lock `this` or the `StreamWriter` object itself).

Still, I may be missing something, and there is a small possibility that some other code not shown above might cause problems, but as far as I can see that code isn't flawed except for a *possible* missing lock around the constructor code. You're more likely to have to watch out for deadlock situations when you start doing more complex multi-threading, especially across classes/instances. Anyway, hope that helps.
Well, that looks OK to me; I'd probably implement `IDisposable` as a means to `Close()` the file, but... Of course, you could also use any of the (many) pre-canned logging frameworks. --- Update: One thought: you might want to consider what happens if the file already exists; you don't want to stomp on your logs...
C# Am i using lock correctly?
[ "", "c#", "multithreading", "logging", "locking", "" ]
The Swing UI of Java programs doesn't work perfectly together with the [awesome-wm](http://awesome.naquadah.org/). awesome is a window manager for UNIX that automatically resizes program windows, and the Swing UI doesn't recognize these resizes correctly.

I don't care whether awesome or Java is at fault; what I want to know is whether I can change my Java programs in a way that makes them work with awesome, so that users of my programs get the correct experience even when they use exotic window managers.
From the man page of awesome: ``` BUGS Of course there´s no bug in awesome. But there may be unexpected behaviours. Java applications which use the XToolkit/XAWT backend may draw grey windows only. The XToolkit/XAWT backend breaks ICCCM-compliance in recent JDK 1.5 and early JDK 1.6 versions, because it assumes a reparenting window manager. As a workaround you can use JDK 1.4 (which doesn´t contain the XToolkit/XAWT backend) or you can set the following environment variable (to use the older Motif backend instead): AWT_TOOLKIT=MToolkit ```
Easiest workaround - get [wmname from suckless](http://tools.suckless.org/wmname) and use it to set the name of the window manager to LG3D: ``` wmname LG3D ``` 98% of the time this will fix the issue.
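If wmname isn't an option, the AWT_TOOLKIT workaround quoted from the man page can be wrapped in a tiny launcher; a sketch (the wrapped command is whatever starts your Java app, `MyApp.jar` being a placeholder):

```shell
# Run a command with the older Motif AWT backend forced,
# which avoids the grey-window problem under non-reparenting WMs.
run_with_mtoolkit() {
  AWT_TOOLKIT=MToolkit "$@"
}

# Example: run_with_mtoolkit java -jar MyApp.jar
```

The `VAR=value command` form sets the variable only in the environment of that one command, so the rest of your session is unaffected.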
How can I fix a Java-GUI-program (swing), that it works with awesome-wm?
[ "", "java", "user-interface", "swing", "awesome-wm", "" ]
I'm trying to restore a database from a backup dynamically with application code. A simple SQL command for the restore:

```
con.execute("RESTORE FILELISTONLY FROM DISK='c:\old.bak' " & vbcrlf &_
"RESTORE DATABASE newdb " & vbcrlf &_
"FROM DISK='c:\old.bak' " & vbcrlf &_
"WITH MOVE 'newdb' TO 'c:\newdb.mdf', " & vbcrlf &_
"MOVE 'newdb_log' TO 'c:\newdb_log.ldf'")
```

But it doesn't fire; I mean, there are no errors at all. I traced it with SQL Profiler and it looks correct:

```
RESTORE FILELISTONLY FROM DISK='c:\old.bak'
RESTORE DATABASE newdb
FROM DISK='c:\old.bak'
WITH MOVE 'newdb' TO 'c:\newdb.mdf',
MOVE 'newdb_log' TO 'c:\newdb_log.ldf'
```

If I run the SQL from SQL Profiler, it works. How can you explain this issue?
Try killing all the users in the database you're restoring to before running the restore; a restore will fail if there are any active connections to the db. There is example code of how to do this [all over.](http://www.google.com/search?q=kill+all+users+database+sql+server)

Also make sure the connection string used when you call the restore from your application doesn't connect to the db you're restoring to, but connects to something like master or msdb.
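One common way to drop those connections (a T-SQL sketch; `YourDb` is a placeholder database name) is to force the database into single-user mode, rolling back any open transactions, then restore and return it to multi-user mode:

```
-- Kick out all other connections and roll back their open work.
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE YourDb FROM DISK = 'c:\old.bak'
    WITH REPLACE;

-- Let everyone back in.
ALTER DATABASE YourDb SET MULTI_USER;
```

As noted above, run this from a connection whose database context is master, not the database being restored.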
I guess the connection you are using has a lock on the database, so it can't restore. When you run it directly, it doesn't.

What about this?

```
con.execute("USE master" & vbcrlf &_
"RESTORE FILELISTONLY FROM DISK='c:\old.bak' " & vbcrlf &_
"RESTORE DATABASE newdb " & vbcrlf &_
"FROM DISK='c:\old.bak' " & vbcrlf &_
"WITH MOVE 'newdb' TO 'c:\newdb.mdf', " & vbcrlf &_
"MOVE 'newdb_log' TO 'c:\newdb_log.ldf'")
```

Or changing the database in the connection string?
Restore sql doesnt fire
[ "", "asp.net", "sql", "sql-server", "asp-classic", "database-restore", "" ]
Is there any speed difference between these two versions? ``` <?php echo $var; ?> <?=$var?> ``` Which do you recommend, and why?
Performance difference is insignificant. Moreover, with use of APC, the performance difference is zero, null, nada.

Short tags are problematic within XML, because `<?` also opens an XML processing instruction. So if you're writing code that should be portable, use the long form.

See the `short_open_tag` description in <http://www.php.net/manual/en/ini.core.php>
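Portability matters more than speed here because whether `<?=` works at all is governed by that `short_open_tag` directive; a php.ini fragment:

```
; php.ini
; Required for <? and <?= in the PHP versions discussed in this thread.
short_open_tag = On
```

If you can't control the server's php.ini, the long `<?php echo ... ?>` form is the only safe choice.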
Technically the parser has to parse every character of the longer version, and there are a few more characters to transfer. If your webserver doesn't "pre-compile" (i.e. cache tokenized PHP pages) then there is a slight performance difference. This should be insignificant except, perhaps, when you start talking about billions of runs.
Is there a speed difference between <?php echo $var; ?> and <?=$var?>?
[ "", "php", "syntax", "php-shorttags", "" ]
What is the best way to reset the scroll position to the top of the page after an asynchronous postback? The asynchronous postback is initiated from an ASP.NET GridView CommandField column, and the ASP.NET UpdatePanel Update method is called in the GridView OnRowCommand. My current application is an ASP.NET 3.5 web site.

**EDIT:** I have received excellent feedback from everyone, and I ended up using the PageRequestManager method in a script tag, but my next question is: How do I configure it to only execute when the user clicks the ASP.NET CommandField in my GridView control? I have other buttons on the page that perform an asynchronous postback that I do not want to scroll to the top of the page.

**EDIT 1:** I have developed a solution where I do not need to use the PageRequestManager. See my follow-up answer for the solution.
Here is the solution I developed, based on this [source](http://www.codeproject.com/KB/webforms/Gridview_Delete_confirmLS.aspx?display=Print)

ASP.NET Webform

```
<script language="javascript" type="text/javascript">
    function SetScrollEvent() {
        window.scrollTo(0,0);
    }
</script>

<asp:GridView id="MyGridView" runat="server" OnRowDataBound="MyGridView_OnRowDataBound">
    <Columns>
        <asp:CommandField ButtonType="Link" ShowEditButton="true" />
    </Columns>
</asp:GridView>
```

ASP.NET Webform code behind

```
protected void MyGridView_OnRowDataBound(object sender, GridViewRowEventArgs e)
{
    if(e.Row.RowType.Equals(DataControlRowType.DataRow))
    {
        foreach (DataControlFieldCell cell in e.Row.Cells)
        {
            foreach(Control control in cell.Controls)
            {
                LinkButton lb = control as LinkButton;

                if (lb != null && lb.CommandName == "Edit")
                    lb.Attributes.Add("onclick", "SetScrollEvent();");
            }
        }
    }
}
```
As you're using UpdatePanels, you're going to need to hook into the ASP.NET AJAX [PageRequestManager](http://msdn.microsoft.com/en-us/library/bb311028.aspx).

You'll need to add a handler to the [endRequest](http://msdn.microsoft.com/en-us/library/bb383810.aspx) event, which is:

> Raised after an asynchronous postback is finished and control has been returned to the browser.

So you'd have something like:

```
<script type="text/javascript">
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(pageLoaded);

function pageLoaded(sender, args) {
    window.scrollTo(0, 0);
}
</script>
```

Which will force the browser to scroll back to the top of the page once an update request has completed.

There are other events you could hook into instead of course:

```
beginRequest      // Raised before the request is sent
initializeRequest // Raised as the request is initialised (good for cancelling)
pageLoaded        // Raised once the request has returned, and content is loaded
pageLoading       // Raised once the request has returned, and before content is loaded
```

The beauty of asynchronous post-backs is that the page will maintain the scroll height without you having to set MaintainScrollPosition, as there is no "full page reload" happening. In this case you actually want that effect to happen, so you will need to manually create it. 
***Edit to respond to updated question***

Ok, so if you need to only reset the position on certain button presses, you'll need to do something like this:

Start by hooking into the BeginRequest instead/as well:

```
Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler);
```

This is because in the args parameter you get access to:

```
args.get_postBackElement().id
```

Which will tell you the id of the button that started the whole event - you can then either check the value here and move the page, or store it in a variable and query it in the end request - being aware of race conditions, etc. where the user clicks another button before your original update completes.

That should get you going with any luck - there are quite a few examples around this on [Working with PageRequestManager Events](http://msdn.microsoft.com/en-us/library/bb398976.aspx)
Reset scroll position after Async postback - ASP.NET
[ "", "c#", "asp.net", "gridview", "asp.net-ajax", "scroll-position", "" ]
I would like to try this code: ``` public struct Direction { private int _azimuth; public int Azimuth { get { return _azimuth; } set { _azimuth = value; } } public Direction(int azimuth) { Azimuth = azimuth; } } ``` But it fails on compilation. I understand that a struct needs to init all its fields, but I am trying to understand what happens under the CLR\IL hood: why does it need all the fields initialized before any other method\property\this etc.? Thanks.
Value types are created on the stack (unless nested within a reference type). The CLR can't guarantee that fields/locations on the stack are zeroed out (contrary to fields/locations on the managed heap, which are guaranteed to be zeroed out). Hence they must be written to before being read; otherwise it's a security hole. The struct's default ctor (which takes no parameters and which you're not allowed to explicitly specify) zeroes out all fields of a struct, and hence you can use a struct after you do ``` new BimonthlyPairStruct() ``` However, when you implement your parameterized ctor, **you** must ensure that all fields have been initialized - which is required for the CLR to pass your code as safe/**verified**. See Also: CLR via C# 2nd Ed - Pg 188
I just found an explanation in the [MSDN forum](http://social.msdn.microsoft.com/Forums/en-US/csharplanguage/thread/40085588-ccb0-42bf-8492-fa48f072b877/) stating that this rule is enforced because zeroing out the memory is skipped if you use a none default constructor. So you will have to provide initialization values for all fields in order to avoid some fields containing random values. You achieve this easily be calling the parameter less default constructor, but at the cost of initializing some fields twice. I cannot tell if this explanation is correct, but it sounds reasonable. > When you define a non-default initializer, C# requires you to set all fields because it > skips the zeroing of memory and lets you initialize it - otherwise you'd have to have a > double initialization performance hit. If you don't care about the (very slight) > performance hit you can always chain a call to the : this() initializer and then only > initialize selected fields.
Why Must I Initialize All Fields in my C# struct with a Non-Default Constructor?
[ "", "c#", "struct", "clr", "stack", "" ]
I would like to be able to see and monitor my internet data (http/emule/email) on my own PC using Windows XP. I am thinking of something like WireShark but I would like to control it programmatically. I would be using C or C++. How can I do this?
WireShark uses [winpcap](http://www.winpcap.org/) to do its thing. Winpcap comes with a [C interface](http://www.winpcap.org/devel.htm).
winpcap is probably the most well known choice, but you could also write a Layered Service Provider. There's not a whole lot of documentation, but a good place to start is the article on msdn: <http://www.microsoft.com/msj/0599/LayeredService/LayeredService.aspx> This has some advantages vs layer 2 packet parsing, but also some disadvantages. You'll need to evaluate where in the stack you want to live. edit: Obviously, when I say pcap -- I really mean any similar approach. Obviously, winpcap was not the first driver/library combo to provide this sort of information.
What component do I need to monitor my internet traffic on my PC?
[ "", "c++", "network-programming", "" ]
When I define my class properties with out using fields (which was enabled with C# 3.0), class diagram is not showing my one to one/many relationships. Inheritance is visible in the diagram though.. Is there a way to fix this?
When you right-click on the field or property, you can select "Show as Association" or "Show as Collection Association".
This is for people like me who can't find the menu item within a minute: ![enter image description here](https://i.stack.imgur.com/tbcCd.png)
Visual Studio Class Diagram not showing relationships
[ "", "c#", ".net", "visual-studio", "" ]
I've made a small program in C#.NET which doesn't really serve much of a purpose; it tells you the chance of your DOOM based on today's news, lol. It takes an RSS feed on load from the BBC website and will then look for key words which either increment or decrease the percentage chance of DOOM. Crazy little project whose classes will maybe one day come in handy again for something more important. I receive the RSS in an XML format, but it contains a lot of div tags and formatting characters which I don't really want to be in the database of keywords. What is the best way of removing these unwanted characters and divs? Thanks, Ash
If you want to remove the DIV tags WITH content as well: ``` string start = "<div>"; string end = "</div>"; string txt = Regex.Replace(htmlString, Regex.Escape(start) + ".*?" + Regex.Escape(end), string.Empty, RegexOptions.Singleline); ``` (The lazy `.*?` with `RegexOptions.Singleline` matches each div's content up to the nearest closing tag; note this does not handle nested divs.) Input: `<xml><div>junk</div>XXX<div>junk2</div></xml>` Output: `<xml>XXX</xml>`
IMHO the easiest way is to use regular expressions. Something like: ``` string txt = Regex.Replace(htmlString, @"<(.|\n)*?>", string.Empty); ``` Depending on which tags and characters you want to remove you will modify the regex, of course. You will find a lot of material on this and other methods if you do a web search for ['strip html C#'](http://www.google.com/search?hl=en&q=c%23+strip+html&btnG=Search). SO question [Render or convert Html to ‘formatted’ Text (.NET)](https://stackoverflow.com/questions/356597/render-or-convert-html-to-formatted-text-net) might help you, too.
Removing <div>'s from text file?
[ "", "c#", "xml", "html", "" ]
Whenever you allocate a new array in C# with ``` new T[length] ``` the array entries are set to the default of T. That is `null` for the case that `T` is a reference type or the result of the default constructor of `T`, if `T` is a value type. In my case I want to initialize an `Int32` array with the value -1: ``` var myArray = new int[100]; for (int i=0; i<myArray.Length; i++) { myArray[i] = -1; } ``` So after the memory is reserved for the array, the CLR loops over the newly allocated memory and sets all entries to default(int) = 0. After that, my code sets all entries to -1. That makes the initialization redundant. Does the JIT detect this and neglect the initialization to 0, and if not, is there a way to directly initialize a portion of memory with a custom value? Referring to [C# Array initialization - with non-default value](https://stackoverflow.com/questions/136836/c-array-initialization-with-non-default-value), using `Enumerable.Repeat(value, length).ToArray()` is not an option, because `Enumerable.ToArray` allocates a new array and copies the values to it afterwards.
It's not redundant. Suppose an exception is thrown during your initialization loop. If the CLR hasn't cleared the memory first, you might be able to "see" the original uninitialized memory, which is a very bad idea, particularly from a security standpoint. That's why the CLR guarantees that any newly allocated memory is wiped to a 0 bit pattern. The same argument holds for fields in an object, by the way. I suppose in both cases the CLR could check that you're not going to make the array visible elsewhere before finishing initialization, but it's a complicated check to avoid a pretty simple "wipe this area of memory".
Similar to Dan's answer but without the need of using collections: ``` int[] myArray = Enumerable.Repeat(-1, 100).ToArray(); ```
Direct array initialization with a constant value
[ "", "c#", "arrays", "initialization", "constants", "" ]
How do you guys typically delete files on Linux? I am thinking of using the `unlink` function call, but I wonder if you have a better idea, as the C++ standard has no mention of a file deletion operation and it is system dependent.
Yep -- the C++ standard leaves this stuff up to the OS, so if you're on Linux (or any POSIX system), [`unlink()`](http://opengroup.org/onlinepubs/007908775/xsh/unlink.html) is what you've got. The C standard provides [`remove()`](http://en.cppreference.com/w/cpp/io/c/remove), which you could try, but keep in mind that its [behavior is unspecified](http://www.open-std.org/JTC1/SC22/WG15/docs/rr/9945-1/9945-1-59.html) for anything other than a 'regular file', so it doesn't really shield you from getting into platform-specific filesystem details (links, etc). If you want something higher-level, more robust, and more portable, check out [Boost Filesystem](http://www.boost.org/doc/libs/1_38_0/libs/filesystem/doc/index.htm).
The Standard includes a function called [remove](http://www.cplusplus.com/reference/clibrary/cstdio/remove.html) which does that. Though I would prefer `boost.filesystem` for that (if I already use Boost anyway). ``` #include <cstdio> int main() { std::remove("/home/js/file.txt"); } ```
Remove file in C++ under UNIX
[ "", "c++", "unix", "file-io", "posix", "" ]
if I create ``` typedef double (MyClass::*MemFuncGetter)(); ``` in a header file, do I need to include "MyClass.h" or would **forward declaring** suffice? Header file: ``` #ifndef _TEST_ #define _TEST_ #include "MyClass.h" //do I need this? //or I can just say class MyClass; typedef double (MyClass::*MemFuncGetter)(); #endif ``` What are the linkage rules here?
You are fine with just the forward declaration of the class: ``` #ifndef _TEST_ #define _TEST_ class MyClass; typedef double (MyClass::*MemFuncGetter)(); #endif ``` But note that by not including the whole class, the compiler has to do extra work to handle the cases when MyClass is a multiple-virtual-inheritance mess, since it doesn't know. In some cases this can mean that each function pointer actually takes up to 20 bytes of memory, whereas if you had defined the whole class, each function pointer would only take 4. (Of course the sizes are all compiler-dependent.)
Create a file MyClassFwd.h and put there ``` class MyClass; typedef double (MyClass::*MemFuncGetter)(); ``` and include that forward-declaration header wherever you need the typedef - that will be enough. Don't copy and paste the typedef. In your 'MyClass.h', simply include 'MyClassFwd.h'.
rules for inclusion in header files when using type in typedef
[ "", "c++", "header-files", "" ]
I'm writing a Python package. The package needs to know its version number internally, while also including this version in the `setup.py` script for `distutils`. What's the best way of doing this, so that the version number doesn't need to be maintained in two separate places? I don't want to import the `setup.py` script from the rest of my library (that seems rather silly) and I don't want to import my library from the `setup.py` script (likewise). Ideally, I'd just set a keyword in `svn` and have that automatically substituted into the files, but that doesn't seem to be possible in `svn`. I could read a common text file containing the version number in both places--is this the best solution? **To clarify**: I want to maintain the version number in *one* place. Yes, I could put a variable in the package, and again in the `setup.py` file. But then they'd inevitably get out of sync.
Inside of your main package, you probably have an `__init__.py`, right? Directory structure: ``` > ./setup.py > ./packageTest > ./packageTest/__init__.py ``` Inside the `__init__.py` file, add the following line: ``` # packageTest/__init__.py __version__ = '1.0' ``` setup.py file: ``` # setup.py from packageTest import __version__ ... ``` Now any other module can get at the version number the same way: ``` from packageTest import __version__ print 'Version:', __version__ # prints Version: 1.0 ```
Importing the setup script inside your package is silly (especially since it may no longer be present after your library is installed), but importing your library inside setup.py should be fine. A separate text file would work too, but has the problem that you must install the text file with your package if you want to access the version number at runtime.
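If importing the library from setup.py is still undesirable (say, because importing it pulls in heavy dependencies), a common middle ground is to parse the version string out of `__init__.py` without importing it. This is only a sketch and assumes the version is stored as a plain string literal:

```python
import re

def read_version(init_path):
    """Extract __version__ from a package __init__.py without importing it."""
    with open(init_path) as f:
        source = f.read()
    # Assumes a line like: __version__ = '1.0'
    match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]", source, re.MULTILINE)
    if match is None:
        raise RuntimeError("no __version__ found in %s" % init_path)
    return match.group(1)

# In setup.py (path is relative to the setup script):
# setup(name='packageTest', version=read_version('packageTest/__init__.py'), ...)
```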
How do I assign a version number for a Python package using SVN and distutils?
[ "", "python", "svn", "distutils", "" ]
I have a Form in my application that displays some data. When I first show the Form, I load some data into a DataTable then bind the DataTable to a DataGridView. I also start an asynchronous method that executes some slower database queries. When these slow queries complete, I need to update a few hundred rows in the DataTable, filling in values returned from the slower queries, like so: ``` foreach (DataRow row in data.Rows) { SlowLoadingData slow_stuff = slow_query_results[(int)row["id"]]; row.BeginEdit(); row[column_one] = slow_stuff.One; row[column_two] = slow_stuff.Two; row[column_three] = slow_stuff.Three; row.EndEdit(); } ``` This is extremely slow, hanging the UI thread for a minute or more, presumably because each row is triggering a redraw. After some research, I found a way to make it fast. First, bind the DataGridView to a BindingSource that is bound to the DataTable, instead of directly to the DataTable. Then, do as follows when you make changes to the DataTable: ``` binding_source.SuspendBinding(); binding_source.RaiseListChangedEvents = false; // foreach (DataRow in Data.Rows) ... code above binding_source.RaiseListChangedEvents = true; binding_source.ResumeBinding(); grid.Refresh(); ``` There is a problem, though, and it's a doozy: the code above prevents the DataGridView from detecting new rows added to the DataTable. **Any new rows added to the table do not appear in the grid.** The grid may also throw exceptions if you use the arrow keys to move the current cell selection off the bottom end of the grid, because the underlying data source has more rows but the grid has not created grid rows to display them. So, two possible solutions that I can see: 1. Is there a better way to suppress binding updates while making changes to the underlying DataTable? 2. Is there an easy way to tell the DataGridView to gracefully refresh its grid row collection to match the number of underlying DataTable rows? (Note: I've tried calling BindingSource.ResetBindings, but it seems to trigger more exceptions if you have *removed* rows from the DataTable!)
Have you considered disconnecting the dataGrid or the bindingSource while filling the table and reconnecting afterwards? It might look a bit ugly, but it should be a lot faster.
You can try using the [Merge method](http://msdn.microsoft.com/en-us/library/system.data.datatable.merge.aspx) on the DataTable. I'll try to create a simple demo app and post it here, but the idea is simple. When you want to update the Grid, query the results into a new DataTable, and then merge the old table with the new table. As long as both tables have primary keys (you can create them them im memory if they don't come back from the DB) then it should track changes and update the DataGridView seamlessly. It also has the advantage of not losing the users place on the grid. OK, here's a sample. I create a form with two buttons and one dataGridView. On button1 click, I populate the main table with some data, and bind the grid to it. Then, on second click, I create another table with the same schema. Add data to it (some that have the same primary key, and some that have new ones). Then, they merge them back to the original table. It updates the grid as expected. ``` public partial class Form1 : Form { private DataTable mainTable; public Form1() { InitializeComponent(); this.mainTable = this.CreateTestTable(); } private void button1_Click(object sender, EventArgs e) { for (int i = 1; i <= 10; i++) { this.mainTable.Rows.Add(String.Format("Person{0}", i), i * i); } this.dataGridView1.DataSource = this.mainTable; } private void button2_Click(object sender, EventArgs e) { DataTable newTable = this.CreateTestTable(); for (int i = 1; i <= 15; i++) { newTable.Rows.Add(String.Format("Person{0}", i), i + i); } this.mainTable.Merge(newTable); } private DataTable CreateTestTable() { var result = new DataTable(); result.Columns.Add("Name"); result.Columns.Add("Age", typeof(int)); result.PrimaryKey = new DataColumn[] { result.Columns["Name"] }; return result; } } ```
optimize updates to DataTable bound to DataGridView
[ "", "c#", ".net", "datagridview", "datatable", "" ]
I have some data I want to work with: two strings and two numbers, and I have, say, 8000 rows of data. Is a DataSet the best option to use here, or could I use a struct and have a list of structs? Would there be much performance difference between the list and the DataSet?
`DataSet`s and `DataTable`s are often more verbose and have some overhead to access, but usually are interoperable with whatever data-binding sort of stuff you're using. If you have the choice (i.e. you're not hooked to some component that uses `DataTable`s) I'd strongly suggest using an appropriate strongly-typed collection, like a generic `List`, `Dictionary`, or `SortedDictionary`. I propose that the flexibility and transparency will benefit you in the long run, if there is a long run to your project. P.S. Two strings and two numbers is big enough that I doubt you'll see any benefit from making it a `struct` instead of a class. Of course, you should profile it for yourself, but that's my intuition. I agree with the post mentioning that you should be sure you understand the fundamental differences between the two if this is a case of premature optimization.
Ok, I'm extrapolating based on limited data, but the fact that you are asking between a list of structs and a dataset implies to me that you're somewhat new to C# and have not been introduced to the fact that a struct is a [ValueType](http://msdn.microsoft.com/en-us/library/system.valuetype.aspx) and therefore lives on the stack. You have probably heard that a struct grants you "better performance" somewhere, and want to get the best performance you can for your list of 8000 items. First, I believe that you are prematurely optimizing. Don't. Do whatever works within the scope of your program. If you are building this list of 8000 items yourself programmatically, perhaps from an XML or flat file, I'd suggest you use a list of ***objects***, as that will be the easiest for you to program against. Your post does not imply that you have a relationship between two tabular sets of data, so a DataSet would be unnecessary. That said, if you are receiving that list from the database somehow, and your current data layer is ADO.NET 2.0 (and therefore returns DataSets and DataTables), then I'd say use that. If you are receiving the list from the database but your data layer is not yet defined, I would suggest you look into [LINQ to SQL](http://msdn.microsoft.com/en-us/library/bb386976.aspx) or [Entity Framework](http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx). Again, I caution you against prematurely optimizing, especially given that you don't appear to understand how structs work. Again this is an assumption and I apologize in advance if I am wrong.
List of Structs or Dataset in C#?
[ "", "c#", "arrays", "list", "dataset", "" ]
Does anyone have links and resources to connect to an AS400 from Java? I remember years ago, somebody told me about a connector that simulates keystrokes from the keyboard and another "purest" approach that connected directly. On the web I have found a lot of links, but I cannot find a complete product to do this (I am probably not using the right keywords). **EDIT** Thanks for the answers: What we are looking for is a way to access the data inside the AS400 and/or the screens it uses, and expose them for other new applications to re-use, either as a web service of some sort, or directly through Java (and Java will expose the operations using web services). Thanks in advance. **EDIT** As per MicSim's post, I've also found this link: <http://www.ibm.com/developerworks/library/ws-as400/index.html>
What you are looking for is probably the Toolbox for Java™ & JTOpen from IBM. There is also an AS400 class in the toolbox for performing specific AS400 tasks. You can look [here](http://www-03.ibm.com/systems/i/software/toolbox/faqjdbc.html) and [here](http://publib.boulder.ibm.com/infocenter/iadthelp/v7r0/index.jsp?topic=/com.ibm.etools.iseries.toolbox.doc/as400obj.htm) for more details. Just googled it and hope it's helpful.
IBM's 5250 screen-scraping technology was "WebFacing" - I would post a link but you're probably better off Googling it, since IBM's documentation is so scattered. There are other technologies available too but: Screen-scraping was never anyone's favourite since typically you end up with something which, although it looks more up-to-date, actually is harder to use than a green screen and no more functional. The 5250 is probably the single best data entry platform I've ever used - web forms in a browser are one of the worst. As mentioned, jt400 is the way to go for most other things. In particular: JDBC - for all things SQL. If you do it right and address your files as though they really are tables, it's a way to get away from the 400 entirely. Record-level access - write Java programs using a similar database API to RPGLE (all those chains, setlls that 400 programmers love) Call programs, system commands, manage resources (data queues, data areas, prints / spools, jobs etc etc) Good luck
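To make the JDBC route concrete, here is a hedged sketch of querying a physical file through the Toolbox/JTOpen driver. The system name, credentials, and `MYLIB.MYFILE` are placeholders, and `jt400.jar` must be on the classpath for the query itself to run:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class As400Query {

    // Build the JDBC URL the Toolbox/JTOpen driver expects for a given system name.
    static String jdbcUrl(String system) {
        return "jdbc:as400://" + system;
    }

    // Sketch of a query against a physical file. Not called from main because it
    // needs a live system and jt400.jar on the classpath; all names are placeholders.
    static void dumpFile(String system, String user, String password) throws Exception {
        Class.forName("com.ibm.as400.access.AS400JDBCDriver"); // registers the driver
        Connection conn = DriverManager.getConnection(jdbcUrl(system), user, password);
        try {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT * FROM MYLIB.MYFILE");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } finally {
            conn.close();
        }
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("MYSYSTEM"));
    }
}
```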
Where can I find an AS400 to Java interface?
[ "", "java", "ibm-midrange", "interop", "" ]