I've been looking for a good cross-platform 2D drawing library that can be called from C++ and can be used to draw some fairly simple geometry; lines, rectangles, circles, and text (horizontal and vertical) for some charts, and save the output to PNG. I think a commercial package would be preferable over open source because we would prefer not to have to worry about licensing issues (unless there's something with a BSD style license with no credit clause). I've looked at Cairo Graphics which seemed promising, but the text rendering looks like crap out of the box, and upgrading the text back-end brings us into murky license land. I need it for Windows, Mac and Linux. Preferably something fairly lightweight and simple to integrate. I've thought about Qt but that's way too heavy for our application. Any ideas on this would be awesome.
Try [Anti-Grain Geometry](http://www.antigrain.com/). From the description: > Anti-Grain Geometry (AGG) is an Open Source, free of charge graphic library, written in industrially standard C++. The terms and conditions of use AGG are described on The License page. AGG doesn't depend on any graphic API or technology. Basically, you can think of AGG as of a rendering engine that produces pixel images in memory from some vectorial data. But of course, AGG can do much more than that. The ideas and the philosophy of AGG are: * Anti-Aliasing. * Subpixel Accuracy. * The highest possible quality. * High performance. * Platform independence and compatibility. * Flexibility and extensibility. * Lightweight design. * Reliability and stability (including numerical stability).
Another one: [Skia](http://code.google.com/p/skia/). Used in Android and Chrome, under active development, HW acceleration.
Cross-platform drawing library
[ "c++", "cross-platform", "drawing" ]
A project won't compile because of a missing assembly. The file that causes the error has a ``` using Microsoft.VisualStudio.DebuggerVisualizers; ``` The `VisualStudio` part is marked in red. What do I have to install to fix it? To me it sounds like something that would have come with Visual Studio, but that is what I am using, so it is installed... ***Clarification:*** I know what assembly it exists in, and the reference is added earlier to the project references. But how do I get it? What SDK do I have to install? Or have I forgotten to check something off when I installed Visual Studio?
You should be able to find it in the "Add Reference" dialog in the .Net tab. If not, the `Microsoft.VisualStudio.DebuggerVisualizers.dll` lives in the `Common7\IDE\PublicAssemblies` subdirectory of Visual Studio's installation directory. You could add it manually from there.
For Visual Studio 2010 the `Microsoft.VisualStudio.DebuggerVisualizers` assembly exists under `C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0` (or relevant Program Files root). For a debugger visualizer to target Visual Studio 2010 it must reference the 10.0 version of this assembly or it will fail with an invalid cast exception.
Where do I find Microsoft.VisualStudio.DebuggerVisualizers?
[ "c#", "visual-studio", "assembly-resolution" ]
Is it possible to change the scrollbar position when the user reaches a certain point scrolling down a web page? For example, once you reach halfway down the page, the scrollbar would move automatically back to the top.
You can calculate the percentage of the current position of the scrollbar using the [onscroll](https://developer.mozilla.org/en/DOM/window.onscroll) event, and if it reaches 50%, the scroll position can be set to the top of the page with the [scrollTo](https://developer.mozilla.org/en/DOM/window.scrollTo) function: ``` window.onload = function () { window.onscroll = function () { var doc = document.body, scrollPosition = doc.scrollTop, pageSize = (doc.scrollHeight - doc.clientHeight), percentageScrolled = Math.floor((scrollPosition / pageSize) * 100); if (percentageScrolled >= 50){ // if the percentage is >= 50, scroll to top window.scrollTo(0,0); } }; }; ``` You can check my example [here](http://jsbin.com/eqote).
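The trigger condition in that handler is plain arithmetic on three DOM numbers, so it can be checked outside the browser (the standalone function below is illustrative, not part of the original answer):

```javascript
// The percentage math from the onscroll handler, isolated as a pure function
function percentageScrolled(scrollTop, scrollHeight, clientHeight) {
  var pageSize = scrollHeight - clientHeight; // total scrollable distance
  return Math.floor((scrollTop / pageSize) * 100);
}

console.log(percentageScrolled(500, 2000, 1000)); // 50: halfway down a 2000px page in a 1000px window
console.log(percentageScrolled(999, 2000, 1000)); // 99: nearly at the bottom
```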
Yup, I've seen it a few times. Here is some JS code: ``` window.scrollBy(0,50) ``` 50 is the number of pixels you want to scroll by.
How to change scrollbar position?
[ "javascript", "scrollbar", "position" ]
I have a handheld scanning device running Windows Mobile 6.0. The handheld is responsible for scanning barcodes at a stationary position outdoors as people enter and leave a building. The primary concern for this application is to get people in and out as quickly as possible. Now the handheld is communicating with a computer in another building that is behind thick concrete walls. All it does is connect to a socket, send the barcode information, and receive a small XML file in response. What do you think would be the best way to handle the application not getting a strong signal? Should I let the OS handle the warnings and possible disconnection, or should I monitor the application and shut it down if the signal strength drops to 1 bar for a defined period of time? The problem with letting the OS handle everything is that I don't know if the user will have low signal strength warnings enabled. Plus if it is a weak signal, and the user continues to use the application, it will greatly slow down the queue. If I shut it down myself, I at least ensure the user has to stop using the application and switch to a manual method, but the downside is if the weak signal was just an anomaly then I've disconnected the application and now I've stuck the user with the responsibility of trying to reconnect every few minutes and seeing if it works yet. (Once the connection is lost, it can't automatically be restored; it requires user input.) This is the first application I've made for a handheld device, so I'm not sure what the standard approach to an issue like this is. Any advice?
One thing to consider is a better antenna on the wireless router (a quick web search will turn up many options). Some devices can also accept an external antenna, either natively or via a hardware hack. On the software side, perhaps consider displaying a warning in a status area indicating that the signal strength is weak and suggesting the manual method until a better signal strength is achieved. I would certainly leave it up to the user to decide how to proceed. Few things are more irritating to an end user than software trying to be more clever than the user.
Do you do anything with the result? If not, you could just add the scanned bar code to some kind of queue which would then allow you to immediately scan the next one while the queue is processed on another thread.
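The queued-scan idea is easy to sketch outside .NET; this Python sketch (names illustrative; on the device the worker would do the socket send and the XML receive) shows why scanning never blocks on the network:

```python
import queue
import threading

# Scanned codes go into a queue; a background worker drains it, so the
# scanning loop never waits on the (slow) network round trip.
scans = queue.Queue()
results = []

def worker():
    while True:
        code = scans.get()
        if code is None:  # sentinel: no more scans
            break
        # Stand-in for "connect to socket, send barcode, receive XML"
        results.append("sent:" + code)

t = threading.Thread(target=worker)
t.start()

for code in ["123", "456", "789"]:
    scans.put(code)  # returns immediately; the next person can be scanned

scans.put(None)
t.join()
print(results)
```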
What is the best way to handle a weak wireless signal from a mobile app?
[ "c#", "windows-mobile", "wireless" ]
I have the following line of code: ``` <li> <label for="q2-1" onclick="setActive(this.id, 'question2-1-input');" id="q2-1-label"><input disabled="disabled" type="checkbox" name="question2" id="question2-1-input" value="1" />Opinions</label> </li> ``` which calls: ``` function setActive(questionID, questionIDinput) { alert('setActive'); } ``` The above code works great in FF & Safari but not in IE. Is something above not IE friendly? Thanks
Do not use `setActive` as the name. In IE it is already taken: elements have a native `setActive()` method, and names in inline event handlers are resolved against the element before the global scope, so your function is shadowed and never called. Replace the name with something else (setActive1() will do ;) ) and it will start working in IE as well.
I believe that you should put the `onclick` event on the `input` instead of the `label`.
JavaScript Q. Works in FF&Safari, not IE7 or IE8
[ "javascript" ]
I have an application that IMPLICITLY opens a handle on a dll/file. At some point in the application, I want to release this handle. How can I do it? My application is in C#.
Use P/Invoke if you have a handle that you want to close: ``` [System.Runtime.InteropServices.DllImport("Kernel32")] private extern static Boolean CloseHandle(IntPtr handle); ```
What exactly are you trying to do? If you want to load an assembly, do some stuff with it, and then unload it completely, you need to create a new app domain. ``` public static void Main(string[] args) { AppDomain appDomain = AppDomain.CreateDomain("NewAppDomain"); appDomain.DoCallBack(new CrossAppDomainDelegate(AsmLoad)); // At this point, your assembly is locked; you can't delete it AppDomain.Unload(appDomain); Console.WriteLine("AppDomain unloaded"); // You've completely unloaded your assembly. Now, if you want, you can delete the file } public static void AsmLoad() { Assembly assembly = Assembly.LoadFrom(@"c:\Yourassembly.dll"); // Loaded into the new app domain. You can do some stuff here Console.WriteLine("Assembly loaded in {0}",AppDomain.CurrentDomain.FriendlyName); } ``` Have a look at this post for more: <http://blogs.msdn.com/suzcook/archive/2003/07/08/57211.aspx> > Or, if you're only worried about > keeping the file locked, you could use > shadow copying. That will make a copy > of the file on disk and load it from > the new location. The original file > will not be locked by that load. To > do that, set > AppDomainSetup.ShadowCopyFiles to > "true" when creating the AppDomain or > set AppDomain.ShadowCopyFiles to true > after it's already been created.
How to release a handle through C#?
[ "c#", "handle", "filehandle" ]
I have a string: ``` $string = "R 124 This is my message"; ``` At times, the string may change, such as: ``` $string = "R 1345255 This is another message"; ``` Using PHP, what's the best way to remove the first two "words" (e.g., the initial "R" and then the subsequent numbers)?
try ``` $result = preg_replace('/^R \\d+ /', '', $string, 1); ``` or (if you want your spaces to be written in a more visible style) ``` $result = preg_replace('/^R\\x20\\d+\\x20/', '', $string, 1); ```
``` $string = explode (' ', $string, 3); $string = $string[2]; ``` Must be much faster than regexes.
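Both answers translate to JavaScript for a quick sanity check. One caveat when porting the `explode` version: JavaScript's `split` limit truncates the result instead of keeping the remainder in the last element, so `slice`/`join` is used instead:

```javascript
var s = "R 1345255 This is another message";

// Regex approach: strip a leading "R", a run of digits, and the space after it
var viaRegex = s.replace(/^R \d+ /, "");

// Split approach: drop the first two words and rejoin the rest
var viaSplit = s.split(" ").slice(2).join(" ");

console.log(viaRegex); // "This is another message"
console.log(viaSplit); // "This is another message"
```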
Remove first two words from a string
[ "php", "cpu-word", "truncate" ]
I've noticed that some .NET structs can be compared to null. For example: ``` TimeSpan y = new TimeSpan(); if (y == null) return; ``` will compile just fine (the same with the Guid struct). Now I know that structs are value types and that the code above should not compile, unless there's an overload of operator == which takes an object. But, as far as I could tell, there isn't. I've looked at the class with Reflector, and also at the docs on MSDN. The two of them do implement the following interfaces: ``` IComparable, IComparable<T>, IEquatable<T> ``` but trying to implement the same interfaces did not seem to help: ``` struct XX : IComparable, IComparable<XX>, IEquatable<XX> { public int CompareTo(Object obj) { return 0; } public int CompareTo (XX other){ return 0; } public bool Equals (XX other){ return false; } public override bool Equals(object value){ return false; } public static int Compare(XX t1, XX t2){ return 0; } } ``` I'm using .NET 2.0, Visual Studio 2005. Does anyone have any idea what the reason for this is? I am just trying to get a better understanding. This isn't an issue, as I know I shouldn't compare structs to null anyway.
It's the `==` operator. The `TimeSpan` struct has an overload of the equality operator: ``` public static bool operator ==(TimeSpan t1, TimeSpan t2) { return (t1._ticks == t2._ticks); } ``` This in itself doesn't make it possible to compare with `null`, **but**... With the arrival of nullable types, **each struct is implicitly convertible to its nullable type**, so when you see something like ``` TimeSpan y = new TimeSpan(); if (y == null) return; ``` You *don't see* that this is happening: ``` TimeSpan y = new TimeSpan(); if ((Nullable<TimeSpan>)y == (Nullable<TimeSpan>)null) return; ``` Null gets the implicit conversion (implicit assignment?), but *not all `System.Object` objects* do: ``` TimeSpan y = new TimeSpan(); object o = null; if (y == o) //compiler error return; ``` Okay, but the equality operator doesn't take nullable arguments, does it? Well, [msdn](http://msdn.microsoft.com/en-us/library/2cf62fcy.aspx) is of help here, stating: > The predefined unary and binary > operators and any user-defined > operators that exist for value types > may also be used by nullable types. > These operators produce a null value > if [any of] the operands are null; otherwise, > the operator uses the contained value > to calculate the result. So you effectively get **a nullable implementation for each operator for free**, with a fixed defined behaviour. The "contained value" mentioned above is the actual value the non-nullable operator would return.
This case is covered for generics in section 7.9.6 of the C# language specification. > The x == null construct is permitted even though T could represent a value type, and the result is simply defined to be false when T is a value type. I dug through the spec for a bit and couldn't find a more general rule. Jon's [answer](https://stackoverflow.com/questions/1225949/why-can-timespan-and-guid-structs-be-compared-to-null/1226003#1226003) indicates it's a nullable promotion issue. This rule (or a similar variation) does seem to be being applied here. If you look at the reflected output closely you'll notice the comparison isn't there. The C# compiler is apparently optimizing this comparison away and replacing it with false. For instance, if you type the following ``` var x = new TimeSpan(); var y = x == null; Console.WriteLine(x); ``` Then decompile it you'll see the following ``` var x = new TimeSpan(); var y = false; Console.WriteLine(x); ```
Why can TimeSpan and Guid Structs be compared to null?
[ "c#", ".net", ".net-2.0", "null", "struct" ]
I tried using `webBrowser1.Document.Body.ScrollTop` and `webBrowser1.Document.Body.ScrollLeft`, but they don't work. They always return 0 and I can't access `webBrowser1.Document.documentElement.ScrollTop` and `.ScrollLeft`.
OK, I solved it: ``` Dim htmlDoc As HtmlDocument = wb.Document Dim scrollTop As Integer = htmlDoc.GetElementsByTagName("HTML")(0).ScrollTop ```
To actually scroll, we found that the [`ScrollIntoView`](http://msdn.microsoft.com/en-us/library/system.windows.forms.htmlelement.scrollintoview.aspx) method worked nicely. For example, to scroll to the top-left of the page. ``` this.webBrowser.Document.Body.FirstChild.ScrollIntoView(true); ``` However, we were not successful in actually getting the scroll position (that said, we didn't spend long trying). If you are in control of the HTML content, you might consider using some javascript to copy the scroll position into a hidden element and then read that value out using the DOM. [`ScrollTop`](http://msdn.microsoft.com/en-us/library/system.windows.forms.htmlelement.scrolltop.aspx) and [`ScrollLeft`](http://msdn.microsoft.com/en-us/library/system.windows.forms.htmlelement.scrollleft.aspx) merely allow an offset to be provided between the boundary of an element and its content. There appears to be no way to manipulate the scroll by those values. Instead, you have to use [`ScrollIntoView`](http://msdn.microsoft.com/en-us/library/system.windows.forms.htmlelement.scrollintoview.aspx).
How to retrieve the scrollbar position of the webbrowser control in .NET
[ "c#" ]
We have a custom PHP email marketing app, and an interesting problem: If the subject line of the message contains a word with accents, it 'swallows' the spaces between it and the following word. An example: the phrase *Ángel Ríos escucha y sorprende* is shown (by at least gmail and lotus notes) as *ÁngelRíos escucha y sorprende* The particular line in the message source shows: `Subject: =?ISO-8859-1?Q?=C1ngel?= =?ISO-8859-1?Q?R=EDos?= escucha y sorprende` (semi-full headers): ``` Delivered-To: me@gmail.com Received: {elided} Return-Path: <return@path> Received: {elided} Received: (qmail 23734 invoked by uid 48); 18 Aug 2009 13:51:14 -0000 Date: 18 Aug 2009 13:51:14 -0000 To: "Adriano" <me@gmail.com> Subject: =?ISO-8859-1?Q?=C1ngel?= =?ISO-8859-1?Q?R=EDos?= escucha y sorprende MIME-Version: 1.0 From: {elided} X-Mailer: PHP X-Lista: 1290 X-ID: 48163 Content-Type: text/html; charset="ISO-8859-1" Content-Transfer-Encoding: quoted-printable Message-ID: <kokrte.rpq06m@example.com> ``` **EDIT:** The app uses an old version of Html Mime Mail to prepare messages, I'll try to upgrade to a newer version. Anyway, this is the function that encodes the subject: ``` /** * Function to encode a header if necessary * according to RFC2047 */ function _encodeHeader($input, $charset = 'ISO-8859-1') { preg_match_all('/(\w*[\x80-\xFF]+\w*)/', $input, $matches); foreach ($matches[1] as $value) { $replacement = preg_replace('/([\x80-\xFF])/e', '"=" . strtoupper(dechex(ord("\1")))', $value); $input = str_replace($value, '=?' . $charset . '?Q?' . $replacement . '?=', $input); } return $input; } ``` And here is the code where the subject is encoded: ``` if (!empty($this->headers['Subject'])) { $subject = $this->_encodeHeader($this->headers['Subject'], $this->build_params['head_charset']); unset($this->headers['Subject']); } ``` **Wrap-up** The problem was that, indeed, the program wasn't encoding the space in the case mentioned.
[The accepted answer](https://stackoverflow.com/questions/1294066/accented-words-in-email-subject-break-spacing-how-do-i-stop-this/1294391#1294391) solved my problem, after a slight modification (mentioned in the comments to that answer) because the installed version of PHP didn't support a particular implementation detail. **Final answer** Although the accepted answer did solve the problem, we found that it, combined with many thousands of emails, was chewing all the available memory on the server. I checked the website of the original developer of this email framework, and found that the function had been updated to the following: ``` function _encodeHeader($input, $charset = 'ISO-8859-1') { preg_match_all('/(\w*[\x80-\xFF]+\w*)/', $input, $matches); foreach ($matches[1] as $value) { $replacement = preg_replace('/([\x80-\xFF])/e', '"=" . strtoupper(dechex(ord("\1")))', $value); $input = str_replace($value, $replacement , $input); } if (!empty($matches[1])) { $input = str_replace(' ', '=20', $input); $input = '=?' . $charset . '?Q?' .$input . '?='; } return $input; } ``` which neatly solved the problem and stayed under the mem limit.
You need to encode the space in between as well (see [RFC 2047](https://www.rfc-editor.org/rfc/rfc2047#section-8)): > ``` > (=?ISO-8859-1?Q?a?= =?ISO-8859-1?Q?b?=) (ab) > ``` > > White space between adjacent 'encoded-word's is not displayed. > > […] > > ``` > (=?ISO-8859-1?Q?a_b?=) (a b) > ``` > > In order to cause a SPACE to be displayed within a portion of encoded text, the SPACE MUST be encoded as part of the 'encoded-word'. > > ``` > (=?ISO-8859-1?Q?a?= =?ISO-8859-2?Q?_b?=) (a b) > ``` > > In order to cause a SPACE to be displayed between two strings of encoded text, the SPACE MAY be encoded as part of one of the 'encoded-word's. So this should do it: ``` Subject: =?ISO-8859-1?Q?=C1ngel=20R=EDos?= escucha y sorprende ``` --- **Edit**    Try this function: ``` function _encodeHeader($str, $charset='ISO-8859-1') { $words = preg_split('/(\s+)/', $str, -1, PREG_SPLIT_NO_EMPTY | PREG_SPLIT_DELIM_CAPTURE); $func = create_function('$match', 'return $match[0] === " " ? "_" : sprintf("=%02X", ord($match[0]));'); $encoded = false; foreach ($words as $key => &$word) { if (!ctype_space($word)) { $tmp = preg_replace_callback('/[^\x21-\x3C\x3E-\x5E\x60-\x7E]/', $func, $word); if ($tmp !== $word) { if (!$encoded) { $word = '=?'.$charset.'?Q?'.$tmp; } else { $word = $tmp; if ($key > 0) { $words[$key-1] = preg_replace_callback('/[^\x21-\x3C\x3E-\x5E\x60-\x7E]/', $func, $words[$key-1]); } } $encoded = true; } else { if ($encoded) { $words[$key-2] .= '?='; } $encoded = false; } } } if ($encoded) { $words[$key] .= '?='; } return implode('', $words); } ```
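The rule is easy to cross-check with an existing RFC 2047 implementation. Python's `email.header` module (used here only as an independent cross-check, not as part of the PHP fix) keeps the space inside the encoded word, and the round trip restores it:

```python
from email.header import Header, decode_header, make_header

subject = "Ángel Ríos escucha y sorprende"

# Encode the whole subject as an ISO-8859-1 encoded word
encoded = Header(subject, "iso-8859-1").encode()
print(encoded)  # spaces are encoded (as "_" in Q encoding), never left bare

# Decoding restores the original subject, spaces intact
decoded = str(make_header(decode_header(encoded)))
print(decoded)
```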
Add ``` $input = str_replace('?', '=3F', $input); ``` to this fragment: ``` if (!empty($matches[1])) { $input = str_replace('?', '=3F', $input); $input = str_replace(' ', '=20', $input); $input = '=?' . $charset . '?Q?' .$input . '?='; } ``` This escapes any literal `?` characters, which may not appear unencoded inside a Q-encoded word.
accented words in email subject break spacing - how do I stop this?
[ "php", "email", "character-encoding", "email-headers" ]
I have a script which generates queries in the following fashion (based on user input): ``` SELECT * FROM articles WHERE (articles.skeywords_auto ilike '%pm2%') AND spubid IN ( SELECT people.spubid FROM people WHERE (people.slast ilike 'chow') GROUP BY people.spubid) LIMIT 1; ``` The resulting data set: ``` Array ( [0] => Array ( [spubid] => A00603 [bactive] => t [bbatch_import] => t [bincomplete] => t [scitation_vis] => I,X [dentered] => 2009-07-24 17:07:27.241975 [sentered_by] => pubs_batchadd.php [drev] => 2009-07-24 17:07:27.241975 [srev_by] => pubs_batchadd.php [bpeer_reviewed] => t [sarticle] => Errata: PM2.5 and PM10 concentrations from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa (vol 69, pg 1, 2001) [spublication] => Environmental Monitoring and Assessment [ipublisher] => [svolume] => 71 [sissue] => [spage_start] => 207 [spage_end] => 210 [bon_cover] => f [scover_location] => [scover_vis] => I,X [sabstract] => [sabstract_vis] => I,X [sarticle_url] => [sdoi] => [sfile_location] => [sfile_name] => [sfile_vis] => I [sscience_codes] => [skeywords_manual] => [skeywords_auto] => 1,5,69,2001,africa,assessment,concentrations,environmental,errata,experiment,fuels,low-smoke,macro-scale,monitoring,pg,pm10,pm2,qalabotjha,south,vol [saward_number] => [snotes] => ``` ) The problem is that I also need all the columns from the 'people' table (as referenced in the sub select) to come back as part of the data set. I haven't (obviously) done much with sub selects in the past so this approach is very new to me. How do I pull back all the matching rows/columns from the articles table AS WELL as the rows/column from the people table?
Are you familiar with joins? Using ANSI syntax: ``` SELECT DISTINCT * FROM ARTICLES t JOIN PEOPLE p ON p.spubid = t.spubid AND p.slast ILIKE 'chow' WHERE t.skeywords_auto ILIKE '%pm2%' LIMIT 1; ``` The DISTINCT saves you from having to define a GROUP BY for every column returned from both tables. I included it because you had the GROUP BY on your subquery; I don't know if it was actually necessary.
Could you not use a join instead of a sub-select in this case? ``` SELECT a.*, p.* FROM articles as a INNER JOIN people as p ON a.spubid = p.spubid WHERE a.skeywords_auto ilike '%pm2%' AND p.slast ilike 'chow' LIMIT 1; ```
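To convince yourself that the join really returns columns from both tables, here is a minimal sketch using SQLite (table and column names taken from the question, trimmed to a few; SQLite's case-insensitive `LIKE` stands in for Postgres `ILIKE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (spubid TEXT, sarticle TEXT, skeywords_auto TEXT);
CREATE TABLE people   (spubid TEXT, slast TEXT);
INSERT INTO articles VALUES ('A00603', 'Errata: PM2.5 and PM10 ...', 'pm2,pm10,africa');
INSERT INTO people   VALUES ('A00603', 'Chow');
""")

# SQLite's LIKE is case-insensitive for ASCII, standing in for ILIKE
row = conn.execute("""
    SELECT a.*, p.*
    FROM articles AS a
    JOIN people   AS p ON a.spubid = p.spubid
    WHERE a.skeywords_auto LIKE '%pm2%'
      AND p.slast LIKE 'chow'
    LIMIT 1
""").fetchone()
print(row)  # one row: the articles columns followed by the people columns
```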
Pull back rows from multiple tables with a sub-select?
[ "sql", "database", "postgresql", "select" ]
I have the following code in a PHP file called via Ajax (note: the code is a stripped-down version of the actual code, but it still produces the error): ``` <?php session_start(); $response = array( 'status'=>'abc', 'a'=>'abc', 'b'=>'abc', 'c'=>'abc' ); header('X-JSON: '.json_encode($response)); exit; ?> ``` There are absolutely no spaces or anything before the opening `<?php` tag. Yet when I call the script I get: > '[11-Aug-2009 14:56:13] PHP Notice: A > session had already been started - > ignoring session\_start() in > \test\action.php on line 2 > > [11-Aug-2009 14:56:13] PHP Warning: > Cannot modify header information - > headers already sent by (output > started at \test\action.php:2) in > \test\action.php on line 10' Line 2 is the session\_start() command and line 10 is the header() command. So PHP is telling me that it has ignored the session\_start(), and then immediately telling me that that very line produced output... what's going on?
Well, believe it or not, the offending output is... ``` '[11-Aug-2009 14:56:13] PHP Notice: A session had already been started - ignoring session_start() in \test\action.php on line 2 ``` The error notice itself is what gets sent before your `header()` call. Either suppress the display of error messages (a bad idea in development), or find where you first started the session (which is really how you need to solve this).
It may be related to this [question of mine](https://stackoverflow.com/questions/1122395/php-session-and-path-setting/1122646).
session_start() followed by header() causing error
[ "php" ]
I tried using Link Checker to find any broken links, but the second link below is not included in the check: displayedPage.html returns a 404 error, yet it does not show up in Link Checker's report. What is the difference between the two `<a></a>` elements? Why wasn't the second one checked as a link? ``` <a href="showpage.go?page=mypage&room=yours"> <span>my own room</span> </a> ``` second: ``` <a onclick="javascript:window.open('my/displayedPage.html', '', 'width=590,height=450,scrollbars=no,resizable=no'); return true;" href="javascript:void(0)">Show Me</a> ```
The second one does not have an `href` attribute that can be checked with the link checker you are using. Presumably, the program you are using does not understand the `javascript:` protocol and/or ignores any other protocols than `http` and `ftp`.
It seems that your tool ignores javascript links. The second link is not a *pure* html link, it's a link created by calling javascript.
What is the difference between these two HTML anchors?
[ "javascript", "html", "hyperlink", "anchor", "broken-links" ]
I'm getting floating point values like (471, 326.723333333333) from calling `MouseEventArgs.GetPosition()` on my border control. While I can always round the numbers manually, I was wondering if there's some kind of setting to get whole numbers. I thought putting `SnapsToDevicePixels` to True on my border control would help but it doesn't, as it's probably unrelated. Any ideas? Thanks!
In WPF, you'll always have positioning returned in floating point values. If you want to round it, you'll need to do this manually. This is due to the resolution independence, and is by design.
Since WPF is device-independent and not all devices share the same pixel sizes, passing mouse events with fractional coordinates makes sense (also, you can put controls in 3D space, where the coordinate-system translation won't produce integral pixel values). SnapsToDevicePixels is just a property controlling how a control is displayed, so that edges and straight lines preferably lie on complete pixels rather than in between. It has nothing to do with how mouse events are handled (as you have also noticed). As a general rule of thumb, everything in WPF is measured in `double` values, so either live with it (your checks would likely be the same anyway) or continue rounding to `int` if you prefer :-)
Getting whole numbers from MouseEventArgs.GetPosition()?
[ "c#", "wpf", "mouseevent" ]
I know this is probably a simple question, but I'm attempting a tweak in a plugin, and JS is not my expertise, so I got stumped on the following: I have an array that can contain a number of values ($fruit) depending on what a user has entered. I want to add another variable to the array that isn't determined by manual input. I know push should apply here, but it doesn't seem to work. Why does the following syntax not work? ``` var veggies = "carrot"; var fruitvegbasket = $('#fruit').val(); fruitvegbasket.push(veggies); ``` --- update: I got it working doing this: ``` var fruit = $('#fruit').val(); var veggies = "carrot"; fruitvegbasket = new Array(); fruitvegbasket.push(fruit+","+veggies); ``` not sure that's the best way to do it, but it works. thanks all!
Off the top of my head I think it should be done like this: ``` var veggies = "carrot"; var fruitvegbasket = []; fruitvegbasket.push(veggies); ```
jQuery is not the same as an array. If you want to append something at the end of a jQuery object, use: ``` $('#fruit').append(veggies); ``` or to append it to the end of a form value like in your example: ``` $('#fruit').val($('#fruit').val()+veggies); ``` In your case, `fruitvegbasket` is a string that contains the current value of `#fruit`, not an array. jQuery ([jquery.com](http://jquery.com)) allows for DOM manipulation, and the specific function you called `val()` returns the `value` attribute of an `input` element as a string. You can't push something onto a string.
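The distinction this answer draws (`val()` returns a string, and strings have no `push`) is easy to see without jQuery; the values below are illustrative:

```javascript
var fruit = "apple,banana";    // stand-in for $('#fruit').val(): a plain string
var hasPush = typeof fruit.push === "function";
console.log(hasPush);          // false: strings have no push method

var basket = fruit.split(","); // turn the string into a real array first
basket.push("carrot");
console.log(basket.join(",")); // "apple,banana,carrot"
```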
Pushing value of Var into an Array
[ "javascript", "jquery", "arrays", "push" ]
I couldn't find any description of this on the site. What are some of the new features/improvements/changes in Eclipse Galileo from its previous version? Particularly for Java programming.
Open the Welcome Page (Help -> Welcome) and click the icon labeled "What's New" (a yellow star).
You can have a look at the main new features [here](http://download.eclipse.org/eclipse/downloads/drops/S-3.6M1-200908061400/eclipse-news-M1.html).
What are the new features in Eclipse Galileo?
[ "java", "eclipse" ]
I have a getJSON call which is inexplicably failing. The idea is, you click to submit a comment, a URL gets hit which determines if the comment is OK or has naughty words in it. The response is given in JSON form. Here's the pared-down JS that generates the call. The comment and the URL are already on the page; it grabs them and hits the URL: FORM HTML: ``` <fieldset id="mg_comment_fieldset" class="inlineLabels"> <div class="ctrlHolder"> <textarea id="id_comment" rows="10" cols="40" name="comment"></textarea> </div> <div class="form_block"> <input type="hidden" name="next" value="" /> <input id="mg_comment_url" type="hidden" name="comment_url" value="" /> <input id="mg_comment_submit" type="submit" value="Remark" /> </div> </fieldset> ``` SPECIFIC JS BLOCK THAT SENDS/READS RESPONSE: ``` $('input#mg_comment_submit').click(function(){ var comment = $("textarea#id_comment").val(); var comment_url = $('input#mg_comment_url').val(); $.getJSON( comment_url+"?callback=?&comment="+comment+"&next=", function(data){ console.log(data); alert(data); }); }); ``` The JSON response: ``` [{"errors": {"comment": ["Weve detected that your submission contains words which violate our Terms and Conditions. Please remove them and resubmit test"]}}] ``` It's being returned as a mimetype of application/json. It validates in JSONLint. I also tried adding a couple AJAX functions to try to catch errors, and they're both silent. I can see the request going out in Firebug, and coming back as status 200 responses, which validate in JSONLint and which I can traverse just fine in the JSON tab of the response. If I put an alert before the getJSON, it runs; it's just that nothing inside of it runs. I also find that if I change .getJSON to .get, the alerts do run, suggesting it's something with the JSON. I'm out of ideas as to what the problem could be. Using Firefox 3.0.13.
The querystring parameter "`callback=?`" comes into play if you are using cross-site scripting or JSONP; if you are posting to the same server, you don't need to use it. If you need or want to use that option, the server-side code needs to come back with the callback function included in the JSON response. **Example:** ``` $jsonData = getDataAsJson($_GET['symbol']); echo $_GET['callback'] . '(' . $jsonData . ');'; // prints: jsonp1232617941775({"symbol" : "IBM", "price" : "91.42"}); ``` So either make a server-side change if necessary or simply remove the "`callback=?`" parameter from the URL. Here's [more info on jsonp](http://www.ibm.com/developerworks/library/wa-aj-jsonp1/)
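The shape of a JSONP response is simple enough to sketch without a server; the function and callback names below are illustrative:

```javascript
// A JSONP response is just the JSON payload wrapped in a call to the
// callback name that the client supplied in the query string.
function jsonpResponse(callbackName, payload) {
  return callbackName + "(" + JSON.stringify(payload) + ");";
}

var body = jsonpResponse("jsonp1232617941775", { symbol: "IBM", price: "91.42" });
console.log(body); // jsonp1232617941775({"symbol":"IBM","price":"91.42"});
```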
Are you able to manually call your service without any errors? Have you tried using Firebug and looked under XHR to see the post/response of the JSON payloads? I normally use .NET as my endpoints, and with .NET 3.5 I need to use content type "application/json; charset=utf-8". Here is an example of a working JSON call I use in .NET with jQuery 1.3.2 ``` $.ajax({ type: "POST", url: "WebService1.ASMX/HelloWorld", contentType: "application/json; charset=utf-8", dataType: "json", data: "{}", success: function(res) { // Do your work here. // Remember, the results for a ASMX Web Service are wrapped // within the object "d" by default. e.g. {"d" : "Hello World"} } }); ```
getJSON fails, JSON validates
[ "", "javascript", "jquery", "ajax", "" ]
I cannot find a way that easily lets me create a new file, treat it as an ini file (not php.ini or similar... a separate ini file per user), and create/delete values using PHP. PHP seems to offer no easy way to create an ini file and read/write/delete values. So far, it's all just "read" - nothing about creating entries or manipulating keys/values.
Found following code snippet from the comments of the PHP documentation: ``` function write_ini_file($assoc_arr, $path, $has_sections=FALSE) { $content = ""; if ($has_sections) { foreach ($assoc_arr as $key=>$elem) { $content .= "[".$key."]\n"; foreach ($elem as $key2=>$elem2) { if(is_array($elem2)) { for($i=0;$i<count($elem2);$i++) { $content .= $key2."[] = \"".$elem2[$i]."\"\n"; } } else if($elem2=="") $content .= $key2." = \n"; else $content .= $key2." = \"".$elem2."\"\n"; } } } else { foreach ($assoc_arr as $key=>$elem) { if(is_array($elem)) { for($i=0;$i<count($elem);$i++) { $content .= $key."[] = \"".$elem[$i]."\"\n"; } } else if($elem=="") $content .= $key." = \n"; else $content .= $key." = \"".$elem."\"\n"; } } if (!$handle = fopen($path, 'w')) { return false; } $success = fwrite($handle, $content); fclose($handle); return $success; } ``` Usage: ``` $sampleData = array( 'first' => array( 'first-1' => 1, 'first-2' => 2, 'first-3' => 3, 'first-4' => 4, 'first-5' => 5, ), 'second' => array( 'second-1' => 1, 'second-2' => 2, 'second-3' => 3, 'second-4' => 4, 'second-5' => 5, )); write_ini_file($sampleData, './data.ini', true); ``` Good luck!
I can't vouch for how well it works, but there's some suggestions for implementing the opposite of [`parse_ini_file()`](http://www.php.net/manual/en/function.parse-ini-file.php) (i.e. `write_ini_file`, which isn't a standard PHP function) on the documentation page for `parse_ini_file`. You can use `write_ini_file` to send the values to a file, `parse_ini_file` to read them back in - modify the associative array that `parse_ini_file` returns, and then write the modified array back to the file with `write_ini_file`. Does that work for you?
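The create/read/modify/write cycle both answers describe is worth seeing end to end. Here is a hedged sketch of the same round-trip using Python's stdlib `configparser` rather than PHP (section and key names are illustrative; a real app would use a file path instead of the in-memory buffer):

```python
import configparser
import io

# Build an ini structure in memory
config = configparser.ConfigParser()
config["first"] = {"first-1": "1", "first-2": "2"}
config["second"] = {"second-1": "1"}

buf = io.StringIO()   # a real app would open() a per-user file here
config.write(buf)     # the "write_ini_file" step

# Read it back, modify a value, ready to write out again
config2 = configparser.ConfigParser()
config2.read_string(buf.getvalue())   # the "parse_ini_file" step
config2["first"]["first-1"] = "42"

print(config2["first"]["first-1"])  # 42
```

Deleting a key is `config2.remove_option("first", "first-1")`, which covers the "delete values" part of the question.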
create ini file, write values in PHP
[ "", "php", "configuration", "ini", "" ]
I have the following code, repeated on each `Form`, as part of the Update process. When the page loads the `BLL` returns a `DataSet`, say ``` _personInfo = ConnectBLL.BLL.Person.GetPerson(personID); ``` I store that `DataSet` in a `Form` level variable which I then use to check against for changes during the Validate/Update process. I pass a row at a time (*though there is never more than one row*) to a function that takes the value in a control and compares it to its corresponding column value in the `DataSet`. If it finds a difference, it sets that column equal to the new value and adds the name to a `List` of what's changed. ``` // Load Person info using (var tmpPersonDT = tmpPersonDS.Tables[0]) { if (tmpPersonDT.Rows.Count > 0) { foreach (DataRow row in tmpPersonDT.Rows) { CheckPersonData(row); } } } // Snippet of the CheckPersonData() that is being called.... if (!object.Equals(row["ResidencyCountyID"], lkuResidenceCounty.EditValue)) { row["ResidencyCountyID"] = lkuResidenceCounty.EditValue; _whatChanged.Add("ResidencyCounty"); } if (!object.Equals(row["ResponsibilityCountyID"], lkuResponsibleCounty.EditValue)) { row["ResponsibilityCountyID"] = lkuResponsibleCounty.EditValue; _whatChanged.Add("ResponsibilityCounty"); } if (!object.Equals(row["HispanicOriginFlag"], chkHispanic.EditValue)) { row["HispanicOriginFlag"] = chkHispanic.EditValue; _whatChanged.Add("HispanicOriginFlag"); } if (!object.Equals(row["CitizenFlag"], chkCitizen.EditValue)) { row["CitizenFlag"] = chkCitizen.EditValue; _whatChanged.Add("CitizenFlag"); } if (!object.Equals(row["VeteranFlag"], chkVeteran.EditValue)) { row["VeteranFlag"] = chkVeteran.EditValue; _whatChanged.Add("VeteranFlag"); } ``` **What I am trying to get an answer to, is this really the most efficient way of going about this?** If nothing else I would like to create a function to do the comparing instead of repeating it 30 times (it varies per form), but I can't quite figure that out.
I thought maybe I could use row[].ItemArray but that only has the values. I would have to *KNOW* ahead of time what order the items were in and bank on them not changing.... Am I missing something obvious for working with **DataSets/DataTables** in a **CRUD app**? --- [***juliandewitt's*** post below is fantastic!](https://stackoverflow.com/questions/1273197/a-more-effecient-way-to-work-with-datasets/1276314#1276314) I just, now, need some direction on how to use that in the above. Any links that anyone can point me to would be appreciated. Even better if you can post an example. Are there any drawbacks to using DataRows like that?
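The 30 repeated if-blocks in the question can be collapsed into a single data-driven loop over a column-name → new-value mapping. A hedged sketch of that idea in Python (the thread's code is C#, so `apply_changes` and the dict-based row are illustrative stand-ins; the same loop works in C# against a `Dictionary<string, object>` of control values):

```python
def apply_changes(row, new_values):
    """Update row (column -> stored value) from new_values and return
    the names of the columns that actually changed."""
    changed = []
    for column, new_value in new_values.items():
        if row.get(column) != new_value:
            row[column] = new_value
            changed.append(column)
    return changed

row = {"CitizenFlag": True, "VeteranFlag": False, "ResidencyCountyID": 7}
changed = apply_changes(row, {"CitizenFlag": True,
                              "VeteranFlag": True,
                              "ResidencyCountyID": 7})
print(changed)  # ['VeteranFlag']
```

The one-time cost is building the column → control-value mapping; after that, adding a field to a form means adding one dictionary entry instead of another if-block.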
It looks like you're doing a lot of manual labor which could be alleviated by Databinding your controls directly to your DataSet/Table. Databinding plumbs together your datasource (your dataset/table in this case) with your UI. When the value changes in the UI it will update the datasource. DataBinding is a *BIG* topic that warrants researching and testing. There are some gotchas with databinding to a DataTable/Set (the row changes don't get committed until the current row changes, which is annoying in your case of only working with a single row at a time--but there are workarounds). **Reworded:** Another thing to consider is using business objects to represent the data in your Sets/Tables. ORM (object-relational-mappers) can handle this for you, but they are large and hugely powerful frameworks that aren't easy to master. It's an entirely different paradigm from working with DataSets/Tables at the UI layer and is more true to object-oriented programming. DataSets and Tables are very good for working with tabular data, but they don't lend themselves too well to working with entities. For example, you would work against an instance of a *Person* object with properties like *IsHispanic* and *IsCitizen* rather than essentially working against cells in a table (no more `myPersonTable[0]["HispanicOriginFlag"]`...). **Further:** Unrelated to your question, but relevant to CRUD operations revolving around ADO.NET: it pays to become familiar with the state-tracking built into a DataTable/DataSet. There's lots built into ADO.NET to help make these apps easy to glue together, which would clean up *tons* of code like you've shown. As always RAD tools have the trade-off of giving up control for productivity--but writing them off without understanding them is guaranteeing that you will spend your days writing code like you've shown.
**Even More:** To build further on my previous *Further*, when you discover the ability to combine Visual Studio's DataSet generator with the built-in rowstate tracking of DataTables, and change-tracking of DataSets, it can be very easy to write a full CRUD system in little time. Here's a quick run-down on some of the steps involved: 1. Establish your database schema 2. In Visual Studio add a new DataSet item to a project 3. Find the Server Explorer (Under View) 4. Add your SQL Server as a Data Connection 5. Drag your table / stored proc / View into the DataSet's designer. 6. Right-click the "TableAdapter" that Visual Studio has generated for you; go to Configure 7. Configure the CRUD commands for the DataSet (the Select, Insert, Update, Delete commands) With that you've created a *Strongly-Typed* DataSet. The DataSet will contain a DataTable property named after the table / view / stored procedure used to generate the DataSet. That Table property will contain strongly-typed rows, which lets you access the cells within that row as properties rather than untyped items in an object array. So if you've generated a new DataSet named *MyDbTables*, with a table named *tblCustomers* which contains some columns like *CustomerId*, *Name*, etc... 
then you can work with it like this: **This is a variety of examples rolled into one, showing some of the common methods used for CRUD work--look into the methods and particularly into the TableAdapter class** ``` public void MyDtDemo() { // A TableAdapter is used to perform the CRUD operations to sync the DataSet/Table and Database var myTa = new ClassLibrary4.MyDbTablesTableAdapters.tblCustomersTableAdapter(); var myDataSet = new MyDbTables(); // 'Fill' will execute the TableAdapter's SELECT command to populate the DataTable myTa.Fill(myDataSet.tblCustomers); // Create a new Customer, and add him to the tblCustomers table var newCustomer = myDataSet.tblCustomers.NewtblCustomersRow(); newCustomer.Name = "John Smith"; myDataSet.tblCustomers.AddtblCustomersRow(newCustomer); // Show the pending changes in the DataTable var myTableChanges = myDataSet.tblCustomers.GetChanges(); // Or get the changes by change-state var myNewCustomers = myDataSet.tblCustomers.GetChanges(System.Data.DataRowState.Added); // Cancel the changes (if you don't want to commit them) myDataSet.tblCustomers.RejectChanges(); // - Or Commit them back to the Database using the TableAdapter again myTa.Update(myDataSet); } ``` *Also, pay attention to the RejectChanges() and AcceptChanges() methods of both DataSets and DataTables. They essentially tell your dataset that it has no changes (either by rejecting all changes, or 'committing' all changes), but be aware that calling AcceptChanges() and then trying to do an update will have no effect--the DataSet has lost track of any changes and assumes it is an accurate reflection of the Database.* ***And even more! 
Here's a reworked version of your example showing some of the rowstate tracking features, assuming you've followed my steps to create strongly-typed DataSets/Tables/Rows*** ``` public void CheckRows() { MyPersonDS tmpPersonDS = new MyPersonDS(); // Load Person info using (var tmpPersonDT = tmpPersonDS.PersonDT) { foreach (MyPersonRow row in tmpPersonDT.Rows) { CheckPersonData(row); } } } public void CheckPersonData(MyPersonRow row) { // If DataBinding is used, then show if the row is unchanged / modified / new... System.Diagnostics.Debug.WriteLine("Row State: " + row.RowState.ToString()); System.Diagnostics.Debug.WriteLine("Row Changes:"); System.Diagnostics.Debug.WriteLine(BuildRowChangeSummary(row)); // If not DataBound then update the strongly-typed row properties row.ResidencyCountyID = lkuResidencyCountyId.EditValue; } public string BuildRowChangeSummary(DataRow row) { System.Text.StringBuilder result = new System.Text.StringBuilder(); int rowColumnCount = row.Table.Columns.Count; for (int index = 0; index < rowColumnCount; ++index) { result.Append(string.Format("Original value of {0}: {1}\r\n", row.Table.Columns[index].ColumnName, row[index, DataRowVersion.Original])); result.Append(string.Format("Current value of {0}: {1}\r\n", row.Table.Columns[index].ColumnName, row[index, DataRowVersion.Current])); if (index < rowColumnCount - 1) { result.Append("\r\n"); } } return result.ToString(); } ```
Also the DataRow automatically keeps track of changes: ``` DataRow _personInfo = ConnectBLL.BLL.Person.GetPerson(personID); // _personInfo.RowState = DataRowState.Unchanged _personInfo["columnName"] = "value"; _personInfo["columnName2"] = "value2"; _personInfo.EndEdit(); // _personInfo.RowState = DataRowState.Modified ``` Now you can get the changed values by checking the RowState and comparing values as follows ``` var org = fRow["columnName", DataRowVersion.Original]; var newVal = fRow["columnName", DataRowVersion.Current]; ``` To detect changes you can also listen to the ColumnChanged event. ``` fTable.ColumnChanged += new DataColumnChangeEventHandler(delegate(object sender, DataColumnChangeEventArgs e) { Console.WriteLine(e.Column.ColumnName); } ); ```
A more efficient way to work with DataSets
[ "", "c#", ".net", "winforms", "refactoring", "dataset", "" ]
Why can I not cast a `List<ObjBase>` as `List<Obj>`? Why does the following not work: ``` internal class ObjBase { } internal class Obj : ObjBase { } internal class ObjManager { internal List<Obj> returnStuff() { return getSomeStuff() as List<Obj>; } private List<ObjBase> getSomeStuff() { return new List<ObjBase>(); } } ``` Instead I have to do this: ``` internal class ObjBase { } internal class Obj : ObjBase { } internal class ObjManager { internal List<Obj> returnStuff() { List<ObjBase> returnedList = getSomeStuff(); List<Obj> listToReturn = new List<Obj>(returnedList.Count); foreach (ObjBase currentBaseObject in returnedList) { listToReturn.Add(currentBaseObject as Obj); } return listToReturn; } private List<ObjBase> getSomeStuff() { return new List<ObjBase>(); } } ``` I get the following error in Visual Studio 2008 (shortened for readability): > Cannot convert type 'List' to 'List' via a reference conversion, boxing conversion, unboxing conversion, wrapping conversion, or null type conversion Thanks.
Please look at the following questions: [.NET Casting Generic List](https://stackoverflow.com/questions/674715/net-casting-generic-list) [Why does this generic cast fail?](https://stackoverflow.com/questions/1133356/why-does-this-generic-cast-fail/1133389)
You can use `Cast` and `ToList` extension methods from System.Linq to have this in one line. Instead of ``` internal List<Obj> returnStuff() { return getSomeStuff() as List<Obj>; } ``` do this: ``` internal List<Obj> returnStuff() { return getSomeStuff().Cast<Obj>().ToList(); } ```
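The reason the element-wise conversion is unavoidable shows up in any language: a container of base-typed references is not itself a container of derived-typed references, so each element has to be checked and converted individually. A hedged sketch of the `Cast<Obj>().ToList()` idea in Python (class and function names are illustrative):

```python
class ObjBase:
    pass

class Obj(ObjBase):
    pass

def cast_list(items, target):
    """Element-wise 'cast': fails loudly on a mismatch, like LINQ's Cast<T>."""
    out = []
    for item in items:
        if not isinstance(item, target):
            raise TypeError("element %r is not a %s" % (item, target.__name__))
        out.append(item)
    return out

casted = cast_list([Obj(), Obj()], Obj)
print(len(casted))  # 2
```

Note that, like LINQ's `Cast<T>`, this raises if any element is really only an `ObjBase`; an `OfType<T>`-style variant would silently filter those out instead.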
C# Casting a List<ObjBase> as List<Obj>
[ "", "c#", "generics", "inheritance", "list", "casting", "" ]
I've rewritten the message for better understanding: how can I search for an address only, using the Google Maps API? When I search for an address, I don't want Google to give me a business with the same name as the address (I don't want to get businesses at all). Thanks for your help.
If you are using the AJAX Search API, there is an [argument](http://code.google.com/apis/ajaxsearch/documentation/reference.html#_fonje_local) for Local Search which allows you to exclude business listings: > mrt? - This optional argument > specifies which type of listing the > user is interested in. Valid values > include: > > * blended - request KML, Local Business Listings, and Geocode results > * kmlonly - request KML and Geocode results > * localonly - request Local Business Listings and Geocode results
Would the [GoogleBar](http://code.google.com/apis/maps/documentation/services.html#LocalSearch) work for you? It won't return business results unless they're searched for. Here's an [example](http://code.google.com/apis/maps/documentation/examples/control-googlebar.html). **UPDATE:** If you ONLY want to search for addresses - the Geocoder is really your only option.
search address on google map api
[ "", "c#", ".net", "api", "google-maps", "geocoding", "" ]
I'm trying to write some code that uses Numpy. However, I can't import it: ``` Python 2.6.2 (r262, May 15 2009, 10:22:27) [GCC 3.4.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named numpy ``` I tried the suggestions in [this question](https://stackoverflow.com/questions/233320/cannot-import-sqlite-with-python-2-6 "Cannot import sqlite with Python 2.6"): ``` >>> import sys >>> print sys.path ['', '/usr/intel/pkgs/python/2.6.2/lib/python26.zip', '/usr/intel/pkgs/python/2.6.2/lib/python2.6', '/usr/intel/pkgs/python/2.6.2/lib/python2.6/plat-linux2', '/usr/intel/pkgs/python/2.6.2/lib/python2.6/lib-tk', '/usr/intel/pkgs/python/2.6.2/lib/python2.6/lib-old', '/usr/intel/pkgs/python/2.6.2/lib/python2.6/lib-dynload', '/usr/intel/pkgs/python/2.6.2/lib/python2.6/site-packages'] ``` and I searched for files named `numpy` in that path: ``` $ find /usr/intel/pkgs/python/2.6.2/bin/python -iname numpy\* ``` But nothing came up. So... * Are there any other places in which Python modules are commonly installed? * How can I install numpy locally in my account, if it turns out that it isn't installed in the central areas?
Have you installed it? On debian/ubuntu: ``` aptitude install python-numpy ``` On windows: [`http://sourceforge.net/projects/numpy/files/NumPy/`](http://sourceforge.net/projects/numpy/files/NumPy/) On other systems: [`http://sourceforge.net/projects/numpy/files/NumPy/`](http://sourceforge.net/projects/numpy/files/NumPy/) ``` $ tar xfz numpy-n.m.tar.gz $ cd numpy-n.m $ python setup.py install ```
Your sys.path is kind of unusual, as each entry is prefixed with /usr/intel. I guess numpy is installed in the usual non-prefixed place, e.g. /usr/share/pyshared/numpy on my Ubuntu system. Try `find / -iname '*numpy*'`
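Besides searching the filesystem, you can ask the interpreter itself whether it can import a module and where it would load it from. A hedged sketch for modern Python 3 (the thread uses Python 2.6, where the `imp` module played this role; the `locate` helper is invented for illustration, and `json` stands in for any installed module):

```python
import importlib.util

def locate(module_name):
    """Return the file a module would be loaded from, or None if not importable."""
    spec = importlib.util.find_spec(module_name)
    return getattr(spec, "origin", None)

print(locate("json"))                          # a path inside the stdlib
print(locate("definitely_not_installed_xyz"))  # None
```

If `locate("numpy")` is `None`, the package simply isn't visible on `sys.path`, which points at installing it per-user (e.g. `pip install --user numpy` on current Pythons) or adding its install prefix to `PYTHONPATH`.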
Can't import Numpy in Python
[ "", "python", "import", "numpy", "" ]
What I want to do is have a page at `/Products/Details/{id}`, which routes to the action details on `ProductsController` and also an edit page at `/Products/Details/Edit/{id}`. I tried to do this using `[ActionName("Details/Edit")]` on the action but that doesn't work.
You can't have a slash in your action name. Why not have the following actions? * /Products/Details/{id} -For display * /Products/Edit/{id} -For edit My preference would be to do the following: * /Products/{id}/View -For display * /Products/{id}/Edit/ -For edit Hope that makes sense!
Add route like this ***BEFORE*** the default one: ``` routes.MapRoute( "DefaultWithDetails", "{controller}/Details/{action}/{id}", null); ```
ASP.NET MVC Routing: Can I have an action name with a slash in it?
[ "", "c#", "asp.net-mvc", "url", "routes", "" ]
I recognize that an email address can basically be indefinitely long so any size I impose on my varchar email address field is going to be arbitrary. However, I was wondering what the "standard" is? How long do you guys make it? (same question for Name field...) **update:** Apparently the max length for an email address is 320 (<=64 name part, <= 255 domain). Do you use this?
The theoretical limit is really long, but do you really need to worry about these long Email addresses? If someone can't log in with a 100-char Email, do you really care? We actually prefer they can't. Some statistical data may shed some light on the issue. We analyzed a database with over 10 million Email addresses. These addresses are not confirmed so there are invalid ones. Here are some interesting facts, 1. The longest valid one is 89. 2. There are hundreds longer ones up to the limit of our column (255) but they are apparently fake by visual inspection. 3. The peak of the length distribution is at 19. 4. There isn't a long tail. Everything falls off sharply after 38. We cleaned up the DB by throwing away anything longer than 40. The good news is that no one has complained, but the bad news is that not many records got cleaned out.
The following email address is only 94 characters: **i.have.a.really.long.name.like.seetharam.krishnapillai@AReallyLongCompanyNameOfSomeKind.com.au** * Would an organisation *actually give* you an email that long? * If they were stupid enough to, would you actually *use* an email address like that? * Would *anyone?* Of course not. Too long to type and too hard to remember. Even a 92-year-old technophobe would figure out how to sign up for a nice short gmail address, and just use that, rather than type this into your registration page. Disk space probably isn't an issue, but there are at least two problems with allowing user input fields to be many times longer than they need to be: * **Displaying them could mess up your UI** (at best they will be cut off, at worst they push your containers and margins around) * **Malicious users can do things with them you can't anticipate** (like those cases where hackers used a free online API to store a bunch of data) I like 50 chars: 123456789.123456789.123456789@1234567890123456.com If one user in a million has to use their other email address to use my app, so be it. (Statistics show that no-one actually enters more than about 40 chars for email address, see e.g.: ZZ Coder's answer <https://stackoverflow.com/a/1297352/87861>)
How long should SQL email fields be?
[ "", "sql", "varchar", "convention", "" ]
How do I disable assertions in Python? That is, if an assertion fails, I don't want it to throw an `AssertionError`, but to keep going. How do I do that?
> #How do I disable assertions in Python? There are multiple approaches that affect a single process, the environment, or a single line of code. I demonstrate each. ## For the whole process Using the `-O` flag (capital O) disables all assert statements in a process. For example: ``` $ python -Oc "assert False" $ python -c "assert False" Traceback (most recent call last): File "<string>", line 1, in <module> AssertionError ``` Note that by disable I mean it also does not execute the expression that follows it: ``` $ python -Oc "assert 1/0" $ python -c "assert 1/0" Traceback (most recent call last): File "<string>", line 1, in <module> ZeroDivisionError: integer division or modulo by zero ``` ## For the environment You can use an environment variable to set this flag as well. This will affect every process that uses or inherits the environment. E.g., in Windows, setting and then clearing the environment variable: ``` C:\>python -c "assert False" Traceback (most recent call last): File "<string>", line 1, in <module> AssertionError C:\>SET PYTHONOPTIMIZE=TRUE C:\>python -c "assert False" C:\>SET PYTHONOPTIMIZE= C:\>python -c "assert False" Traceback (most recent call last): File "<string>", line 1, in <module> AssertionError ``` Same in Unix (using set and [unset](https://stackoverflow.com/questions/6877727/how-do-i-delete-unset-an-exported-environment-variable) for respective functionality) ## Single point in code You continue your question: > if an assertion fails, I don't want it to throw an AssertionError, but to keep going. 
You can either ensure control flow does not reach the assertion, for example: ``` if False: assert False, "we know this fails, but we don't get here" ``` or if you want the assert expression to be exercised then you can catch the assertion error: ``` try: assert False, "this code runs, fails, and the exception is caught" except AssertionError as e: print(repr(e)) ``` which prints: ``` AssertionError('this code runs, fails, and the exception is caught') ``` and you'll keep going from the point you handled the `AssertionError`. ## References From [the `assert` documentation](https://docs.python.org/reference/simple_stmts.html#the-assert-statement): An assert statement like this: > ``` > assert expression #, optional_message > ``` Is equivalent to > ``` > if __debug__: > if not expression: raise AssertionError #(optional_message) > ``` And, > the built-in variable `__debug__` is `True` under normal circumstances, `False` when optimization is requested (command line option `-O`). and further > Assignments to `__debug__` are illegal. The value for the built-in variable is determined when the interpreter starts. From the usage docs: > [**-O**](https://docs.python.org/using/cmdline.html#cmdoption-O) > Turn on basic optimizations. This changes the filename extension for compiled (bytecode) files from .pyc to .pyo. See also PYTHONOPTIMIZE. and > [**PYTHONOPTIMIZE**](https://docs.python.org/using/cmdline.html#envvar-PYTHONOPTIMIZE) > If this is set to a non-empty string it is equivalent > to specifying the `-O` option. If set to an integer, it is equivalent to > specifying `-O` multiple times.
Call Python with the -O flag: test.py: ``` assert False print('Done') ``` Output: ``` C:\temp\py>C:\Python26\python.exe test.py Traceback (most recent call last): File "test.py", line 1, in <module> assert(False) AssertionError C:\temp\py>C:\Python26\python.exe -O test.py Done ```
Disable assertions in Python
[ "", "python", "debugging", "exception", "environment-variables", "assert", "" ]
How do you get a collection of all the types that inherit from a specific other type?
Something like: ``` public IEnumerable<Type> FindDerivedTypes(Assembly assembly, Type baseType) { return assembly.GetTypes().Where(t => baseType.IsAssignableFrom(t)); } ``` If you need to handle generics, that gets somewhat trickier (e.g. passing in the open `List<>` type but expecting to get back a type which derived from `List<int>`). Otherwise it's simple though :) If you want to exclude the type itself, you can do so easily enough: ``` public IEnumerable<Type> FindDerivedTypes(Assembly assembly, Type baseType) { return assembly.GetTypes().Where(t => t != baseType && baseType.IsAssignableFrom(t)); } ``` Note that this will also allow you to specify an interface and find all the types which implement it, rather than just working with classes as `Type.IsSubclassOf` does.
The following method will get a collection of types that inherit a type. **C#** ``` public IEnumerable<Type> FindSubClassesOf<TBaseType>() { var baseType = typeof(TBaseType); var assembly = baseType.Assembly; return assembly.GetTypes().Where(t => t.IsSubclassOf(baseType)); } ``` **VB.NET** ``` Public Function FindSubClasses(Of TBaseType)() As IEnumerable(Of Type) Dim baseType = GetType(TBaseType) Dim assembly = baseType.Assembly Return From t In assembly.GetTypes() Where t.IsSubClassOf(baseType) Select t End Function ``` If you need to include types that implement an interface see @Jon Skeet's answer.
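For comparison, the same "find all types derived from a base" query needs no assembly scan in Python, because every class tracks its direct subclasses; a recursive walk replaces the reflection loop. A hedged sketch (class names are illustrative, not part of the C# answers):

```python
def find_derived(base):
    """Return all classes that transitively inherit from base."""
    found = []
    for sub in base.__subclasses__():
        found.append(sub)
        found.extend(find_derived(sub))  # recurse for grandchildren etc.
    return found

class Animal:
    pass

class Dog(Animal):
    pass

class Puppy(Dog):
    pass

print([c.__name__ for c in find_derived(Animal)])  # ['Dog', 'Puppy']
```

As with `IsSubclassOf` in the second answer, the base class itself is excluded from the result.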
How to find all the types in an Assembly that Inherit from a Specific Type C#
[ "", "c#", "reflection", "" ]
I have a table with rows that symbolize order dates: ``` 2009-05-15 13:31:47.713 2009-05-15 22:09:32.227 2009-05-16 02:38:36.027 2009-05-16 12:06:49.743 2009-05-16 16:20:26.680 2009-05-17 01:36:19.480 2009-05-18 09:44:46.993 2009-05-18 14:06:12.073 2009-05-18 15:25:47.540 2009-05-19 10:28:24.150 ``` I would like have query that returns the following: ``` 2009-05-15 2 2009-05-16 5 2009-05-17 6 2009-05-18 9 2009-05-19 10 ``` Basically it keeps a running total of all the orders placed by the end of the day of the date indicated. The orders are not the orders on that day but all the orders since the earliest dates in the table. This is MSSQL 2000 and the datatype in the first table is just datetime, in the second it could be datetime or string, it doesn't really matter for my purposes.
I got this to work on SQL Server 2005. I *think* it should work with 2000, as well. ``` SELECT dt, count(q2.YourDate) FROM (SELECT DISTINCT CONVERT(varchar,YourDate,101) dt FROM YourTable) t1 JOIN YourTable q2 ON DATEADD(d,-1,CONVERT(varchar,YourDate,101)) < dt GROUP BY dt ``` This will query the table twice, but at least gives correct output.
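To sanity-check whatever SQL you settle on, here is a hedged sketch that reproduces the expected running totals in Python from the sample timestamps in the question (counting orders per calendar day, then accumulating):

```python
from collections import Counter
from itertools import accumulate

timestamps = [
    "2009-05-15 13:31:47.713", "2009-05-15 22:09:32.227",
    "2009-05-16 02:38:36.027", "2009-05-16 12:06:49.743",
    "2009-05-16 16:20:26.680", "2009-05-17 01:36:19.480",
    "2009-05-18 09:44:46.993", "2009-05-18 14:06:12.073",
    "2009-05-18 15:25:47.540", "2009-05-19 10:28:24.150",
]

per_day = Counter(ts[:10] for ts in timestamps)       # orders on each day
days = sorted(per_day)
running = list(accumulate(per_day[d] for d in days))  # cumulative totals

for day, total in zip(days, running):
    print(day, total)
```

This prints exactly the output asked for (2 / 5 / 6 / 9 / 10), which makes it easy to verify a candidate query against the same data.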
I recommend a 2-query solution. This is slow, but I use this method almost daily. The important thing is to NOT join the 2 tables in the first query. You want the duplication of each order for every date in your lookup table. You will need a lookup table with 1 row for each date of the time period you're interested in. Let's call it dboDateLookup. Here's what it will look like: ``` DtIndex 2009-05-15 2009-05-16 2009-05-17 2009-05-18 2009-05-19 ``` Let's also assume the order table, dboOrders, has 2 columns, ordernumber and orderdate. ``` ordernumber orderdate 1 2009-05-15 13:31:47.713 2 2009-05-15 22:09:32.227 3 2009-05-16 02:38:36.027 4 2009-05-16 12:06:49.743 5 2009-05-16 16:20:26.680 ``` **Query1:** ``` SELECT [DtIndex] AS ByDate, ordernumber, IIf(Format([orderdate],"yyyy-mm-dd")<=[DtIndex],1,0) AS NumOrdersBefore FROM [dboOrders], [dboDateLookup]; ``` **Query2:** ``` SELECT [ByDate], Sum([NumOrdersBefore]) AS RunningTotal FROM [Query1] GROUP BY [ByDate]; ```
SQL select and count all items that have occured before
[ "", "sql", "count", "" ]
I have a PHP5 DOMDocument and I try to find the root node (not the root element). Example: ``` <test> <element> <bla1>x</bla1> <bla2>x</bla2> </element> <element> <bla1>y</bla1> <bla2>y</bla2> </element> <element> <bla1>z</bla1> <bla2>z</bla2> </element> </test> ``` I want to get the DOMNode of "test" so that I can call - for example - hasChildNodes. I can get the "documentElement", but that's a DOMElement. Maybe I can go from there? ``` $d = DOMDocument::loadXML($xml); // [... do some stuff here to find document's root node ...] if ($rootnode->hasChildNodes()) echo 'yayy!' ``` Who can fill the gap? I seem to be blind. (Obviously it's not only hasChildNodes I want to call - so NO, it doesn't help to find another method to find out if the document contains stuff. That's just for my simple example. I need a DOMNode at the end.)
According to the PHP docs [DOMElement is a subclass of DOMNode](https://www.php.net/manual/en/class.domelement.php), so it should inherit the `hasChildNodes()`-method.
**DOMElement** extends **DOMNode**. You get the root **DOMElement** via `$d->documentElement`.
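The inheritance relationship both answers point out (an Element is also a Node) comes straight from the W3C DOM spec, so it is easy to verify outside PHP too. A hedged sketch with Python's stdlib `minidom`, which implements the same spec:

```python
from xml.dom.minidom import parseString

xml = "<test><element><bla1>x</bla1></element></test>"
doc = parseString(xml)

root = doc.documentElement   # an Element, which is also a Node
print(root.tagName)          # test
print(root.hasChildNodes())  # True
```

The same is true in PHP: `$d->documentElement` can be passed anywhere a `DOMNode` is expected, and `hasChildNodes()` is callable on it directly.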
PHP5: Find Root Node in DOMDocument
[ "", "php", "xml", "dom", "root-node", "" ]
Guys, this will probably be fairly simple, but how can I find all the matches of this pattern in a load of text using a regex? ``` [[data in here]] ``` EG: ``` Blah blah blah [[find]] and maybe [[this]] ``` So I am able to find the occurrences and then replace them with URLs. Does that make sense? I tried using ``` preg_match_all("/[[([^<]*)]]/", $data, $matches, PREG_OFFSET_CAPTURE); ``` But it returns errors. Any ideas? :)
Try this: ``` preg_match_all("/\[\[(.+?)\]\]/", $data, $matches, PREG_OFFSET_CAPTURE); ```
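The fix is escaping the square brackets, which are character-class metacharacters in every common regex flavor. The corrected pattern carries over directly; a runnable Python version as a cross-language check:

```python
import re

data = "Blah blah blah [[find]] and maybe [[this]]"

# Escaped brackets plus a non-greedy capture, same as the PCRE pattern above
matches = re.findall(r"\[\[(.+?)\]\]", data)
print(matches)  # ['find', 'this']
```

The non-greedy `.+?` matters: a greedy `.+` would swallow everything from the first `[[` to the last `]]` in one match.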
Maybe .... ``` "/\[.*\]/" ``` ? Just a wild guess. I think your basic problem is that you need to escape the '[]' brackets, as they are special chars.
Another PHP regex question
[ "", "php", "regex", "" ]
I realise that FileSystemWatcher does not provide a Move event, instead it will generate a separate Delete and Create events for the same file. (The FileSystemWatcher is watching both the source and destination folders). However how do we differentiate between a true file move and some random creation of a file that happens to have the same name as a file that was recently deleted? Some sort of property of the FileSystemEventArgs class such as "AssociatedDeleteFile" that is assigned the deleted file path if it is the result of a move, or NULL otherwise, would be great. But of course this doesn't exist. I also understand that the FileSystemWatcher is operating at the basic Filesystem level and so the concept of a "Move" may be only meaningful to higher level applications. But if this is the case, what sort of algorithm would people recommend to handle this situation in my application? Update based on feedback: The FileSystemWatcher class seems to see moving a file as simply 2 distinct events, a Delete of the original file, followed by a Create at the new location. Unfortunately there is no "link" provided between these events, so it is not obvious how to differentiate between a file move and a normal Delete or Create. At the OS level, a move is treated specially, you can move, say, a 1 GB file almost instantaneously. A couple of answers suggested using a hash on files to identify them reliably between events, and I will probably take this approach. But if anyone knows how to detect a move more simply, please leave an answer.
According to the [docs](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.aspx): > Common file system operations might > raise more than one event. For > example, when a file is moved from one > directory to another, several > OnChanged and some OnCreated and > OnDeleted events might be raised. > Moving a file is a complex operation > that consists of multiple simple > operations, therefore raising multiple > events. So if you're trying to be very careful about detecting moves, and having the same path is not good enough, you will have to use some sort of heuristic. For example, create a "fingerprint" using file name, size, last modified time, etc for files in the source folder. When you see any event that may signal a move, check the "fingerprint" against the new file.
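A hedged sketch of the fingerprint heuristic described above: record a fingerprint for each deleted file, and when a create event arrives, treat it as a move if a recent delete carries the same fingerprint. Paths, sizes and the tuple-based fingerprint are illustrative; a production version would add a content hash and expire stale entries after a timeout:

```python
import os

pending_deletes = {}  # fingerprint -> original path of a recently deleted file

def fingerprint(path, size, mtime):
    # Name + size + mtime; a real version might also hash (part of) the contents
    return (os.path.basename(path), size, mtime)

def on_deleted(path, size, mtime):
    pending_deletes[fingerprint(path, size, mtime)] = path

def on_created(path, size, mtime):
    old = pending_deletes.pop(fingerprint(path, size, mtime), None)
    if old is not None:
        return ("moved", old, path)
    return ("created", None, path)

on_deleted("/watched/src/report.pdf", 1024, 111.0)
event = on_created("/watched/dst/report.pdf", 1024, 111.0)
print(event)  # ('moved', '/watched/src/report.pdf', '/watched/dst/report.pdf')
```

The same pairing logic maps straight onto the `Deleted` and `Created` handlers of a C# `FileSystemWatcher`.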
As far as I understand it, the `Renamed` event is for files being moved...? My mistake - the docs specifically say that only files inside a moved folder are considered "renamed" in a cut-and-paste operation: > The operating system and FileSystemWatcher object interpret a cut-and-paste action or a move action as a rename action for a folder and its contents. If you cut and paste a folder with files into a folder being watched, the FileSystemWatcher object reports only the folder as new, but not its contents because they are essentially only renamed. It also says about moving files: > Common file system operations might raise more than one event. For example, when a file is moved from one directory to another, several OnChanged and some OnCreated and OnDeleted events might be raised. Moving a file is a complex operation that consists of multiple simple operations, therefore raising multiple events.
Detecting moved files using FileSystemWatcher
[ "", "c#", "filesystemwatcher", "" ]
I'm using PHP to make a simple caching system, but I'm going to be caching up to 10,000 files in one run of the script. At the moment I'm using a simple loop with

```
$file = "../cache/".$id.".htm";
$handle = fopen($file, 'w');
fwrite($handle, $temp);
fclose($handle);
```

($id being a random string which is assigned to a row in a database) but it seems a little slow; is there a better method of doing that? Also, I read somewhere that on some operating systems you can't store thousands and thousands of files in one single directory; is this relevant to CentOS or Debian? Bear in mind this folder may well end up having over a million small files in it. Simple questions, I suppose, but I don't want to start scaling this code and then find out I'm doing it wrong; I'm only testing with caching 10-30 pages at a time at the moment.
Remember that in UNIX, everything is a file. When you put that many files into a directory, something has to keep track of those files. If you do an :- ``` ls -la ``` You'll probably note that the '.' has grown to some size. This is where all the info on your 10000 files is stored. Every seek, and every write into that directory will involve parsing that large directory entry. You should implement some kind of directory hashing system. This'll involve creating subdirectories under your target dir. eg. /somedir/a/b/c/yourfile.txt /somedir/d/e/f/yourfile.txt This'll keep the size of each directory entry quite small, and speed up IO operations.
The number of files you can effectively use in one directory depends not on the operating system but on the filesystem. You can split your cache dir effectively by taking the md5 hash of the filename and using its first 1, 2 or 3 characters as a subdirectory. Of course you have to create the directory if it does not exist, and use the same approach when retrieving files from the cache. For a few tens of thousands of files, 2 characters (256 subdirs from 00 to ff) would be enough.
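A hedged Python sketch of the hash-prefix sharding scheme described above (the function name and `.htm` layout are illustrative, not taken from the question's PHP):

```python
import hashlib
import os

def cache_path(cache_root, file_id, levels=2):
    """Map an id to cache_root/<xx>/<yy>/<id>.htm using md5-prefix subdirectories."""
    digest = hashlib.md5(file_id.encode("utf-8")).hexdigest()
    shards = [digest[i * 2 : i * 2 + 2] for i in range(levels)]
    directory = os.path.join(cache_root, *shards)
    os.makedirs(directory, exist_ok=True)  # create shard dirs on first use
    return os.path.join(directory, file_id + ".htm")
```

With two levels of two hex characters each, a million files spread over 65,536 directories averages about 15 files per directory, and since the mapping is deterministic, reads land in the same shard as writes.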
PHP writing large amounts of files to one directory
[ "", "php", "caching", "" ]
I'm writing an application in C++ and I was thinking of using a simple embedded web server as its GUI, so I could serve my application on a port on localhost. Which such web server would you recommend for C++/C? Thanks
If you are using boost then rolling your own in [boost:asio](http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/examples.html) is simple. I assume by **embedded** you mean a built in webserver not that you are running on some tiny embedded hardware. If you want something simpler look at [mongoose](http://code.google.com/p/mongoose/) - also see <https://stackoverflow.com/questions/738273/open-source-c-c-embedded-web-server>
I use [C++ Wt](http://www.webtoolkit.eu/wt) for these kinds of interfaces
which embedded web server to use for my app GUI
[ "", "c++", "embeddedwebserver", "" ]
Right now, I need to load a huge amount of data from a database into a vector, but when I loaded 38,000 rows of data, the program threw an OutOfMemoryError exception. What can I do to handle this? I think there may be a memory leak in my program; what are good methods to detect it? Thanks
Provide more memory to your JVM (usually using `-Xmx`/`-Xms`) or don't load all the data into memory. For many operations on huge amounts of data there are algorithms which don't need access to all of it at once. One class of such algorithms are [divide and conquer algorithms](http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm).
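As an illustration of working on the data without holding all of it in memory, here is a hedged Python sketch (sqlite3 stands in for the real database) that streams rows in fixed-size batches instead of materialising all 38,000 rows at once:

```python
import sqlite3

def iter_rows(cursor, batch_size=1000):
    """Yield rows lazily so only one batch is in memory at a time."""
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            break
        yield from batch

# Demo: aggregate 38,000 rows without ever loading them all
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(38000)))
total = sum(v for (v,) in iter_rows(conn.execute("SELECT v FROM t")))
```

The same pattern exists in JDBC via `Statement.setFetchSize` plus iterating the `ResultSet`, rather than copying everything into a collection first.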
If you must have all the data in memory, try caching commonly appearing objects. For example, if you are looking at employee records and they all have a job title, use a HashMap when loading the data and reuse the job titles already found. This can dramatically lower the amount of memory you're using. Also, before you do anything, use a profiler to see where memory is being wasted, and to check if things that can be garbage collected have no references floating around. Again, String is a common example, since if for example you're using the first 10 chars of a 2000 char string, and you have used substring instead of allocating a new String, what you actually have is a reference to a char[2000] array, with two indices pointing at 0 and 10. Again, a huge memory waster.
How to handle huge data in java
[ "", "java", "database", "" ]
How should I parse the following `String` using Java to extract the file path? `?` stands for any number of random characters `_` stands for any number of white spaces (no new line)

```
?[LoadFile]_file_=_"foo/bar/baz.xml"?
```

Example:

```
10:52:21.212 [LoadFile] file = "foo/bar/baz.xml"
```

should extract `foo/bar/baz.xml`
```
String regex = ".*\\[LoadFile\\]\\s+file\\s+=\\s+\"([^\"]+)\".*";
Matcher m = Pattern.compile(regex).matcher(inputString);
if (!m.find()) {
    System.out.println("No match found.");
} else {
    String result = m.group(1);
}
```

The `String` in result should be your file path. (*assuming I didn't make any mistakes*) You should take a look at the [`Pattern`](http://www.google.com/url?sa=t&source=web&ct=res&cd=1&url=http%3A%2F%2Fjava.sun.com%2Fjavase%2F6%2Fdocs%2Fapi%2Fjava%2Futil%2Fregex%2FPattern.html&ei=HWJwSvTtMIP0NYDQ5dgI&usg=AFQjCNH0h0XmzRaUcuXCvjQPzquOfjdjUQ&sig2=FIjvzu7PrMn9L480P4stRQ) class for some regular expression help. They can be a very powerful string manipulation tool.
Short answer: use *subSequence()*. ``` if (line.contains("[LoadFile]")) result = line.subSequence(line.indexOf('"'), line.lastIndexOf('"')).toString(); ``` On my machine, this consistently takes less than 10,000 ns. I am taking "efficient" to mean faster. The *regex* option is considerably slower (about 9 or 10 times slower). The primary advantage of the regex option is that it might be easier for another programmer to figure out what you are doing (but then use comments to help them). To make the regex option more efficient, pre-compile it: ``` private static final String FILE_REGEX = ".*\\[LoadFile\\]\\s+file\\s+=\\s+\"([^\"].+)\".*"; private static final Pattern FILE_PATTERN = Pattern.compile(FILE_REGEX); ``` But this still leaves it slower. I record times between 80,000 and 100,000 ns. The StringTokenizer option is more efficient than the regex: ``` if (line.contains("[LoadFile]")) { StringTokenizer tokenizer = new StringTokenizer(line, "\""); tokenizer.nextToken(); result = tokenizer.nextToken(); } ``` This hovers around 40,000 ns for me, putting it in at 2-3 times faster than the regex. In this scenario, split() is also an option, which for me (using Java 6\_13) is just a little faster than the Tokenizer: ``` if (line.contains("[LoadFile]")) { String[] values = line.split("\""); result = values[1]; } ``` This averages times of 35,000 ns for me. Of course, none of this is checking for errors. Each option will get a little slower when you start factoring that in, but I think the *subSequnce()* option will still beat them all. You have to know the exact parameters and expectations to figure out how fault-tolerant each option needs to be.
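The same regex-versus-split comparison can be sketched in Python (illustrative only; relative timings will differ from the Java measurements above, but both approaches should extract the identical path):

```python
import re

line = '10:52:21.212 [LoadFile] file = "foo/bar/baz.xml"'

# Regex approach: anchored on the [LoadFile] marker
pattern = re.compile(r'\[LoadFile\]\s+file\s+=\s+"([^"]+)"')
match = pattern.search(line)
via_regex = match.group(1) if match else None

# Split approach: take the text between the first pair of quotes
via_split = line.split('"')[1] if "[LoadFile]" in line else None

assert via_regex == via_split == "foo/bar/baz.xml"
```

Pre-compiling the pattern once, as here, mirrors the advice above about hoisting `Pattern.compile` out of the hot path.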
What is an efficient way to parse a String in Java?
[ "", "java", "regex", "string", "parsing", "" ]
For example, I can catch the Delete event for various files in a folder tree, but how would I go about determining which user caused the delete to happen? I couldn't find anything obvious in the MSDN documentation for FileSystemWatcher, so maybe it is just not possible. I'd be curious if there is a solution however.
This isn't currently possible with the current implementations of the FileSystemWatcher as it does not receive this type of information when a file is deleted, or anything about a file changes.
It is possible using Folder Auditing (folder Properties > Security > Advanced Options > Auditing) and then looking up the Security Event Log after the FileSystemWatcher event fires.

```
string GetUser(string path)
{
    string user = "Unknown";
    DateTime nowDate = DateTime.Now;
    System.Threading.Thread.Sleep(1000);
    // Search for the user in the Security event log
    // (EVENTLOGSERVER names the machine whose Security log is queried)
    EventLog secLog = new EventLog("Security", EVENTLOGSERVER);
    for (int i = secLog.Entries.Count - 1; i >= Math.Max(secLog.Entries.Count - 1500, 0); i--)
    {
        EventLogEntry entry = secLog.Entries[i];
        if (IsValidEntry(path, nowDate, entry))
        {
            user = entry.ReplacementStrings[11];
            break;
        }
    }
    return user;
}

bool IsValidEntry(string path, DateTime nowDate, EventLogEntry entry)
{
    return entry.EntryType == EventLogEntryType.SuccessAudit
        && (entry.InstanceId == 560 || entry.InstanceId == 564)
        && !entry.UserName.EndsWith("SYSTEM")
        && Math.Abs(nowDate.Subtract(entry.TimeGenerated).TotalSeconds) <= 20
        && entry.ReplacementStrings.GetUpperBound(0) >= 11
        && entry.ReplacementStrings[2].Length >= 4
        && path.EndsWith(entry.ReplacementStrings[2].Substring(4));
}
```
Which user caused FileSystemWatcher events?
[ "", "c#", "filesystemwatcher", "" ]
I'm using SimpleXML to get the Twitter profile avatar URL from the XML status page. This is the code I'm using:

```
<? $username = twitter;
$xml = simplexml_load_file("http://twitter.com/users/".$username.".xml");
echo $xml->user->profile_image_url;
?>
```

The XML page loads when I visit it, but for some reason nothing is being echoed. No errors. Nothing. When I visit it in a browser, I get this:

```
<?xml version="1.0" encoding="UTF-8"?>
<user>
<id>783214</id>
<name>Twitter</name>
<screen_name>twitter</screen_name>
<location>San Francisco, CA</location>
<description>Always wondering what everyone's doing.</description>
<profile_image_url>http://a1.twimg.com/profile_images/75075164/twitter_bird_profile_normal.png</profile_image_url>
<url>http://twitter.com</url>..... (the rest is long and irrelevant to the question)
```

The data is there, why won't it echo?
After loading the XML document, the root element `user` is represented by the `SimpleXMLElement` object saved in `$xml`. Therefore `$xml->user` does not exist. This one should work: ``` <? $username = "twitter"; // <-- You did not use quotes here?! Typo? $xml = simplexml_load_file("http://twitter.com/users/".$username.".xml"); echo $xml->profile_image_url; // <-- No $xml->user here! ?> ```
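The same pitfall can be demonstrated with Python's ElementTree (a different library, used here only to illustrate that the object returned by parsing already *is* the root element, so you should not look up the root element by name again):

```python
import xml.etree.ElementTree as ET

xml = """<user>
  <screen_name>twitter</screen_name>
  <profile_image_url>http://a1.twimg.com/profile_images/75075164/twitter_bird_profile_normal.png</profile_image_url>
</user>"""

root = ET.fromstring(xml)          # root already IS the <user> element
assert root.tag == "user"
assert root.find("user") is None   # looking for a nested <user> finds nothing
assert root.find("profile_image_url").text.endswith("twitter_bird_profile_normal.png")
```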
It is because the root element (in this case, `<user>`) is implied - you do not have to specify it. Try this: ``` echo $xml->profile_image_url; ```
Using SimpleXML to get twitter avatar url (weird)
[ "", "php", "xml", "twitter", "" ]
Simple question for all you pragmatic object-oriented fellas. I have read many times to avoid classes like "Processor" and "xxxxHandler" in order to adhere to OO standards, and I believe it's a good measure of the understandability of the system code. Let's assume we have some software that scans a file structure, say a bunch of specific CSV files. Let's say we have an independent module called CsvParser.

```
class CsvParser
{
    public string GetToken(int position) { .. }
    public bool ReadLine() { .. }
}

class MyCsvFile
{
    public string FullPath { get; }

    public void Scan()
    {
        var csvp = new CsvParser(FullPath);
        while (csvp.ReadLine())
        { /* Parse the file that this class represents */ }
    }
}
```

This will save having a "FileScanner" class, which is a -Processor- type class. Something that will collect, say, a bunch of files from a directory, and scan each.

```
class MyFileScan
{
    public string[] Files { get; set; }
    public void GetFiles() { this.Files = Directory.GetFiles(..); }

    public void ScanFiles()
    {
        foreach (string thisFilePath in Files)
        {
            var csvp = new CsvParser(thisFilePath);
            /* ... */
        }
    }
}
```

The OO approach dictates having the MyCsvFile class, and then a method representing the operation on the object. Any thoughts? What do you programmers think?
I think what you're describing is that objects should take care of operations that only require themselves, which is in general a good rule to follow. There's nothing wrong with a "processor" class, as long as it "processes" a few different (but related) things. But if you have a class that only processes one thing (like a CSV parser only parses CSVs) then really there's no reason for the thing that the processor processes not to do the processing on itself. However, there is a common reason for breaking this rule: usually you don't want to do things you don't have to do. For example, with your CSV class, if all you want is to find the row in the CSV where the first cell is "Bob" and get the third column in that row (which is, say, Bob's birth date) then you don't want to read in the entire file, parse it, and then search through the nice data structure you just created: it's inefficient, especially if your CSV has 100K lines and Bob's entry was on line 5. You could redesign your CSV class to do small-scale operations on CSV's, like skipping to the next line and getting the first cell. But now you're implementing methods that you wouldn't really speak of a CSV having. CSV's don't read lines, they store them. They don't find cells, they just have them. Furthermore, if you want to do a large-scale operation such as reading in the entire CSV and sorting the lines by the first cell, you'll wish you had your old way of reading in the entire file, parsing it, and going over the whole data structure you created. You could do both in the same class, but now your class is really two classes for two different purposes. Your class has lost cohesion and any instance of the class you create is going to have twice as much baggage, while you're only likely to use half of it. In this case, it makes sense to have a high-level abstraction of the CSV (for the large-scale operations) and a "processor" class for low-level operations. 
(The following is written in Java since I know that better than I know C#):

```
public class CSV {
    final private String filename;
    private String[][] data;
    private boolean loaded;

    public CSV(String filename) { ... }
    public boolean isLoaded() { ... }
    public void load() { ... }
    public void saveChanges() { ... }
    public void insertRowAt(int rowIndex, String[] row) { ... }
    public void sortRowsByColumn(int columnIndex) { ... }
    ...
}

public class CSVReader {
    /*
     * This kind of thing is reasonably implemented as a subclassable singleton
     * because it doesn't hold state but you might want to subclass it, perhaps with
     * a processor class for another tabular file format.
     */
    protected CSVReader() { }

    protected static class SingletonHolder {
        final public static CSVReader instance = new CSVReader();
    }

    public static CSVReader getInstance() {
        return SingletonHolder.instance;
    }

    public String getCell(String filename, int row, int column) { ... }
    public String searchRelative(String filename, String searchValue, int searchColumn, int returnColumn) { ... }
    ...
}
```

A similar well-known example of this is SAX and DOM. SAX is the low-level, fine-grained access, while DOM is the high-level abstraction.
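A hedged Python sketch of the low-level, `searchRelative`-style access described above: scan row by row and stop at the first hit, rather than loading and parsing the whole file (the function name mirrors the Java sketch; the parameters are otherwise invented):

```python
import csv
import io

def search_relative(fileobj, search_value, search_col, return_col):
    """Scan rows lazily; return the return_col cell of the first matching row."""
    for row in csv.reader(fileobj):
        if len(row) > max(search_col, return_col) and row[search_col] == search_value:
            return row[return_col]
    return None

data = io.StringIO("Bob,1980-01-05,Engineer\nAlice,1975-03-02,Manager\n")
assert search_relative(data, "Alice", 0, 2) == "Manager"
```

If Bob is on line 5 of a 100K-line file, this touches five lines; the high-level `load()`/`sortRowsByColumn()` style above would have parsed all 100K first.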
I'd agree with your philosophy but if it was me I'd probably call the class CsvFile and have a Parse method in addition to the Scan one. In OO programming it's always desirable to make your classes represent "things" (nouns in English). That aside, if I was asked to maintain your code I'd grasp what a CsvParser class is likely to be doing, whereas MyFileScan would send me into fits of rage and cause me to have to read the code to work it out.
OO Style - Simple question
[ "", "c#", "oop", "" ]
At first when I saw the upcoming C++0x standard I was delighted, and not that I'm pessimistic, but when thinking of it now I feel somewhat less hopeful. Mainly because of three reasons:

1. a lot of boost bloat (which must cause hopeless compile times?),
2. the syntax seems lengthy (not as Pythonic as I initially might have hoped), and
3. I'm very interested in portability and other platforms (iPhone, Xbox, Wii, Mac); isn't there a very real risk that the "standard" will take long to get portable enough?

I suppose #3 is less of a risk, given the lessons learned from templates in the previous decade; however, the devil's in the details. **Edit 2 (trying to be less whimsical):** Would you say it's safe for a company to transition to C++0x in the first effective year of the standard, or will that be associated with great risk?
> Edit: do I (and others like me) have to keep a very close eye on build times, unreadable code and lack of portability and do massive prototyping to ensure that it's safe to move on with the new standard? Yes. But you have to do all these things with the *current* standard as well. I don't see that it is getting any worse with C++0x. C++ build times have *always* sucked. There's no reason why C++0x should be slower than it is today, though. As always, you only include the headers you need. And each header has not grown noticeably bigger, as far as I can tell. Of course Concepts was one of the big unknowns here, and it was feared that they would slow down compile-times dramatically. Which was one of the many reasons why they were cut. C++ easily becomes unreadable if you're not careful. Again, nothing new there. And again, C++0x offers a lot of tools to help minimize this problem. Lambdas aren't quite as concise as in, say, Python or SML, but they're a hell of a lot more readable than the functors we're having to define *today*. As for portability, C++ is a minefield already. There are no guarantees given for integer type sizes, nor for string encodings. In both cases, C++0x offers the tools to fix this (with Unicode-specific char types, and integers of a guaranteed fixed size) The upcoming standard nails down a number of issues that currently hinder portability. So overall, yes, the issues you mention are real. They exist today, and they will exist in C++0x. But as far as I can see, C++0x lessens the impact of these problems. It won't make them worse. You're right, it'll take a while for compliant standards to become available on all platforms. But I think it'll be a quicker process than it was with C++98. All the major compiler vendors seem very keen on C++0x support, which wasn't really the case last time around. 
(probably because back then, it was mostly a matter of adjusting and fixing the pre-standard features they already implemented, so it was easier to claim that your pre-standard compiler was "sort of almost nearly C++98-compliant".) I think on the whole, the C++ community is much more standard-focused and forward-looking than a decade ago. If you want to sell your compiler, you're going to have to take C++0x seriously. But there's definitely going to be a period of several years from when the standard is released until fully (or mostly) compliant compilers are available.
1. You pay for only what you use. If you don't need a complex template feature, don't #include the headers it's defined in, and you won't have to deal with it. 2. Lambda functions should reduce the verbosity of STL algorithms a good bit; and `auto` variables will help with code like `std::map<foo, std::shared_ptr<std::vector<bar> > >::const_iterator`... 3. Yes, it will take a while. Many of the new features are indeed in boost, and if you want portability that's what you should be using for at least a few years after the standard is implemented. Fortunately there are only two compilers that cover those platforms you mentioned: g++ and Microsoft's C++ compiler. Once they get support, it's just a matter of time before the embedded toolchains get rebuilt with the new versions. Unfortunately, possibly a lot of time...
C++0x noise, bloat and portability
[ "", "c++", "c++11", "portability", "" ]
I want to use a query like the following, and I am looking for exact information (or a link) on escaping strings. BookTitle is NVARCHAR(200). SELECT \* FROM Books WHERE BookTitle IN ('Mars and Venus', 'Stack''s Overflow \r\n') Question: Does only "'" need to be escaped, or does \r\n need to be escaped as well? The MySql .Net provider exposes a method to escape string values; is there any such function in the Sql Server .Net provider? I probably need the C# equivalent escaping for string values. I am aware of parameterized commands, but my IN clause contains from 20 to 50 values, so running a separate SELECT for each value of BookTitle would cost too much in server-to-client communication. Running a single query and returning all the results in one call saves network resources.
There are more things that have to be escaped than just quotes or new line characters. What if there's a binary input (by hacker)? Better use PreparedStatement (in java) or any other equivalent in the target language. Java sample: ``` PreparedStatement ps = con.prepareStatement("SELECT * FROM Books WHERE BookTitle IN (?, ?)"); ps.setString(1, "Mars and Venus"); ps.setString(2, "Stack's Overflow and "); ResultSet rs = ps.executeQuery(); .... ```
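A hedged sketch of the same parameterized IN idea in Python, using the stdlib sqlite3 module as a stand-in for SQL Server: generate one placeholder per value so the driver handles all escaping, and the whole list still goes over in a single query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (BookTitle TEXT)")
conn.executemany(
    "INSERT INTO Books VALUES (?)",
    [("Mars and Venus",), ("Stack's Overflow",), ("Unrelated Title",)],
)

titles = ["Mars and Venus", "Stack's Overflow"]
placeholders = ", ".join("?" for _ in titles)  # one ? per IN value
rows = conn.execute(
    f"SELECT BookTitle FROM Books WHERE BookTitle IN ({placeholders})", titles
).fetchall()
```

Only the placeholder list is built from code; the values themselves never touch the SQL string, so quotes and control characters in titles need no manual escaping.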
SQL Server won't recognise the `\r\n` sequence, whether it's escaped or not. You'll need to do something like this instead if you want to match the `\r\n` in `BookTitle`: ``` -- \r = CHAR(13) -- \n = CHAR(10) SELECT * FROM Books WHERE BookTitle IN ('Mars and Venus', 'Stack''s Overflow ' + CHAR(13) + CHAR(10)) ```
Correct String Escaping for T-SQL string literals
[ "", "c#", "sql-server", "t-sql", "" ]
Is there any real case where getUTCFullYear() differs from getFullYear() in javascript? The same goes for:

- getUTCMonth() vs getMonth()
- getUTCDate() vs getDate()

Am I missing something here? EDIT: See [getUTCFullYear() documentation](http://docs.sun.com/source/816-6408-10/date.htm#1193940). **Is there any real case where getUTCFullYear() differs from getFullYear() in javascript?**
Yes, around New Year. For example, if your time zone is UTC-12, then while it is between midnight and noon UTC on 1 January, the local date is still 31 December of the previous year, so getFullYear() and getUTCFullYear() return different values.
The getUTC...() methods return the date and time in the UTC timezone, while the other functions return the date and time in the local timezone of the computer that the script is running on. Sometimes it's convenient to have the date and time in the local timezone, and sometimes it's convenient to have the date and time in a timezone that's independent of the computer's local timezone. There are ways to convert between timezones in JavaScript, but they're cumbersome. So, I guess it's just for convenience.
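The New Year edge case is easy to reproduce outside JavaScript; Python's datetime module draws the same local-versus-UTC distinction, so it can stand in for a demonstration (the UTC-12 offset is chosen only to make the difference obvious):

```python
from datetime import datetime, timezone, timedelta

tz_minus_12 = timezone(timedelta(hours=-12))

# An instant a couple of hours into New Year's Day, UTC
instant = datetime(2009, 1, 1, 2, 0, tzinfo=timezone.utc)
local = instant.astimezone(tz_minus_12)

assert instant.year == 2009   # the "UTC full year"
assert local.year == 2008     # the local year is still the old one
```

The same split applies to month and day near any month or day boundary, which is why the UTC variants of getMonth() and getDate() exist as well.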
What is the need for the javascript function getUTCFullYear()?
[ "", "javascript", "utc", "" ]
Could anyone please tell me the meaning of **API** in following paragraph, that's actually about **Transfer Object**: > If it's likely that a business service > might be asked to send or receive all > or most of its data in a big, > coarse-grained message, it's common > for that service to provide that > feature in its **API**. Thanks in advance.
"Application Programming Interface" - a set of functions that a programmer uses to communicate with a piece of software or a service.
API = Application Programming Interface. It is your formal statement of the programming interface you offer to other components. If you are a serious service provider then you pay careful attention to the design of your API. The use of DTOs is often very appropriate in the provision of a good interface. [Wikipedia Link](http://en.wikipedia.org/wiki/Application_programming_interface)
The meaning of API
[ "", "java", "api", "data-transfer-objects", "" ]
I have defined a C# enum as ``` public enum ORDER { ... unknown, partial01, partial12, partial23, } ``` and can use its value as a string as in: ``` string ss = ORDER.partial01.ToString(); ``` However when I try to use it in a case statement it fails to compile: ``` string value = ... switch (value) { case null: break; case "s": // OK break; case ORDER.partial01.ToString(): // compiler throws "a constant value is expected" break; ... ``` I thought enums were constants. How do I get around this? (I cannot parse the value into an enum as some of the values are outside the range)
The enum is a constant, but the result of .ToString() is not. As far as the compiler is concerned, it is a dynamic value. You probably need to convert your switch case into a series of if/else statements
Since C# 6, you can use: `case nameof(SomeEnum.SomeValue):` Nameof is evaluated at compile time, simply to a string that matches the (unqualified) name of the given variable, type, or member. Naturally, it changes right along should you ever rename the enum option name.
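For comparison, the analogous string-to-member dispatch in Python's enum module sidesteps the constant-expression restriction entirely, because member names can be looked up at run time (a hedged sketch, not a C# equivalent):

```python
from enum import Enum

class Order(Enum):
    unknown = 0
    partial01 = 1
    partial12 = 2
    partial23 = 3

def handle(value):
    # Match the incoming string against the enum's member names
    if value in Order.__members__:
        return Order[value]
    return None  # values outside the enum fall through, as in the question

assert handle("partial01") is Order.partial01
assert handle("s") is None
```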
How can I use the string value of a C# enum value in a case statement?
[ "", "c#", "string", "enums", "" ]
Why would anybody use the "standard" random number generator from [System.Random](http://msdn.microsoft.com/en-us/library/system.random.aspx) at all instead of always using the cryptographically secure random number generator from [System.Security.Cryptography.RandomNumberGenerator](http://msdn.microsoft.com/en-us/library/system.security.cryptography.randomnumbergenerator.aspx) (or its subclasses because RandomNumberGenerator is abstract)? Nate Lawson tells us in his Google Tech Talk presentation "[Crypto Strikes Back](https://youtu.be/ySQl0NhW1J0?t=788)" at minute 13:11 not to use the "standard" random number generators from Python, Java and C# and to instead use the cryptographically secure version. I know the difference between the two versions of random number generators (see [question 101337](https://stackoverflow.com/questions/101337/what-is-the-difference-between-a-randomly-generated-number-and-secure-randomly-ge)). But what rationale is there to not always use the secure random number generator? Why use System.Random at all? Performance perhaps?
Speed and intent. If you're generating a random number and have no need for security, why use a slow crypto function? You don't need security, so why make someone else think that the number may be used for something secure when it won't be?
Apart from the speed and the more useful interface (`NextDouble()` etc) it is also possible to make a repeatable random sequence by using a fixed seed value. That is quite useful, amongst others during Testing. ``` Random gen1 = new Random(); // auto seeded by the clock Random gen2 = new Random(0); // Next(10) always yields 7,8,7,5,2,.... ```
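Python's random module makes the same trade-off: a fixed seed gives a repeatable sequence, which is exactly the testing convenience described above. The sketch below asserts only repeatability, not any particular values, since the concrete sequence depends on the implementation:

```python
import random

gen1 = random.Random(42)       # fixed seed, like new Random(0) in C#
gen2 = random.Random(42)       # a second generator with the same seed

seq1 = [gen1.randrange(10) for _ in range(5)]
seq2 = [gen2.randrange(10) for _ in range(5)]

assert seq1 == seq2            # same seed -> same sequence, handy in tests
```

A cryptographic source (`secrets` in Python, RandomNumberGenerator in .NET) deliberately offers no such seeding, which is one more reason the non-secure generator stays useful.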
Why use the C# class System.Random at all instead of System.Security.Cryptography.RandomNumberGenerator?
[ "", "c#", ".net", "cryptography", "random", "" ]
When using a list comprehension, is the order of the new list guaranteed in any way? As a contrived example, is the following behavior guaranteed by the definition of a list comprehension: ``` >> a = [x for x in [1,2,3]] >> a [1, 2, 3] ``` Equally, is the following equality guaranteed: ``` >> lroot = [1, 2, 3] >> la = [x for x in lroot] >> lb = [] >> for x in lroot: lb.append(x) >> lb == la True ``` Specifically, it's the ordering I'm interested in here.
Yes, the list comprehension preserves the order of the original iterable (if there is one). If the original iterable is ordered (list, tuple, file, etc.), that's the order you'll get in the result. If your iterable is unordered (set, dict, etc.), there are no guarantees about the order of the items.
Yes, a list is a sequence. Sequence order is significant.
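Both guarantees are easy to check directly; order is preserved when the source is ordered, and only membership (not order) is guaranteed when it is not:

```python
lroot = [1, 2, 3]
la = [x * 10 for x in lroot]        # comprehension
lb = []
for x in lroot:                     # equivalent explicit loop
    lb.append(x * 10)
assert la == lb == [10, 20, 30]

# An unordered source (a set) guarantees membership only, not order:
s = {"a", "b", "c"}
assert sorted(c for c in s) == ["a", "b", "c"]
```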
Is the order of results coming from a list comprehension guaranteed?
[ "", "python", "list-comprehension", "" ]
I have a console application that's calling a web service method. In the web service method, I am using `System.Console.WriteLine`, but its getting ignored. How can I give feedback to the console application from within my web service method?
An alternative is to have the web service 'tell' the console app what it's doing; to keep a running log.
You can't because `System.Console.WriteLine` will write to the server console, whereas you want output on the client console. You can use a logging framework like the one in the [SixPack library](http://code.google.com/p/sixpack-library/), or [Log4Net](http://logging.apache.org/log4net/index.html) to write logging messages to a file for example. In order to be able to write on the client console, you need to put your `Console.WriteLine` statements in the client application.
Can Console Output in a Web Service Operation be Visible on the Client Console?
[ "", "c#", "web-services", "console", "" ]
I have an ASP.Net HTTPHandler that gets POSTed from a ColdFusion web page whose FORM looks something like: ``` <form name="sendToHandler" action="http://johnxp/FileServiceDemo2005/UploadHandler.ashx" method="post"> <input type="hidden" name="b64fileName" value="fileservice.asmx.xml" /> <input type="hidden" name="strDocument" value="Document" /> <input type="submit" name="submitbtn" value="Submit" /> ``` What is the best way for this .Net Handler to return a string to the POSTing ColdFusion page? EDIT update Aug 14, 2009: The solution I came up in my .ashx file involves saving the URL of the .cfm file that POSTed my handler and appending a querystring with the result string(s) that I want to communicate back to ColdFusion. My CF colleague uses the presence or absence of this querystring data to format the .cfm webpage accordingly: ``` public void ProcessRequest(HttpContext context) { string returnURL = context.Request.ServerVariables["HTTP_REFERER"]; // posting CFM page string message = UploadFile(context); // handles all the work of uploading a file StringBuilder msgReturn = new StringBuilder(returnURL); msgReturn.Append("?n="); msgReturn.Append(HttpUtility.UrlEncode(TRIMrecNumAssigned)); msgReturn.Append("&m="); // this is just a msg with performance data about the upload operation (elapsed time, size of file, etc.) msgReturn.Append(HttpUtility.UrlEncode(message)); context.Response.Redirect(msgReturn.ToString()); } ```
Just write the string directly to the response object in your ProcessRequest method. ``` public void ProcessRequest(System.Web.HttpContext context) { context.Response.Write(mystring); } ```
You can generate JSON from your HTTP Handler and use `jquery.post` to submit form data and get results in the ColdFusion page.
Best way to return a string from an HTTPHandler to a page that POSTs this .ashx file
[ "", "c#", "asp.net", "post", "coldfusion", "httphandler", "" ]
**Rewritten:** I could use some input, suggestions, and samples from some other minds on building a databindable collection. The collection needs to provide a databindable, **editable** tree of items, but with a small twist: the items need to be one of two types, with each type providing slightly different characteristics. The two types of items are Folder and TreeItem. A Folder contains it's own list of items (again, either of Folder or TreeItem type) and a TreeItem does not contain a list. My current approach is fairly close, but feels crufty. Essentially I have an abstract base class, TreeItemBase, which (in a round-about way) inherits from BindableList. Then I have two concrete derived types, Folder and TreeItem which both inherit from the abstract base class. The obvious flaw is that the TreeItem, which can't contain childitems, still inherits from BindingList; so it's up to some ugly hackery to pretend that it's not a collection. Is BindingList<> a poor choice to base this collection on? Some of the [DataBinding interfaces](http://msdn.microsoft.com/en-us/library/41e17s4b(loband).aspx) sound like they offer a higher degree of control over the databinding, but I haven't found one that's quite right. My *ideal* though is to provide a custom implementation that lets me control how databinding walks the collection and can inspect the concrete type of each element to determine if it contains a collection, or if it's a terminus in the tree. Here's a quick cut of the XML to help visualize what I'm trying to represent; it's easy for me to cleanly code the structure and rules in XSD--but I'm just having a hard time translating it to .NET and supporting databinding. 
``` <Items> <TreeItem xsi:type="Folder" name="Root"> <TreeItem xsi:type="Folder" name="Sub1"> <TreeItem xsi:type="TreeItem" name="Humm"/> </TreeItem> <TreeItem xsi:type="TreeItem" name="Bleh"/> <TreeItem xsi:type="Folder" name="Sub2"> <TreeItem xsi:type="TreeItem" name="Boo!"/> </TreeItem> </TreeItem> </Items> ``` **Update:** I've been working more on my approach and have come close to what I'd like to do using an interface rather than a base-class for the items, but have hit a snag. The issue I've run into is covered on a [seperate question](https://stackoverflow.com/questions/1295000/net-winforms-databinding-bindinglistibindableitem-where-some-implementations). Ideally I'd like to use the abstract base-class approach so that the XML that's generated considers `Folder` and `TreeItem` to be complexTypes (without manual control of the serialization), but it's a negligible requirement.
Maybe my depth of knowledge isn't big enough for this question, but couldn't you do this: Have an interface, and 2 classes that implement the interface. ``` interface ITreeItem { IEnumerable<ITreeItem> GetChildren(); } class MyFolder : ITreeItem { public IEnumerable<ITreeItem> GetChildren() { // TODO: Return the list of children } } class MyTreeItem : ITreeItem { public IEnumerable<ITreeItem> GetChildren() { // TODO: Return the list of children } } ``` Then if your goal is to databind the collection to some list, you should be able to do so with the IEnumerable collections. In each call to databind the collections, you should be able to check to see which type the item is: ``` foreach (var node in root.GetChildren()) { if (node is MyFolder) { var folder = (MyFolder)node; // Bind fields from the folder object } else if(node is MyTreeItem) { var item = (MyTreeItem)node; // Bind fields from the tree item object } } ``` I did something similar (I think) to this when I had a list nested inside another list. To display the data, I set up nested ListView controls. Sorry if this isn't what you're looking for, but hope it helps!
SkippyFire's solution seems more elegant than mine, but I figured that I will show you how I solved the problem. The following solution shows what I did to build a collection that can be bound to a tree view, and you can determine which items have been selected. It does not implement any Bindable Lists or anything though. However, it is not clear from your post whether this is what you want. This is an example of my XML file: ``` <controls name="Parent" attribute="element"> <control name="Files" attribute="element"> <control name="Cashflow" attribute="element"> <control name="Upload" attribute="leaf"/> <control name="Download" attribute="leaf"/> </control> </control> <control name="Quotes" attribute="element"> <control name="Client Quotes" attribute="leaf"/> </control> </controls> ``` Then I have a class that represents each item. It contains a Name, a List of child nodes (1 level down), a reference to its parent, and a string that logs the attribute of the element. ``` public class Items { public string Name { get; set; } public List<Items> SubCategories { get; set; } public string IsLeaf { get; set; } public string Parent { get; set; } } ``` From there, I populate a list of Items as follows: ``` List<Items> categories = new List<Items>(); XDocument categoriesXML = XDocument.Load("TreeviewControls.xml"); categories = this.GetCategories(categoriesXML.Element("controls")); ``` This calls the GetCategories() method ``` private List<Items> GetCategories(XElement element) { return (from category in element.Elements("control") select new Items() { Parent = element.Attribute("name").Value, Name = category.Attribute("name").Value, SubCategories = this.GetCategories(category), IsLeaf = category.Attribute("attribute").Value }).ToList(); } ``` After the categories variable has been populated, I just assign the list as the treeview's ItemSource. 
``` controltree.ItemsSource = categories; ``` And from there, if the choice changes in the tree, I check if the choice is a leaf node, and if so, I raise an event. ``` private void Selection_Changed(object sender, RoutedEventArgs e) { Items x = controltree.SelectedItem as Items; if (x.IsLeaf.Equals("leaf")) _parent.RaiseChange(x.Parent+","+x.Name); } ``` This solution works for any depth in the tree as well.
.NET Databinding - Custom DataSource for a recursive tree of folders and items
[ "c#", ".net", "data-binding", "tree" ]
Where can I get the technical manuals/details of how Django internals work? I.e., I would like to know, when a request comes in from a client: * which Django function receives it? * what middleware gets called? * how is the request object created? and what class/function creates it? * what function maps the request to the necessary view? * how does your code/view get called? etc... Paul.G
Easiest way to understand the internals of django, is by reading a book specifically written for that. Read [Pro Django](https://rads.stackoverflow.com/amzn/click/com/1430210478). It provides you a good in depth understanding of the meta programming first and demonstrates how it is used in django models, to create them dynamically. It deals similarly with many other python concepts and how django uses it.
Besides reading the source, here are a few articles I've tagged and bookmarked from a little while ago: * [How Django processes a request](http://www.b-list.org/weblog/2006/jun/13/how-django-processes-request/) * [Django Request Response processing](http://uswaretech.com/blog/2009/06/django-request-response-processing/) * [Django internals: authentication](http://jacobian.org/writing/django-internals-authen/) * [How the Heck do Django Models Work](http://lazypython.blogspot.com/2008/11/how-heck-do-django-models-work.html) I've found James Bennett's [blog](http://www.b-list.org) to be a great source for information about Django's workings. His book, [Practical Django Projects](http://apress.com/book/view/1590599969), is also a must-read -- though it isn't focused on internals, you'll still learn about how Django works.
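The articles above describe the middleware "onion" that wraps every view call. As a toy illustration only — these names are made up for the sketch and are not Django's real internals — the request flow can be boiled down to "run request middleware, resolve a view from the URL, call it":

```python
# Toy sketch of Django-style request processing (hypothetical names,
# NOT Django's actual code): middleware runs before the view, and any
# middleware may short-circuit by returning a response itself.
class Request:
    def __init__(self, path):
        self.path = path
        self.meta = {}

def auth_middleware(request):
    # stand-in for something like AuthenticationMiddleware
    request.meta["user"] = "anonymous"
    return None  # None means "keep going down the chain"

def resolve(path, urlpatterns):
    # crude stand-in for Django's URL resolver
    for prefix, view in urlpatterns:
        if path.startswith(prefix):
            return view
    raise LookupError("404: no view matches %s" % path)

def handle(request, urlpatterns, request_middleware):
    for mw in request_middleware:      # process_request phase
        response = mw(request)
        if response is not None:       # middleware short-circuited
            return response
    view = resolve(request.path, urlpatterns)
    return view(request)               # finally, your view is called

def index(request):
    return "hello %s" % request.meta["user"]

response = handle(Request("/home"), [("/home", index)], [auth_middleware])
```

The real handler also runs view/response/exception middleware phases and builds the `Request` from the WSGI environ, but the shape is the same.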
Where can I get technical information on how the internals of Django work?
[ "python", "django" ]
Python has syntactically sweet list comprehensions: ``` S = [x**2 for x in range(10)] print S [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] ``` In PHP I would need to do some looping: ``` $out = array(); $Nums = range(0,9); foreach ($Nums as $num) { $out[] = $num*$num; } print_r($out); ``` to get: Array ( [0] => 0 [1] => 1 [2] => 4 [3] => 9 [4] => 16 [5] => 25 [6] => 36 [7] => 49 [8] => 64 [9] => 81 ) Is there any way to get a similar list comprehension syntax in PHP? Is there any way to do it with any of the new features in PHP 5.3? Thanks!
Maybe something like this? ``` $out=array_map(function($x) {return $x*$x;}, range(0, 9)) ``` This will work in PHP 5.3+, in an older version you'd have to define the callback for [array\_map](http://php.net/array_map) separately ``` function sq($x) {return $x*$x;} $out=array_map('sq', range(0, 9)); ```
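For reference, the `array_map`-over-`range` idiom is the direct analogue of mapping a function over a sequence on the Python side of the comparison — a quick sanity check (shown in Python, since that's where the question starts) that the map form and the comprehension agree:

```python
# The comprehension and the map/lambda form (the analogue of PHP's
# array_map over range) produce the same list of squares.
squares_comp = [x ** 2 for x in range(10)]
squares_map = list(map(lambda x: x * x, range(10)))
```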
PHP 5.5 may support list comprehensions - see the mailing list announcement: * [[PHP-DEV] List comprehensions and generator expressions for PHP](http://markmail.org/thread/uvendztpe2rrwiif) (28 Jun 2012) And further discussion: * [What Generators Can Do For You](http://blog.ircmaxell.com/2012/07/what-generators-can-do-for-you.html) (by ircmaxell; 23 Jul 2012) - has a Fibonacci example. * [What PHP 5.5 might look like](http://nikic.github.com/2012/07/10/What-PHP-5-5-might-look-like.html#generators) (by NikiC; 10 Jul 2012) * [Request for Comments: Generators](https://wiki.php.net/rfc/generators) (Wiki started 05 Jun 2012)
Does PHP have an equivalent to Python's list comprehension syntax?
[ "php", "arrays", "list-comprehension", "higher-order-functions", "language-comparisons" ]
I have setup a jQuery UI modal dialog to display when a user clicks a link. There are two textboxes (I only show the code for 1 for brevity) in that dialog div tag and it is changed to be a jQuery UI DatePicker textbox that reacts on focus. The problem is that the jQuery UI dialog('open') somehow triggers the first textbox to have focus, which then triggers the datepicker calendar to open immediately. So I am looking for a way to prevent the focus from happening automatically. ``` <div><a id="lnkAddReservation" href="#">Add reservation</a></div> <div id="divNewReservation" style="display:none" title="Add reservation"> <table> <tr> <th><asp:Label AssociatedControlID="txtStartDate" runat="server" Text="Start date" /></th> <td> <asp:TextBox ID="txtStartDate" runat="server" CssClass="datepicker" /> </td> </tr> </table> <div> <asp:Button ID="btnAddReservation" runat="server" OnClick="btnAddReservation_Click" Text="Add reservation" /> </div> </div> <script type="text/javascript"> $(document).ready(function() { var dlg = $('#divNewReservation'); $('.datepicker').datepicker({ duration: '' }); dlg.dialog({ autoOpen:false, modal: true, width:400 }); $('#lnkAddReservation').click(function() { dlg.dialog('open'); return false; }); dlg.parent().appendTo(jQuery("form:first")); }); </script> ```
[jQuery UI 1.10.0 Changelog](http://jqueryui.com/changelog/1.10.0/) lists [ticket 4731](http://bugs.jqueryui.com/ticket/4731) as being fixed. Looks like focusSelector was not implemented, but a cascading search for various elements was used instead. From the ticket: > Extend autofocus, starting with [autofocus], then :tabbable content, then buttonpane, then close button, then dialog So, mark an element with the `autofocus` attribute and that is the element that should get the focus: ``` <input autofocus> ``` In [the documentation](http://api.jqueryui.com/dialog/), the focus logic is explained (just under the table of contents, under the title 'Focus'): > Upon opening a dialog, focus is automatically moved to the first item > that matches the following: > > 1. The first element within the dialog with the `autofocus` attribute > 2. The first `:tabbable` element within the dialog's content > 3. The first `:tabbable` element within the dialog's buttonpane > 4. The dialog's close button > 5. The dialog itself
Add a hidden span above it, use ui-helper-hidden-accessible to make it hidden by absolute positioning. I know you have that class because you are using dialog from jquery-ui and it's in jquery-ui. ``` <span class="ui-helper-hidden-accessible"><input type="text"/></span> ```
Prevent jQuery UI dialog from setting focus to first textbox
[ "javascript", "jquery", "jquery-ui" ]
``` if ($row['active'] == 1) echo '<a href="prof?id=$id">'.htmlspecialchars($row['username']).'</a>'; else echo htmlspecialchars($row['username']); ``` Could I write this shorter and cleaner somehow?
``` echo $row['active'] == 1 ? '<a href="prof?id=$id">'.htmlspecialchars($row['username']).'</a>' : htmlspecialchars($row['username']); ``` explained a little here <http://www.addedbytes.com/php/ternary-conditionals/>
I'm assuming you made a mistake putting the $id in a single-quoted string, and meant for PHP to put the value of $id in its place. ``` $name = htmlspecialchars($row['username']); if ($row['active'] == 1) { echo "<a href='prof?id=$id'>$name</a>"; } else { echo $name; } ```
Beginner if statement help
[ "php" ]
If I compile an application using Java 5 code into bytecode, **are the resulting .class files going to be able to run under Java 1.4?** If the latter can work and I'm trying to use a Java 5 framework in my Java 1.4 application, is there **anything I should watch out for?**
Nope! `.class` files are forward compatible only. Java 5 introduced new classfile attributes and format changes, to handle Varargs, enums, and generics. Java 4 would just fail when processing these. However, there is the unsupported -target jsr14 option on javac that generates JDK 1.4-compatible bytecode for some Java 5 language features. Also, there are projects (e.g. [Retroweaver](http://retroweaver.sourceforge.net/), [Retrotranslator](http://retrotranslator.sourceforge.net/)) that convert Java 5 classfiles into Java 4 files. EDIT: I found this good resource: [Using Java 5 language features in earlier JDKs](http://www.ibm.com/developerworks/java/library/j-jtp02277.html)
No. You can only move upwards - Java 1.4 bytecode will work on Java 5 runtimes. However, if you aren't using any functionality not found in Java 1.4, you can compile using the `-target` and `-source` [options of `javac`](http://java.sun.com/j2se/1.5.0/docs/tooldocs/solaris/javac.html#options). If you want to use Java 5 to write an application that can be run on Java 4, you can't use any [features that weren't present before Java 5](http://java.sun.com/developer/technicalArticles/releases/j2se15/).
Is Java bytecode compatible with different versions of Java?
[ "java", "jvm", "compatibility", "bytecode", "java1.4" ]
I'm a complete Python noob. How can I remove two characters from the beginning of each line in a file? I was trying something like this: ``` #!/Python26/ import re f = open('M:/file.txt') lines=f.readlines() i=0; for line in lines: line = line.strip() #do something here ```
You were off to a good start. Try this in your loop: ``` for line in lines: line = line[2:] # do something here ``` The [2:] is called "[slice](http://docs.python.org/tutorial/introduction.html#strings)" syntax; it essentially says "give me the part of this sequence which begins at index 2 and continues to the end" (since no end point was specified after the colon).
[String slicing](http://docs.python.org/tutorial/introduction.html#strings) will help you: ``` >>> a="Some very long string" >>> a[2:] 'me very long string' ```
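Putting the slicing back into the original file-reading loop, a self-contained sketch (using an in-memory file here in place of `M:/file.txt`, so the example can run anywhere):

```python
import io

# Stand-in for open('M:/file.txt'); each line starts with two
# characters we want to drop.
f = io.StringIO("ABfirst\nCDsecond\nEFthird\n")

trimmed = []
for line in f:
    line = line.rstrip("\n")   # drop the trailing newline only
    trimmed.append(line[2:])   # slice off the first two characters
```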
How to remove two chars from the beginning of a line
[ "python" ]
I used a BackgroundWorker to do some time-consuming tasks. ``` public void ConnectDataProvider() { bgw = new BackgroundWorker(); bgw.DoWork += new DoWorkEventHandler(bgw_DoWork); bgw.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bgw_RunWorkerCompleted); } ``` Another method starts the background worker: ``` public void StartPolling() { bgw.RunWorkerAsync(); } ``` Then I did the event handling: ``` void bgw_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { // do it over again StartPolling(); } void bgw_DoWork(object sender, DoWorkEventArgs e) { // do work WriteData(); } ``` As you can see, I started the worker over on completion. Now this works for a single BackgroundWorker. Now I want a collection, and each item should perform this task. However, with the concept above it will just keep on running the first worker started, as it starts the worker over. I'm thinking maybe a Timer in combination could solve the situation and give the other worker threads a chance to run. Is the BackgroundWorker still a good choice? Is it common to reuse the BackgroundWorker like I did? **EDIT 1:** To clarify: The problem I'm facing is that I need to manage the collection, each item with its own BackgroundWorker. I was thinking about a timer, to set off requests periodically from each item. This is where I'm stuck. **EDIT 2:** See my own answer, I didn't solve this issue, but found that I can go along with timers to get what I wanted. **EDIT 3:** To clarify (another try, I'm not good at that) what I wanted to achieve: I've got tracking objects, for GPS tracking. I want to track a whole bunch of them, so one object per tracking device. They all need to be polled frequently. I had a BackgroundWorker set up for a single test object. I liked the way the BackgroundWorker would tell me when it's done. But I couldn't get it working with all of the tracking objects. Now every tracking object has its own timer. 
This timer spawns a new thread and does the time consuming work (which I named DoWrite). No BackgroundWorker needed, as I dispose the timer and then create a new timer. That's all it does.
How many of these might there be? You should be careful of creating a 1:1 relationship between items in a collection and threads. Another coder might come along and expand this collection more than you planned. For things like this I usually like to use just the one thread, and a queue - so the events just put the work that needs to be done in a ConcurrentQueue, and the thread starts if not running, and churns through the queued work until it's out of things to do, and dies. The next time more work comes in the event will check if the thread is running and start it. It's cheaper because if there's a lot going on, you run just the one thread instead of stopping and starting a lot of them, or if there's very little going on, the one thread is almost never running.
Your solution would look more logical if you just put a while() loop in Dowork(), maybe with a little Sleep(). And then there would be no objection against running multiple Bgw's, although you may not see a great speed benefit.
BackgroundWorker used within collection items
[ "c#", "multithreading", "backgroundworker" ]
I've heard that coroutines are a good way to structure games (e.g., [PEP 342](http://www.python.org/dev/peps/pep-0342/): "Coroutines are a natural way of expressing many algorithms, such as simulations, games...") but I'm having a hard time wrapping my head around how this would actually be done. I see from this [article](http://www.ibm.com/developerworks/library/l-pygen.html) that coroutines can represent states in a state machine which transition to each other using a scheduler, but it's not clear to me how this applies to a game where the game state is changing based on moves from multiple players. Is there any simple example of a game written using coroutines available? Or can someone offer a sketch of how it might be done?
One way coroutines can be used in games is as lightweight threads in an actor-like model, like in [Kamaelia](http://www.kamaelia.org/Home). Each object in your game would be a Kamaelia 'component'. A component is an object that can pause execution by yielding when it's allowable to pause. These components also have a messaging system that allows them to safely communicate with each other asynchronously. All the objects would be concurrently doing their own thing, with messages sent to each other when interactions occur. So, it's not really specific to games, but anything where you have a multitude of communicating components acting concurrently could benefit from coroutines.
Coroutines allow for creating large amounts of very-lightweight "microthreads" with cooperative multitasking (i.e. microthreads suspending themselves willfully to allow other microthreads to run). Read up in Dave Beazley's [article](http://www.dabeaz.com/coroutines/) on this subject. Now, it's obvious how such microthreads can be useful for game programming. Consider a realtime strategy game, where you have dozens of units - each with a mind of its own. It may be a convenient abstraction for each unit's AI to run as such a microthread in your simulated multitasking environment. This is just one example, I'm sure there are more. The "coroutine game programming" search on Google seems to bring up interesting results.
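The microthread idea can be shown with nothing but plain generators: a round-robin scheduler drives several "unit AIs," each of which does one step of work per tick and then yields control back. This is an illustrative sketch only (the names and the two-unit scenario are invented for the example):

```python
# Each "unit" is a generator that does one step of work per tick and
# then yields, letting the scheduler run the other microthreads.
def patrol(name, steps, log):
    for i in range(steps):
        log.append("%s patrols (step %d)" % (name, i))
        yield  # cooperative hand-off back to the scheduler

def scheduler(tasks):
    while tasks:
        task = tasks.pop(0)        # round-robin: take the next task
        try:
            next(task)             # run it until its next yield
            tasks.append(task)     # still alive: requeue it
        except StopIteration:
            pass                   # this microthread finished

log = []
scheduler([patrol("archer", 2, log), patrol("knight", 3, log)])
```

Each unit appears to run "concurrently" even though there is only one real thread — which is exactly the multitasking model the articles above describe.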
Coroutines for game design?
[ "python", "coroutine" ]
I develop my project at ~/Dropbox/db/. I need to keep moving the folder to /var/www to be able to see the result in Firefox, because that is the only place where I have Apache. My file tree is at ~/Dropbox/db/ ``` . |-- handlers | |-- login.php | |-- question.php | |-- question_answer.php | `-- register.php |-- index.php |-- forms | |-- login.php | |-- question.php | |-- question_answer.php | `-- register.php ``` I have, for instance, the following reference in the file ~/Dropbox/db/forms/login.php: ``` include '../handlers/login.php'; ``` which is awful, since everything breaks after a while. For instance, I start Firefox at index.php. This means that all paths are relative to index.php, which makes the command above meaningless. Perhaps the only solution to the problem is to make all paths absolute in all my files. I have tried to avoid this because it would force me to use sed's replacement command. I would like to have a command similar to LaTeX's `\graphicspath` for forms and handlers, such that I can simply say in one file where to find the given files. I found out that PHP has the following function, which needs to be at the beginning of index.php. However, it does not seem to be enough, since I cannot extend it to forms and handlers, for instance. ``` $path_parts = pathinfo('/var/www/index.php'); ``` **How can you have a LaTeX-like PATH system for PHP?**
If you always want your includes to be relative to the folder the currently executing file resides in, you can use the following: ``` include(dirname(__FILE__) . '/../hello.php'); ``` So, if that code is in `x.php` and you include `x.php` from `y.php`, it doesn't matter in which directory `y.php` is since the include is always relative to the directory `x.php` is in. I personally think it's the most effortless way of achieving exactly what you want. You don't need any special configuration or setup, and all paths are "relative". From the [PHP Documentation](http://php.net/constants.predefined) > **`__FILE__`**: The full path and filename of the file. If used inside an include, the full path and filename of the included file is returned.
# Approach 1 Is `include_path` what you're looking for? You can set it either in php.ini, or in your apache vhost: ``` <VirtualHost *:80> ... php_value include_path ".:/usr/share/php:/usr/share/pear:/usr/local/share/php" ... </VirtualHost> ``` Not quite as nice as kpathsea (what Τεχ uses), but as close as you can get. I prefer this approach, as it makes it easy to keep your include files outside your document root (so you don't have to worry about someone executing them directly). # Approach 2 You can get your document root using `$_SERVER['DOCUMENT_ROOT']`. So you can use that to write a full include path.
To have a LaTeX-like PATH system for PHP development
[ "php", "latex", "path" ]
I have some C++ code, in which the following enum is declared: ``` enum Some { Some_Alpha = 0, Some_Beta, Some_Gamma, Some_Total }; int array[Some_Total]; ``` The values of Alpha, Beta and Gamma are sequential, and I gladly use the following cycle to iterate through them: ``` for ( int someNo = (int)Some_Alpha; someNo < (int)Some_Total; ++someNo ) {} ``` This cycle is ok, until I decide to change the order of the declarations in the enum, say, making Beta the first value and Alpha - the second one. That invalidates the cycle header, because now I have to iterate from Beta to Total. So, what are the best practices of iterating through enum? I want to iterate through all the values without changing the cycle headers every time. I can think of one solution: ``` enum Some { Some_Start = -1, Some_Alpha, ... Some_Total }; int array[Some_Total]; ``` and iterate from (Start + 1) to Total, but it seems ugly and I have never seen someone doing it in the code. Is there any well-known paradigm for iterating through the enum, or I just have to fix the order of the enum values? (let's pretend, I really have some awesome reasons for changing the order of the enum values)...
``` enum Some { Some_first_ = 0, Some_Alpha = Some_first_, .... Some_last_ }; ``` This way you can guarantee that first and last never change, no matter how the values in between are reordered.
You can define an `operator++()` for your `enum`. This has the advantage that it uses the well-known paradigm of the standard incrementation operators. `:)` Depending on whether your enums are contiguous, you can treat them as `int` or use a switch: ``` Some& operator++(Some& obj) { # if YOUR_ENUMS_ARE_CONTIGUOUS int i = obj; if( ++i > Some_Total ) i = Some_Alpha; return obj = static_cast<Some>(i); # else switch(obj) { case Some_Alpha : obj = Some_Beta; break; case Some_Beta : obj = Some_Gamma; break; case Some_Gamma : obj = Some_Total; break; case Some_Total : obj = Some_Alpha; break; default: assert(false); // changed enum but forgot to change operator } return obj; # endif } ``` Note that, if `operator++()` is defined, users will probably expect an `operator--()`, too.
Is there any well-known paradigm for iterating enum values?
[ "c++", "enums", "iteration" ]
I am developing an ecommerce website with CI that has product categories and products. I want to route the URL so that it will go to the products controller, then run the getCategoryByName function for the first segment, then run the getProductByName for the second segment. Here is what I have: ``` URL: products/docupen/rc805 ``` ``` routes.php: $route['products/([a-z]+)'] = "products/getCategoryByName/$1"; $route['products/([a-z]+)/([a-z0-9]+)'] = "products/$1/getProductByName/$2"; ``` But it's not working. "docupen" is the category, and "rc805" is the product. Thanks in advance. --- Thank you all for your help. This is what I ended up with for what I needed. ``` $route['products/:any/:num'] = "products/getProductByID"; $route['products/:any/:any'] = "products/getProductByName"; $route['products/:any'] = "products/getCategoryByName"; ```
My answer builds a bit on Colin's answer. When I played around with the routes in CodeIgniter I came to the conclusion that the order of the routes was important. When it finds the first valid route it won't do the other routes in the list. If it doesn't find any valid routes then it will handle the default route. My routes that I played around with for my particular project worked as follows: ``` $route['default_controller'] = "main"; $route['main/:any'] = "main"; $route['events/:any'] = "main/events"; $route['search/:any'] = "main/search"; $route['events'] = "main/events"; $route['search'] = "main/search"; $route[':any'] = "main"; ``` If I entered "<http://localhost/index.php/search/1234/4321>" It would be routed to main/search and I can then use `$this->uri->segment(2);` to retrieve the `1234`. In your scenario I would try (order is very important): ``` $route['products/:any/:any'] = "products/getProductByName"; $route['products/:any'] = "products/getCategoryByName"; ``` I don't know enough to route the way you wanted (`products/$1/getProductByName/$2`), but I'm not sure how you would create a controller to handle this particular form of URI. Using the `$this->uri->segment(n);` statements as mentioned by Colin in your controller, you should be able to do what you want.
You should use the [URI class](http://codeigniter.com/user_guide/libraries/uri.html) to retrieve the "docupen" and "rc805" segments from your url. You can then use those values in whatever functions you need. For example, if your url is www.yoursite.com/products/docupen/rc805, you would use the following in your products controller to retrieve the segments and assign them to variables: ``` $category = $this->uri->segment(2); //docupen $product = $this->uri->segment(3); //rc805 ``` Then you can use $category and $product however you need to.
CodeIgniter Routing
[ "php", "regex", "http", "codeigniter", "routes" ]
I'm using Qt to parse an XML file which contains timestamps in UTC. Within the program, of course, I'd like them to change to local time. In the XML file, the timestamps look like this: "2009-07-30T00:32:00Z". Unfortunately, when using the QDateTime::fromString() method, these timestamps are interpreted as being in the local timezone. The hacky way to solve this is to add or subtract the correct timezone offset from this time to convert it to "true" local time. However, is there any way to make Qt realize that I am importing a UTC timestamp and then automatically convert it to local time?
Do it like this: ``` QDateTime timestamp = QDateTime::fromString(thestring, Qt::ISODate); timestamp.setTimeSpec(Qt::UTC); // mark the timestamp as UTC (but don't convert it) timestamp = timestamp.toLocalTime(); // convert to local time ``` (The `Qt::ISODate` format argument is needed so that a string like "2009-07-30T00:32:00Z" parses at all; the default `Qt::TextDate` format won't accept it.)
try using the [setTime\_t](http://doc.trolltech.com/4.5/qdatetime.html#setTime_t) function.
What is the correct way to handle timezones in datetimes input from a string in Qt
[ "c++", "qt", "datetime", "timezone" ]
I have a number of arrays, and I would like to write them to a text file in a specific format. For example, the 'present form': ``` a= [1 2 3 4 5 ] b= [ 1 2 3 4 5 6 7 8 ] c= [ 8 9 10 12 23 43 45 56 76 78] d= [ 1 2 3 4 5 6 7 8 45 56 76 78 12 23 43 ] ``` The 'required format' in a txt file: ``` a '\t' b '\t' d '\t' c 1 '\t' 1 2 '\t' 2 3 '\t' 3 4 '\t' 4 5 '\t' 5 6 7 8 ``` `'\t'` = 1 tab space. The problem is, I have the arrays in linear form [a], [b], [c], and [d]; I have to transpose them into the 'required format' — columns reordered as [a], [b], [d], [c] — and write the result as a txt file.
Just for fun with no imports: ``` a= [1, 2, 3, 4, 5] b= [1, 2, 3, 4, 5, 6, 7, 8] c= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78] d= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43] fh = open("out.txt","w") # header line fh.write("a\tb\td\tc\n") # rest of file for i in map(lambda *row: [elem or "" for elem in row], *[a,b,d,c]): fh.write("\t".join(map(str,i))+"\n") fh.close() ```
``` from __future__ import with_statement import csv import itertools a= [1, 2, 3, 4, 5] b= [1, 2, 3, 4, 5, 6, 7, 8] c= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78] d= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43] with open('destination.txt', 'w') as f: cf = csv.writer(f, delimiter='\t') cf.writerow(['a', 'b', 'd', 'c']) # header cf.writerows(itertools.izip_longest(a, b, d, c)) ``` Results on `destination.txt` (*`<tab>`s* are in fact real tabs on the file): ``` a<tab>b<tab>d<tab>c 1<tab>1<tab>1<tab>8 2<tab>2<tab>2<tab>9 3<tab>3<tab>3<tab>10 4<tab>4<tab>4<tab>12 5<tab>5<tab>5<tab>23 <tab>6<tab>6<tab>43 <tab>7<tab>7<tab>45 <tab>8<tab>8<tab>56 <tab><tab>45<tab>76 <tab><tab>56<tab>78 <tab><tab>76<tab> <tab><tab>78<tab> <tab><tab>12<tab> <tab><tab>23<tab> <tab><tab>43<tab> ``` Here's the izip_longest function, if you have Python < 2.6 (note it has to take `fillvalue` via `**kwargs`, since a keyword-only argument after `*iterables` is not valid Python 2 syntax): ``` def izip_longest(*iterables, **kwargs): fillvalue = kwargs.get('fillvalue') def sentinel(counter=([fillvalue]*(len(iterables)-1)).pop): yield counter() fillers = itertools.repeat(fillvalue) iters = [itertools.chain(it, sentinel(), fillers) for it in iterables] try: for tup in itertools.izip(*iters): yield tup except IndexError: pass ```
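Both snippets above are written against Python 2 (`izip`, the `None`-padding behavior of multi-sequence `map`). On Python 3, the same transpose-and-pad step is a one-liner with `itertools.zip_longest` — a sketch assuming the same four lists, writing to an in-memory file in place of `destination.txt`:

```python
import io
import itertools

a = [1, 2, 3, 4, 5]
b = [1, 2, 3, 4, 5, 6, 7, 8]
c = [8, 9, 10, 12, 23, 43, 45, 56, 76, 78]
d = [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43]

out = io.StringIO()  # stand-in for open('destination.txt', 'w')
out.write("a\tb\td\tc\n")
# zip_longest pads the shorter columns; fillvalue="" leaves blanks
for row in itertools.zip_longest(a, b, d, c, fillvalue=""):
    out.write("\t".join(str(v) for v in row) + "\n")

lines = out.getvalue().splitlines()
```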
text file format from array
[ "python" ]
Assume I have an array/vector of numbers like 1,3,7,9, and I need to pick a number from this list randomly. Using the Random class in Java, it doesn't seem possible to do this directly. Could anyone kindly tell me a way to do this kind of thing? I have to change the list of numbers used to generate the random number. I am trying to implement a strategy to play the battleship game automatically as an assignment. Kindly help me to do this?
Put the numbers in an ArrayList and use [Collections.shuffle(arrayList);](http://java.sun.com/javase/6/docs/api/java/util/Collections.html#shuffle(java.util.List))
If you just want to select *one* random number only, or want to select multiple random numbers with reinsertion (i.e. allowing the possibility of selecting the same number multiple times), you can generate a random index: ``` List<Integer> lst = ....; int index = new Random().nextInt(lst.size()); Integer randomValue = lst.get(index); ``` You can use an array instead as well. This requires `O(1)` for each selection. If you need to select multiple distinct random numbers from the list, then using `Collections.shuffle()` and iterating through the list would be a better solution. This requires `O(n)` for all the queries.
How to generate a random number with Java from given list of numbers
[ "java", "arrays", "random" ]
I develop using Java 5 and 6 on Intellij IDEA 7 and 9M1. From time to time (during run/compile) my Mac (10.5.7) will die with a Bluescreen of death. Has anybody else had this issue and somehow managed to solve it? If so, how? Any suggestions other than running the IDE in a VM?
It could be specific to your hardware and/or software environment. I'd suggest performing a hardware test (faulty memory is the most likely cause). Java is a stress to the system, I've seen such situations before (other applications work fine, but Java is causing system crashes, in most cases such problems were related to memory, replacing the modules fixed them). For testing the memory you can use the [Memtest OS X](http://www.memtestosx.org/joomla/index.php). Another thing to try is Java for Mac OS X 10.5 Update 5 Developer Preview which is available via Apple Developer Connection.
I've heard people complaining about the latest MacOS Java updates in combination with IDEA. [Maybe check out this thread](http://www.jetbrains.net/devnet/thread/282406)
Bluescreen of death during Java development on a Leopard - any ideas how to solve this?
[ "java", "macos", "crash", "intellij-idea", "bsod" ]
First, I am throwing run time exceptions for all unrecoverable exceptions, this causes these exceptions to travel up to the container, where I currently use an error page (defined in web.xml). In this error page is a scriptlet that invokes the logger. The issue I am having with this is that the exception is no longer on the stack at this invocation. I have access to it from a request scope variable ("javax.servlet.error.message"). This string is the stack trace. I need this stack trace for logging purposes obviously, and on different app servers "javax.error\_message" can be turned off for security reasons....... So my question is, how can best log runtime exceptions from within Java EE apps without wrapping everything in this: ``` try {} catch (Exception e) {logger.log(...)} ``` ? I want some way to invoke the logger from the container maybe... right before the container catches the exception for example.
I found a solution. By adding a response filter and wrapping chain.doFilter(req, resp) like so: ``` try { chain.doFilter(req,resp); } catch (Exception e) { logger.error("", e); throw new RuntimeException(e); } ``` This works fine so far and isn't dependent on a particular framework or app server.
There is nothing I know of in Servlet API to accomplish this. However, you can do this in Tomcat with an instance listener. You can install a listener in context.xml like this, ``` <InstanceListener>myapp.MyListener</InstanceListener> ``` Tomcat fires `InstanceEvent.AFTER_SERVICE_EVENT` event right after the container catches the exception and before it throws the exception again. You can invoke the logger right there.
Catching every exception in Java EE web apps
[ "", "java", "logging", "jakarta-ee", "exception", "" ]
We use one big enum for message passing, and there are optional parts of our code base that use specific messages which really only need to be added to the enum if those parts are compiled in. Is there a way to define a macro that could do this, or something similar? I would like to be able to just add REGISTER_MSG(MESSAGE_NAME); to the beginning of a file in the optional code. I guess the real problem is that macros only replace the code at the location where they are written.
I would rethink this design, but if you really want to do it this way you can construct the enum header at compile time using Make. ``` BigEnum.h: # First put in everything up to the opening brace, cat BigEnumTop > $@ # then extract all of the message names and add them (with comma) to BigEnum.h, cat $(OPTIONAL_SOURCES) | grep REGISTER_MSG | sed 's/.*(\(.*\))/\1/' >> $@ # then put in the non-optional message names, closing brace, and so on. cat BigEnumBottom >> $@ ```
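If Make isn't available, the same extraction can be scripted in any language. Here is a rough sketch in Python of the grep/sed step from the Makefile rule above; the `REGISTER_MSG` pattern comes from the question, while the function names and the "top"/"bottom" strings are illustrative:

```python
import re

# Matches REGISTER_MSG(SOME_NAME) occurrences in the optional sources,
# mirroring the grep/sed pipeline in the Makefile rule above.
MSG_PATTERN = re.compile(r"REGISTER_MSG\(\s*(\w+)\s*\)")

def extract_messages(source_text):
    """Return the message names registered in one source file's text."""
    return MSG_PATTERN.findall(source_text)

def build_enum_header(top, optional_sources, bottom):
    """Assemble the enum header: fixed top, extracted names, fixed bottom."""
    names = []
    for text in optional_sources:
        names.extend(extract_messages(text))
    body = "".join("    %s,\n" % name for name in names)
    return top + body + bottom
```

A real build step would read the actual files and write `BigEnum.h`; the strings here stand in for file contents.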
Indeed, you can't have macros modify code in other compilation units, and the enum should be the same for all compilation units. (I probably wouldn't use an enum with automatically assigned values for identifying messages in a protocol, BTW.)
Is there a way to use macros to add additional values to an enum from elsewhere at compile time?
[ "", "c++", "enums", "macros", "" ]
I have this snippet

```
private void westButton_click(object sender, EventArgs ea)
{
    PlayerCharacter.Go(Direction.West);
}
```

repeated for North, South and East. How can I declare a function that'd let me generate methods like it programmatically? E.g., I'd like to be able to call

```
northButton.Click += GoMethod(Direction.North);
```

instead of defining the method and then

```
northButton.Click += new EventHandler(northButton_Click);
```
``` northButton.Click += (s, e) => GoMethod(Direction.North); ``` (or...) ``` northButton.Click += (s, e) => PlayerCharacter.Go(Direction.North); ```
``` northButton.Click += (s,e) => GoMethod(Direction.North); ```
Returning a function in C#
[ "", "c#", "first-class", "" ]
I've just started working with pygame and I'm trying to make a semi-transparent sprite, and the sprite's source file is a non-transparent bitmap file loaded from the disk. I don't want to edit the source image if I can help it. I'm sure there's a way to do this with pygame code, but Google is of no help to me.
I may have not been clear in my original question, but I think I figured it out on my own. What I was looking for turns out to be Surface's set_alpha() method, so all I had to do was make sure that translucent images were on their own surface. Here's an example with my stripped down code:

```
import pygame, os.path
from pygame.locals import *

class TranslucentSprite(pygame.sprite.Sprite):
    def __init__(self):
        pygame.sprite.Sprite.__init__(self, TranslucentSprite.container)
        self.image = pygame.image.load(os.path.join('data', 'image.bmp'))
        self.image = self.image.convert()
        self.image.set_colorkey(-1, RLEACCEL)
        self.rect = self.image.get_rect()
        self.rect.center = (320,240)

def main():
    pygame.init()
    screen = pygame.display.set_mode((640,480))
    background = pygame.Surface(screen.get_size())
    background = background.convert()
    background.fill((250,250,250))
    clock = pygame.time.Clock()

    transgroups = pygame.sprite.Group()
    TranslucentSprite.container = transgroups

    """Here's the Translucency Code"""
    transsurface = pygame.display.set_mode(screen.get_size())
    transsurface = transsurface.convert(screen)
    transsurface.fill((255,0,255))
    transsurface.set_colorkey((255,0,255))
    transsurface.set_alpha(50)

    TranslucentSprite()

    while 1:
        clock.tick(60)
        for event in pygame.event.get():
            if event.type == QUIT:
                return
            elif event.type == KEYDOWN and event.key == K_ESCAPE:
                return
        transgroups.draw(transsurface)
        screen.blit(background,(0,0))
        screen.blit(transsurface,(0,0))
        pygame.display.flip()

if __name__ == '__main__' : main()
```

Is this the best technique? It seems to be the most simple and straightforward.
After loading the image, you will need to enable an alpha channel on the `Surface`. That will look a little like this:

```
background = pygame.display.set_mode((640,480))
myimage = pygame.image.load("path/to/image.bmp").convert_alpha(background)
```

This will load the image and immediately convert it to a pixel format suitable for alpha blending onto the display surface. You could use some other surface if you need to blit to some other, off-screen buffer in another format. You can set per-pixel alpha simply enough; say you have a function which takes a 3-tuple for an rgb color value and returns some desired 4-tuple of rgba color+alpha, you could alter the surface per pixel:

```
def set_alphas(color):
    if color == (255,0,255): # magenta means clear
        return (0,0,0,0)
    if color == (0,255,255): # cyan means shadow
        return (0,0,0,128)
    r,g,b = color
    return (r,g,b,255) # otherwise use the solid color from the image.

for row in range(myimage.get_height()):
    for col in range(myimage.get_width()):
        myimage.set_at((col, row), set_alphas(myimage.get_at((col, row))[:3]))
```

There are other, more useful ways to do this, but this gives you the idea, I hope.
How to make translucent sprites in pygame
[ "", "python", "graphics", "sprite", "pygame", "" ]
Can I encrypt an assembly (using AES/DES) and deploy? I simply don't want people to use Reflector to view the code of my assembly.
Unless you are going to use reflection to load the assembly manually after it's been decrypted, no. It will still have to exist unencrypted in memory, and any memory dump will be able to get it. You can obfuscate it, which makes Reflector mostly useless. What's the real worry behind this?
Even if you could encrypt everything, since .NET uses [CIL](http://en.wikipedia.org/wiki/Common_Intermediate_Language), somewhere in the process it will become unencrypted, and from that point on it can be disassembled into source code.
Encrypt an assembly [c#]
[ "", "c#", "encryption", "" ]
Using PHP and MySQL, I have grabbed an array of Facebook user ids from Facebook. Now I want to find the corresponding username in my application for this array. Clearly, in my application the user table contains unique username and unique fb_uid values. My rudimentary understanding of programming led me to 2 ways: 1) use a loop and run through the array of fb_uid values, finding the usernames one by one, OR 2) create a monster query like select distinct(username) from users where fb_uid = value1 OR fb_uid = value2 ... So is there a better way out? Thank you.
Use SQL's `IN` operator instead: ``` select distinct(username) from users where fb_uid in (value1, value2, ...) ``` <http://www.w3schools.com/SQl/sql_in.asp>
If your list of fb_uids is big *(say, more than 100, 500, or 1000 ids)*, I wouldn't go with the "or" way: too many "or"s like this will hurt the DB, I think. But doing one query per id is not really good either... So what about a mix of those two ideas? Doing one query per, say, 50 or 100 fb_uids? And, instead of using lots of OR, you can go with IN; a bit like this:

```
select distinct(username) from users where fb_uid IN (id1, id2, id3, ...)
```

Not sure it'll change anything for the DB, but, at least, it's looking better ^^ Only thing is you shouldn't use too many ids in the IN; so, doing a couple of queries, with between 50 and 500 ids each time, might be the way to go. And to help you a bit further, you might want to take a look at [`array_slice`](http://php.net/array_slice), to extract "slices" of your fb_uids array...
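To make the batching idea concrete: the question is PHP (where `array_slice` would do the slicing), but the idea is language-neutral. Here is a sketch in Python, using the table and column names from the question; the placeholder style and batch size are illustrative:

```python
def chunks(ids, size):
    """Yield successive batches of at most `size` ids."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

def batched_queries(ids, batch_size=100):
    """Build one IN (...) query per batch, with placeholders for safe binding."""
    return [
        "SELECT DISTINCT username FROM users WHERE fb_uid IN (%s)"
        % ", ".join(["?"] * len(batch))
        for batch in chunks(ids, batch_size)
    ]
```

Each generated query would then be executed with its batch of ids bound to the placeholders.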
how to optimize mysql query if i have too many OR operators in my query?
[ "", "php", "mysql", "query-optimization", "" ]
I have a mobile application that is suffering from slow-down over time. My hunch (in part fed by [this article](http://developer.sonyericsson.com/site/global/techsupport/tipstrickscode/java/p_avoidingmemoryfragment.jsp)) is that this is due to fragmentation of memory slowing the app down, but I'm not sure. Here's a pretty graph of the app's memory use over time: [fraggle rock http://kupio.com/image-dump/fragmented.png](http://kupio.com/image-dump/fragmented.png) The 4 peaks on the graph are 4 executions of the exact same task on the app. I start the task, it allocates a bunch of memory, it sits for a bit (the flat line on top), and then I stop the task. At that point it calls System.gc(); and the memory gets cleaned up. As can be seen, each of the 4 runs of the exact same task takes longer to execute. The low points in the graph all return to the same level, so there do not seem to be any memory leaks between task runs. What I want to know is: is memory fragmentation a feasible explanation, or should I look elsewhere first, bearing in mind that I've already done a lot of looking? The low points on the graph are relatively low, so my assumption is that in this state the memory would not be very fragmented, since there can't be a lot of small memory holes to be causing problems. I don't know how the J2ME memory allocator works, though, so I really don't know. Can anyone advise? Has anyone else had problems with this and recognises the memory profile of the app?
If you've got a little bit of time, you could test your theory by reusing memory with memory-pool techniques: each run of the task uses the 'same' chunks of memory, getting them from the pool and returning them at release time. If you're still seeing the degrading performance after doing this investigation, it's not memory fragmentation causing the problem. Let us all know your results and we can help troubleshoot further.
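As an illustration of the pool idea (language-neutral; shown in Python rather than J2ME Java for brevity, with a hypothetical `BufferPool` class and `bytearray` standing in for whatever the task really allocates):

```python
class BufferPool:
    """Reuse fixed-size buffers instead of allocating fresh ones per task run."""

    def __init__(self, size):
        self.size = size
        self._free = []        # buffers ready for reuse
        self.allocations = 0   # how many real allocations happened

    def acquire(self):
        """Hand out a pooled buffer, allocating only when the pool is empty."""
        if self._free:
            return self._free.pop()
        self.allocations += 1
        return bytearray(self.size)  # stands in for the app's real allocation

    def release(self, buf):
        """Return a buffer to the pool instead of freeing it."""
        self._free.append(buf)
```

If the task acquires and releases through a pool like this, repeated runs touch the same memory blocks, so fragmentation can't accumulate between runs.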
Memory fragmentation would account for it... What is not clear is whether the app's use of memory is causing paging; this would also slow things up... and could cause the same issues.
What does fragmented memory look like?
[ "", "java", "memory", "memory-management", "java-me", "memory-fragmentation", "" ]
What I want to do is to force Maven to download the 'maven-metadata.xml' for each artifact that I have in my local repository. The default Maven behaviour is to download only metadata from remote repositories (see [this question](https://stackoverflow.com/questions/1158131/how-does-maven2-know-where-to-find-plugins/1158210#1158210)). **Why I want to do that:** Currently I have a remote repository running in a build machine. By remote repository I mean a directory located in the build machine that contains all dependencies that I need to build my Maven projects. Note that I'm not using a repository manager like Nexus, the repository is just a copy of a local repository that I have uploaded to my build machine. However, since my local repository did not contain the 'maven-metadata.xml' files, these metadata files are also missing in the build machine repository. If I could retrieve the metadata files from the central repository, then it would be possible to upload a working remote repository to my build machine.
You don't want to get the metadata from the public repositories, it will contain all the versions available of a given artifact, whereas your repository will have some subset of the releases. It's worth pointing out that you really would be better off with a repository manager. The following steps allow you to generate your artifact metadata once. But if your needs change, you'll have to repeat the process or update the files manually, so why not just use a manager? Nexus can run standalone and has a very small footprint. --- Even if you're not planning on using Nexus for a repository manager, you can still use it to generate your metadata. 1. First [install](http://www.sonatype.com/books/nexus-book/reference/install-sect-install.html) Nexus. 2. Locate the nexus work directory (by default ${user.home}/sonatype-work. 3. Copy your local repository contents to the nexus-work/releases sub-directory. 4. [Start Nexus](http://www.sonatype.com/books/nexus-book/reference/install-sect-running.html) and connect to the Nexus home page in the browser (by default <http://localhost:8081/nexus>) 5. Log in using the admin account (password admin123) 6. Click on the *repositories* link on the left hand side. 7. Select the Releases repository, right-click and click *Rebuild Metadata* In a few moments you should have the metadata generated for all the artifacts. You can then copy them to wherever you need them to be and uninstall Nexus.
The default repositories are defined in the [super pom.xml](http://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Super_POM) that all poms inherit from by default. If by local you mean you want to only use ~/.m2/repos/\* then work in offline mode. Add `<offline>true</offline>` to your `settings.xml` If by local you mean your *local server*, you could install a repository manager like Nexus, modify your settings file to use nexus under "mirrors" like this: ``` <mirror> <id>central-proxy</id> <mirrorOf>central</mirrorOf> <url>my/local/nexus/server</url> </mirror> ``` And disable remote repositories you don't want in Nexus.
How to force Maven to download maven-metadata.xml from the central repository?
[ "", "java", "maven-2", "maven-metadata", "" ]
Program "" has more than one entry point defined: 'Class.Main()'. Compile with /main to specify the type that contains the entry point. I have searched and searched, and have only found the syntax to specify the class of the entry point, (/main:class) but not the type. Can anyone help? ``` static void Main() { } static void Main(string[] args) { } ```
You can't do this, basically. You can only specify that a *type* is an entry point, not which Main overload *within a type* should be the entry point. You could create a nested class containing one of them, if you want to keep the code within the same outer type: ``` using System; using System.IO; using System.Text.RegularExpressions; class Test { class Parameterless { static void Main() { } } static void Main(string[] args){} } ``` You then need to use either `/main:Test` or `/main:Test.Parameterless` depending on which one you want to call, or use the Application Entry Point in the project properties in Visual Studio.
I don't believe it's possible to overload Main, for exactly that reason: there can be only one entry point into your program! What the "/main" switch allows you to do is specify the type (i.e. the class or struct) that contains the main entry point, not the signature (i.e. which of the overloads), so the compiler is still left with ambiguity.
C# Compiler not recognizing overloaded Main
[ "", "c#", ".net", "" ]
So basically I have this relatively long stored procedure. The basic execution flow is that it `SELECTS INTO` some data into temp tables declared with the `#` sign, and then runs a cursor through these tables to generate a 'running total' into a third temp table which is created using `CREATE`. Then this resulting temp table is joined with other tables in the DB to generate the result after some grouping etc. The problem is, this SP had been running fine until now, returning results in 1-2 minutes. And now, suddenly, it's taking 12-15 minutes. If I extract the query from the SP and execute it in Management Studio, manually setting the same parameters, it returns results in 1-2 minutes, but the SP takes very long. Any idea what could be happening? I tried to generate the actual execution plans of both the query and the SP, but they couldn't be generated because of the cursor. Any idea why the SP takes so long while the query doesn't?
This is the footprint of parameter-sniffing. See here for another discussion about it; [SQL poor stored procedure execution plan performance - parameter sniffing](https://stackoverflow.com/questions/1007397/sql-poor-stored-procedure-execution-plan-performance-parameter-sniffing) There are several possible fixes, including adding WITH RECOMPILE to your stored procedure which works about half the time. The recommended fix for most situations (though it depends on the structure of your query and sproc) is to *NOT* use your parameters directly in your queries, but rather store them into local variables and then use those variables in your queries.
It's due to parameter sniffing. First of all, declare a temporary variable, set the incoming parameter's value to the temp variable, and use the temp variable throughout the procedure. Here is an example:

```
ALTER PROCEDURE [dbo].[Sp_GetAllCustomerRecords]
@customerId INT
AS
declare @customerIdTemp INT
set @customerIdTemp = @customerId
BEGIN
SELECT * FROM Customers e
Where CustomerId = @customerIdTemp
End
```

Try this approach.
SP taking 15 minutes, but the same query when executed returns results in 1-2 minutes
[ "", "sql", "sql-server", "stored-procedures", "sql-server-2005", "parameter-sniffing", "" ]
I am pretty new to JS, so I'm just wondering whether you know how to solve this problem. Currently I have in my code

```
<a href='#' class="closeLink">close</a>
```

which runs some JS to close a box. The problem I have is that when the user clicks on the link, the href="#" takes the user to the top of the page. How do I solve this so it doesn't do that? I.e., I can't use something like onclick="return false" as I imagine that will stop the JS from working? Thanks
The usual way to do this is to return false from your javascript click handler. This will both prevent the event from bubbling up and cancel the normal action of the event. It's been my experience that this is typically the behavior you want. jQuery example: ``` $('.closeLink').click( function() { ...do the close action... return false; }); ``` If you want to simply prevent the normal action you can, instead, simply use [preventDefault](http://api.jquery.com/event.preventDefault/). ``` $('.closeLink').click( function(e) { e.preventDefault(); ... do the close action... }); ```
The easiest way to solve this problem is to just add another character after the pound symbol like this: ``` <a href='#a' class="closeLink">close</a> ``` Problem solved. Yes, it was that easy. Some may hate this answer, but they cannot deny that it works. Just make sure you don't actually have a section assigned to "a" or it will go to that part of the page. (I don't see this as often as I use to, though) "#" by itself, by default, goes to the top of the page.
Href="#" Don't Scroll
[ "", "javascript", "html", "" ]
How can I read an HTTP stream containing an HTML page in the page's encoding? Here is a code fragment I use to get the HTTP stream. *InputStreamReader* has an optional encoding argument, but I have no idea how to obtain the right value for it.

```
URLConnection conn = url.openConnection();
InputStream is = conn.getInputStream();
BufferedReader d = new BufferedReader(new InputStreamReader(is));
```
Retrieving a Webpage is a reasonably complicated process. That's why libraries such as [HttpClient](http://hc.apache.org/httpclient-3.x/) exist. My advice is that unless you have a really compelling reason otherwise, use HttpClient.
When the connection is established through `URLConnection conn = url.openConnection();` you can get the encoding name through `conn.getContentEncoding()`, so pass this String to the `InputStreamReader` constructor. The code then looks like `BufferedReader d = new BufferedReader(new InputStreamReader(is, conn.getContentEncoding()));`
How to retrieve HTML page in proper encoding using Java?
[ "", "java", "html", "http", "web-scraping", "" ]
I'm trying to dynamically set an enum based on the value in a string, and I don't know what I've been doing wrong. I have the following code:

```
public enum TagLabels : long
{
    TurnLeft = 0x0000000000000000000030A38DB1,
    TurnRight = 0x00000000000000000000307346CC,
    LiftApproach = 0x0000000000000000000012107A8D
}

TagLabels IDs;

string someID = "0x0000000000000000000012107A8D";
IDs = (TagLabels)Enum.Parse(typeof(TagLabels), someID); //<== I get runtime error on this line
```

I cannot see what's wrong with what I'm doing.
``` IDs = (TagLabels)Convert.ToInt64(someID, 16); ``` EDIT: You have a string that is in hex format and not a direct number. So, it needs conversion to int first. If the Enum value exists, you can cast an int value to Enum type. EDIT2: Changed after Marc's suggestion from `Convert.ToInt32` to `Convert.ToInt64`
Enum.Parse is intended to convert a string representation of the symbolic name into an enum value, as in `Enum.Parse(typeof(TagLabels), "TurnLeft")`. If what you have is a string giving the numeric value, then you should parse the string as the corresponding integer type and cast it to the enum (`long.Parse` won't accept the "0x" prefix, but `Convert.ToInt64` with base 16 does):

```
IDs = (TagLabels)Convert.ToInt64("0x0000000000000000000012107A8D", 16);
```
Why is my enum.Parse method failing?
[ "", "c#", ".net", "" ]
How would I do this without jQuery? ``` $('input[type=submit]').attr('disabled',true); ``` It doesn't have to be cross-browser compatible; a solution that only works in Firefox is OK.
``` var inputs = document.getElementsByTagName("INPUT"); for (var i = 0; i < inputs.length; i++) { if (inputs[i].type === 'submit') { inputs[i].disabled = true; } } ```
Have you tried

```
document.getElementsByTagName("input");
```

then you could interrogate the DOM to find your submit buttons. [getElementsByTagName reference](http://www.w3schools.com/HTMLDOM/met_doc_getelementsbytagname.asp)

A full sample:

```
window.onload = function(e) {
    var forms = document.getElementsByTagName('form');
    for (var i = 0; i < forms.length; i++) {
        var input = forms[i].getElementsByTagName('input');
        for (var y = 0; y < input.length; y++) {
            if (input[y].type == 'submit') {
                input[y].disabled = true;
            }
        }
    }
}
```
How do I disable all input buttons without using jQuery?
[ "", "javascript", "dom", "" ]
My problem is that if I use multiple threads on the same string, sometimes the string won't get replaced. (I wrote this in Notepad so the syntax may be wrong.)

using System.Threading ... others of course

```
class ....
{
    private static StringBuilder container = new StringBuilder();

    static void Main(...)
    {
        container.Append(ReadFromFile(...)); // file contents, kind of long
        Thread thread1 = new Thread(Function1);
        Thread thread2 = new Thread(Function2);
        thread1.Start();
        thread2.Start();
        //Print out container
    }

    static void Function1()
    {
        //Do calculation and stuff to get the Array for the foreach
        foreach (.......Long loop........)
        {
            container.Replace("this", "With this");
        }
    }

    //Same goes for Function2 but replacing different things.
    static void Function2()
    {
        //Do calculation and stuff to get the Array for the foreach
        foreach (.......Long loop........)
        {
            container.Replace("this", "With this");
        }
    }
}
```

Now sometimes some elements do not get replaced. So my solution is calling container.Replace in a different method and using a "lock", which works, but is it the right way?

```
private class ModifyString
{
    public void Do(string x, string y)
    {
        lock (this)
        {
            fileInput.Replace(x, y);
        }
    }
}
```
You should lock the StringBuilder object itself (inside the replace functions):

```
lock (container)
{
    container.Replace("this", "With this");
}
```

or create a separate lock object:

```
static object _stringLock = new object();
...
lock (_stringLock)
{
    container.Replace("this", "With this");
}
```
Your locking won't work when you create more than 1 ModifyString object and I'm guessing you do. A simple version: ``` public void Do(string x, string y) { lock (fileInput) { fileInput.Replace(x, y); } } ``` It may be better to create a separate object to do the locking on, but the above shows the principle better: all competing threads should lock on the same object. A standard approach would look like: ``` private static StringBuild container = new StringBuilder(); private static object syncLock = new object(); // simple object, 1-1 with container ``` and then you can (thread-)safely use: ``` lock(syncLock) { container.Replace(...); } ```
Accessing the same string(StringBuilder) using multi-thread
[ "", "c#", "string", "stringbuilder", "" ]
I have the following situation where a client class executes different behavior based on the type of message it receives. I'm wondering if there is a better way of doing this, since I don't like the instanceof checks and the if statements. One thing I thought of doing was pulling the methods out of the client class and putting them into the messages. I would put a method like process() in the IMessage interface and then put the message-specific behavior in each of the concrete message types. This would make the client simple, because it would just call message.process() rather than checking types. However, the only problem with this is that the behavior contained in the conditionals has to do with operations on data contained within the Client class. Thus, if I did implement a process method in the concrete message classes, I would have to pass it the client, and I don't know if this really makes sense either.

```
public class Client
{
    void messageReceived(IMessage message)
    {
        if (message instanceof concreteMessageA)
        {
            concreteMessageA msg = (concreteMessageA)message;
            //do concreteMessageA operations
        }
        if (message instanceof concreteMessageB)
        {
            concreteMessageB msg = (concreteMessageB)message;
            //do concreteMessageB operations
        }
    }
}
```
The simple way to avoid instanceof testing is to dispatch polymorphically; e.g.

```
public class Client
{
    void messageReceived(IMessage message)
    {
        message.doOperations(this);
    }
}
```

where each message class defines an appropriate `doOperations(Client client)` method.

EDIT: a second solution which better matches the requirements. An alternative that replaces a sequence of 'instanceof' tests with a switch statement is:

```
public class Client
{
    void messageReceived(IMessage message)
    {
        switch (message.getMessageType())
        {
            case TYPE_A:
                // process type A
                break;
            case TYPE_B:
                ...
        }
    }
}
```

Each IMessage class needs to define an `int getMessageType()` method to return the appropriate code. Enums work just as well as ints, and are more elegant, IMO.
One option here is a *handler chain*. You have a chain of handlers, each of which can handle a message (if applicable) and then *consume* it, meaning it won't be passed further down the chain. First you define the `Handler` interface:

```
public interface Handler
{
    void handle(IMessage msg);
}
```

And then the handler chain logic looks like:

```
List<Handler> handlers = //...
for (Handler h : handlers)
{
    if (!msg.isConsumed()) h.handle(msg);
}
```

Then each handler can decide to handle / consume a message:

```
public class MessageAHandler implements Handler
{
    public void handle(IMessage msg)
    {
        if (msg instanceof MessageA)
        {
            //process message
            //consume message
            msg.consume();
        }
    }
}
```

Of course, this doesn't get rid of the instanceofs, but it does mean you don't have a huge if-elseif-else-if-instanceof block, which can be unreadable.
Avoiding instanceof when checking a message type
[ "", "java", "polymorphism", "message", "instanceof", "" ]
Is it advisable to jump directly to C# knowing just a mere bit of C (just some basics), or maybe even without knowing C at all?
If your goal is to learn your first language, and you aren't going to become a serious programmer, by all means learn the language you're going to use. If you are going to become a serious programmer, you really should get proficient in C sometime. I don't know which way would be harder, starting with C# or starting with C. C will be challenging no matter when you approach it. If you already know some languages, just not C or C#, go for C# now and pick up C later. The key is that C is a simpler language, but getting significant things done in it requires more complicated structures. Some things you can easily do in C# will be difficult in C, although C is the more widespread and versatile language.
C# and C are very different; they share syntax, but the style of programming is quite different. It wouldn't hurt to learn C, but if your target is C# then start with that. Learning C will teach you more about how a computer works and give you a low-level understanding. C# is a high-level language with a lower learning curve for getting to a graphical interface. Joel and Jeff frequently discuss the value of learning C; [Stack Overflow podcast #2](https://blog.stackoverflow.com/2008/04/podcast-2/) is one example.
Start learning C# without knowing C?
[ "", "c#", "c", "" ]
What would be the best method to restrict access to my XMLRPC server by IP address? I see the class CGIScript in web/twcgi.py has a render method that is accessing the request... but I am not sure how to gain access to this request in my server. I saw an example where someone patched twcgi.py to set environment variables and then in the server access the environment variables... but I figure there has to be a better solution. Thanks.
When a connection is established, a factory's buildProtocol is called to create a new protocol instance to handle that connection. buildProtocol is passed the address of the peer which established the connection and buildProtocol may return None to have the connection closed immediately. So, for example, you can write a factory like this: ``` from twisted.internet.protocol import ServerFactory class LocalOnlyFactory(ServerFactory): def buildProtocol(self, addr): if addr.host == "127.0.0.1": return ServerFactory.buildProtocol(self, addr) return None ``` And only local connections will be handled (but all connections will still be accepted initially since you must accept them to learn what the peer address is). You can apply this to the factory you're using to serve XML-RPC resources. Just subclass that factory and add logic like this (or you can do a wrapper instead of a subclass). iptables or some other platform firewall is also a good idea for some cases, though. With that approach, your process never even has to see the connection attempt.
Okay, another answer is to get the IP address from the transport, inside any protocol: `d = self.transport.getHost(); print d.type, d.host, d.port` (see [`ITCPTransport.getHost`](http://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.ITCPTransport.html#getHost)). Then use the value to filter it in any way you want.
Python Twisted: restricting access by IP address
[ "", "python", "twisted", "" ]
I'm running into a strange problem while reading from an *InputStream* on the Android platform. I'm not sure if this is an Android-specific issue, or something I'm doing wrong in general. The only thing that is Android-specific is this call:

```
InputStream is = getResources().openRawResource(R.raw.myfile);
```

This returns an *InputStream* for a file from the Android assets. Anyways, here's where I run into the issue:

```
byte[] buffer = new byte[2];
is.read(buffer);
```

When the read() executes it throws an *IOException*. The weird thing is that if I do two sequential single-byte reads (or any number of single-byte reads), there is no exception. For example, this works:

```
byte b;
b = (byte)is.read();
b = (byte)is.read();
```

Any idea why two sequential single-byte reads work but one call to read both at once throws an exception? The *InputStream* seems fine... `is.available()` returns over a million bytes (as it should). The stack trace shows these lines just before the `InputStream.read()`:

```
java.io.IOException
at android.content.res.AssetManager.readAsset(Native Method)
at android.content.res.AssetManager.access$800(AssetManager.java:36)
at android.content.res.AssetManager$AssetInputStream.read(AssetManager.java:542)
```

Changing the buffer size to a single byte still throws the error. It looks like the exception is only raised when reading into a byte array. **If I truncate the file to 100,000 bytes (the file is 1,917,408 bytes originally) it works fine. Is there a problem with files over a certain size?** Any help is appreciated! Thanks!
(my post to android-developers isn't showing up, so I'll try reposting it here) IIRC, this problem comes from trying to access files that were compressed as part of building the APK. Hence, to work around the issue, give it a file extension that won't be compressed. I forget the list of what extensions are skipped, but file types known to already be compressed (e.g., mp3, jpg) may work.
Changing the file extension to *.mp3* to avoid the file compression does work, but the APK of the app is much bigger (in my case 2.3 MB instead of 0.99 MB). Is there any other way to avoid this issue ? Here is my answer: [Load files bigger than 1M from assets folder](https://stackoverflow.com/questions/2860157/load-files-bigger-than-1m-from-assets-folder)
IOException while reading from InputStream
[ "", "java", "android", "inputstream", "ioexception", "" ]
Is there a reliable way, using PHP, to determine if a request comes from the same host that the PHP script is hosted on? In other words, is there a way to determine if the client and server computer are the same?
Hmm, it sounds like a token mechanism that is commonly used to prevent [CSRF](http://en.wikipedia.org/wiki/Cross-site_request_forgery) (Cross-Site Request Forgery) might be a solution to your problem. This will allow you to be sure the request is coming from a source by comparing tokens, etc. Here are a couple links with more implementation details: [here](http://www.codewalkers.com/c/a/Miscellaneous/Stopping-CSRF-Attacks-in-Your-PHP-Applications/) and [here](http://www.serversidemagazine.com/php/php-security-measures-against-csrf-attacks)
You could define a secret token that only the server knows of that you can send together with the request.
Using PHP, how can I tell if a request originates on the hosting server?
[ "", "php", "security", "" ]
We can avoid serialising fields by using the `transient` keyword. Is there any other way of doing that?
<http://java.sun.com/javase/6/docs/platform/serialization/spec/security.html> > SUMMARY: Preventing Serialization of Sensitive Data. Fields containing sensitive data should not be serialized; doing so exposes their values to any party with access to the serialization stream. There are several methods for preventing a field from being serialized: 1. Declare the field as private transient. 2. Define the serialPersistentFields field of the class in question, and omit the field from the list of field descriptors. 3. Write a class-specific serialization method (i.e., writeObject or writeExternal) which does not write the field to the serialization stream (i.e., by not calling ObjectOutputStream.defaultWriteObject). Here are some links. [Declaring serialPersistentFields.](http://onjava.com/pub/a/onjava/excerpt/JavaRMI_10/index.html?page=3) [Serialization architecture specification.](http://java.sun.com/j2se/1.5.0/docs/guide/serialization/spec/serial-arch.html) [Security in Object Serialization.](http://ftp.ida.liu.se/~TDDB32/kursbibliotek/BlueJ/docs/guide/serialization/spec/security.html)
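Options 1 and 3 from the quoted spec can be sketched together. The class and field names below are my own invention (not from the spec): a custom `writeObject` that deliberately skips a field keeps that field out of the stream, so it comes back at its default value (`null`) after deserialization.

```java
import java.io.*;

// Illustrative sketch: keep a sensitive field out of the stream with a
// custom writeObject that never writes it, instead of marking it transient.
class Credentials implements Serializable {
    private static final long serialVersionUID = 1L;
    String user;
    String password; // must never reach the serialization stream

    Credentials(String user, String password) {
        this.user = user;
        this.password = password;
    }

    // Option 3: write only the safe fields and never call
    // defaultWriteObject(), so `password` is simply omitted.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.writeObject(user);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        user = (String) in.readObject();
        // `password` is left at its default (null) after deserialization
    }

    // Helper: serialize to a byte array and read it back.
    static Credentials roundTrip(Credentials c) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(c);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                return (Credentials) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Credentials copy = roundTrip(new Credentials("alice", "s3cret"));
        System.out.println(copy.user + " / " + copy.password); // alice / null
    }
}
```

The same effect could be had with `transient` alone; the custom method becomes interesting when you also want to transform or encrypt what does get written.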
If for some reason transient doesn't suit, you can do the serialization directly by overriding the [writeObject and readObject](http://java.sun.com/javase/6/docs/api/java/io/Serializable.html) methods. Then you can include or omit any fields you need.
Can we deny a java object from serialization other than giving transient keyword
[ "", "java", "serialization", "java-custom-serialization", "" ]
I am trying to customise fancybox so that when one of the 4 images displayed on the page is clicked, this is the one that loads up in the fancybox window. To do this I want to use the jquery .attr function to pass the image src (as a variable) to the main image holder. My current jquery code is: ``` jQuery(document).ready(function($) { $("a.group").click(function() { var image = $(this).attr("name"); $("#largeId").attr({ src: image}); $("a.group").fancybox({ 'frameWidth':966, 'frameHeight': 547, 'hideOnContentClick': false, 'overlayOpacity': 0.85, 'callbackOnShow': function() { $("#container ul#thumbnails li a").click(function(){ var largePath = $(this).attr("title"); $("#largeId").fadeOut("fast").hide(); $("#largeId").attr({ src: largePath }); $("#largeId").fadeIn("slow");return false; }); $("#container ul#thumbnails li a").click(function(){ $('.active').removeClass('active'); $(this).addClass("active"); }); } }); }); }); ``` The HTML for the main page images is: ``` <ul id="images"> <li><a id="one_image" class="group" href="#hidden" title="images/1_large.jpg"><img src="Images/1.jpg" alt="MOMA NY #1" title="MOMA NY #1" /></a></li> <li><a class="group" href="#hidden" title="images/2_large.jpg"><img src="Images/2.jpg" alt="MOMA NY #2" title="MOMA NY #2" /></a></li> <li><a class="group" href="#hidden" title="images/3_large.jpg"><img src="Images/3.jpg" alt="MOMA NY #3" title="MOMA NY #3" /></a></li> <li><a class="group" href="#hidden" title="images/4_large.jpg"><img src="Images/4.jpg" alt="MOMA NY #4" title="MOMA NY #4" /></a></li> </ul> ``` For the Fancybox window: ``` <div id="main_image"> <img id="largeId" src="" alt="" title="" /> </div> ``` -------EDIT---------- just so you know, this mostly works if I get rid of the click function at the start, the functions within the fancybox call all work fine.
I think it's getting overly complicated. ``` jQuery(document).ready(function($) { $("a.group").fancybox({ 'frameWidth': 300, 'frameHeight': 300 }); }); ``` That should be all the JavaScript you need. Then you should move the title and the grouping onto the a tag. ``` <ul id="images"> <li><a class="group" rel="group" href="images/1_large.jpg" title="MOMA NY #1"><img src="Images/1.jpg" alt="MOMA NY #1"/></a></li> <li><a class="group" rel="group" href="images/2_large.jpg" title="MOMA NY #2"><img src="Images/2.jpg" alt="MOMA NY #2"/></a></li> <li><a class="group" rel="group" href="images/3_large.jpg" title="MOMA NY #3"><img src="Images/3.jpg" alt="MOMA NY #3"/></a></li> <li><a class="group" rel="group" href="images/4_large.jpg" title="MOMA NY #4"><img src="Images/4.jpg" alt="MOMA NY #4"/></a></li> </ul> ``` Is that what you were looking for?
I've never used fancybox before, but from just looking at your code, I think the line that calls fancybox needs to be fixed from ``` $("a.group").fancybox({ ``` to this: ``` $(this).fancybox({ ``` Sorry, I haven't tested this... but I believe that was your problem.
calling fancybox using jquery .click function
[ "", "javascript", "jquery", "html", "jquery-plugins", "fancybox", "" ]
Is there a way to automatically resize a figure to properly fit contained plots in a matplotlib/pylab image? I'm creating heatmap (sub)plots that differ in aspect ratio according to the data used. I realise I could calculate the aspect ratio and manually set it, but surely there's an easier way?
Use **bbox\_inches='tight'** ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm X = 10*np.random.rand(5,3) fig = plt.figure(figsize=(15,5),facecolor='w') ax = fig.add_subplot(111) ax.imshow(X, cmap=cm.jet) plt.savefig("image.png",bbox_inches='tight',dpi=100) ``` ...only works when saving images though, not showing them.
Another way of doing this is using the matplotlib tight\_layout function ``` import matplotlib.pyplot as plt fig,(ax) = plt.subplots(figsize=(8,4), ncols=1) data = [0,1,2,3,4] ax.plot(data) fig.tight_layout() fig.show() ```
Resize a figure automatically in matplotlib
[ "", "python", "matplotlib", "" ]
One of the projects I am working on includes a website that is hosted on a cheap shared hosting server. Whenever I upload some updated files to the server, they don't necessarily become available immediately. It can take from 15 to 30 minutes before the server actually starts using the new files instead of the old ones, and in some cases I even need to re-re-upload the updated files. Some more info: - C# webforms files (.aspx and .aspx.cs) - If there was no previous file with that name on the server then the file always becomes immediately available - **But** if I first delete the older file and refresh the page I immediately get a "file not found" error, but if I then upload the newer file the "file not found" error stops immediately but I get back the older file again. I understand that the server isn't actually serving the .aspx page but rather using the compiled-to-DLL version that it has made (right?) so maybe this is a compiling problem on the server somehow? I'm not sure if this would be better on serverfault.com but as a programmer SO is where I usually come. Any idea why this is happening and preferably some solution on how to fix this behavior so that when I upload an updated page we can start using it immediately? Thank you.
Usually, touching your web.config file will recycle the application (the app domain restarts), which flushes any caches. Just upload a new web.config with a trivial change and see if that helps.
If you are using .NET 2.0 websites, you can have problems with the .dlls in the bin folder. Changing to a Web application should solve your problem permanently. <http://webproject.scottgu.com/>
Webserver not using the latest files
[ "", "c#", ".net", "hosting", "" ]
The situation: I was using HttpWebRequest.BeginGetResponse as documented in MSDN. I had a timer sending the request every ten seconds. I received XML-structured information when I tested it. The result: Being at the customer's place with that tool running, I received incomplete (and therefore unparsable) XML documents (each about 4 KB). I could check in a browser and see it completely (obviously a synchronous request through the browser?!). I used the header information about the content length to size my receiving buffer. What caused it? I don't know. The data is fairly small. Still, I used the `ThreadPool.RegisterWaitForSingleObject` approach described at [developer fusion](http://www.developerfusion.com/code/4654/asynchronous-httpwebrequest/) to define a timeout; I chose ten seconds for the timeout as well. Maybe that wasn't a smart decision; it probably should be smaller than the timer interval. The thing is, I cannot test it again under those conditions. It was at a production site, where I had no insight into the network setup. The requests ran just fine at the same time from home. I'm not very experienced in this field, but what happens when a timer triggers a new request before the response stream has been fully received, e.g. because the timeout is equal to the timer interval? Any other hints on what could be the bottleneck here?
How are you receiving data? Are you reading data through a stream? And are you using the content size returned as an input parameter to Stream.Read? A feature of Stream.Read that is not completely obvious is that it is not guaranteed to return the amount of data that you requested. When you call the following function ``` public abstract int Read(byte[] buffer, int offset, int count) ``` it will return how much data was actually read. So you may ask it to read 1000 and it returns 400; there are still 600 bytes left to read. That means that you have to continue calling Read until it returns 0 (which means that there is no more data in the stream). I would also say that you should not use the content length header information to size your buffer. Instead, you should create a dynamically sized buffer (e.g. by using a MemoryStream object) and read from the response stream until it returns 0. At least, that is how I would do it. Then your solution will continue to work if the server changes its implementation so it no longer sends that response header. Or even better, since you are loading XML, create an XmlDocument and ask it to load directly from the HTTP response stream.
The solution is simple. Only start the timer after you've finished processing the response.
Asynchronous HttpWebRequest Issue with Timer
[ "", "c#", ".net", "timer", "" ]
Say I have a situation like this... ``` object myRoledata = new List<Roles>() --> (some list or IEnumerable type) ``` Now I have a generic method which creates an XML object from `List<T>` - something like this: ``` public string GetXML<T>(object listdata) { List<T> objLists = (List<T>)Convert.ChangeType(listdata, typeof(List<T>)); foreach(var obj in objLists) { //logic to create xml } } ``` Now in order to run this method I have to do this: ``` string xml = GetXML<Roles>(myRoledata); ``` Now I don't know what `Type` may come to me to be passed to the `GetXML` method. I have a method which will call `GetXML` for different `Type`s, e.g. `Roles`, `Users` etc. Now I can get the `Type` within the `List<>` like this: ``` Type genericType = obj.GetType().GetGenericArguments()[0]; ``` but cannot pass it like this: ``` string xml = GetXML<genericType>(myRoledata); ``` Is there any way in which I can pass any generic type to the `GetXML` method?
This is a problem you probably want to *avoid* solving. It *is* possible, via reflection, to call methods dynamically without statically resolving them - but it kind of defeats the whole point of the type annotations. Either do this: ``` public string GetXML(IEnumerable listdata) { foreach(object obj in listdata) //logic to create xml } ``` ... which you can now call with any `IEnumerable`, or write it the "modern" way as: ``` public string GetXML(IEnumerable<object> listdata) { foreach(object obj in listdata) //logic to create xml } ``` ... which you can call with any `IEnumerable` via `GetXML(someEnumerable.Cast<object>())` and in C# 4.0 even directly by covariance. If you need the type of an element at runtime, you can get it using `.GetType()` on each element, or you can just pass it in as a parameter (and provide an overload for backwards compatibility): ``` public string GetXML(Type elementType, IEnumerable<object> listdata) { foreach(object obj in listdata) //logic to create xml } public string GetXML<T>(IEnumerable<T> listdata) { return GetXML(typeof(T), listdata.Cast<object>()); } ``` Incidentally, if you're constructing XML, a string is probably a less robust return-type choice: if possible, you could work with something like an `XElement` instead - and get XML-validity guarantees to boot.
To do that, you need to use reflection; ``` typeof(SomeClass).GetMethod("GetXML").MakeGenericMethod(genericType) .Invoke(inst, new object[] {myRoleData}); ``` where `inst` is `null` if it is a static method, `this` for the current instance (in which case you can also use `GetType()` instead of `typeof(SomeClass)`), or the target object otherwise.
Pass Type dynamically to <T>
[ "", "c#", "generics", "list", "" ]
In other words, is there a faster, more concise way of writing the following code: ``` //Create an object for performing XSTL transformations XslCompiledTransform xslt = new XslCompiledTransform(); xslt.Load(HttpContext.Current.Server.MapPath("/xslt/" + xsltfile.Value), new XsltSettings(true, false), new XmlUrlResolver()); //Create a XmlReader object to read the XML we want to format //XmlReader needs an input stream (StringReader) StringReader sr = new StringReader(node.OuterXml); XmlReader xr = XmlReader.Create(sr); //Create a StringWriter object to capture the output from the XslCompiledTransform object StringWriter sw = new StringWriter(); //Perform the transformation xslt.Transform(xr, null, sw); //Retrieve the transformed XML from the StringWriter object string transformedXml = sw.ToString(); ``` UPDATE (thanks for all the answers so far!): Sorry for my vagueness: by "faster" and more "concise" I mean, **am I including any unnecessary steps?** Also, **I would love a more "readable" solution** if someone has one. I use this code in a small part of a web application I'm developing, and I'm about to move it to a large part of the application, so I want to make sure it's as neat as can be before I make the move. Also, I get the XML from a static class (in a separate data access class library) which communicates with a database. I also manipulate the transformed XML string before shipping it off to a web page. I'm not sure if the input/response streams are still viable in this case. One more thing: the XML and the XSLT supplied may change (users of the application can make changes to both), so I think I would be forced to compile each time.
Here's code I did for my ASP.NET, which is very similar to yours: ``` XDocument xDoc = XDocument.Load("output.xml"); XDocument transformedDoc = new XDocument(); using (XmlWriter writer = transformedDoc.CreateWriter()) { XslCompiledTransform transform = new XslCompiledTransform(); transform.Load(XmlReader.Create(new StreamReader("books.xslt"))); transform.Transform(xDoc.CreateReader(), writer); } // now just output transformedDoc ```
If you have a large XSLT you can save the overhead of compiling it at runtime by compiling the XSLT into a .NET assembly when you build your project (e.g. as a post-build step). The compiler to do this is called [**`xsltc.exe`**](http://msdn.microsoft.com/en-us/library/bb399405.aspx) and is part of Visual Studio 2008. In order to load such a **pre-compiled XSLT** you will need .NET Framework 2.0 SP1 or later installed on your server (the feature was introduced with SP1). For an example check Anton Lapounov's blog article: > [**XSLTC — Compile XSLT to .NET Assembly**](http://blogs.msdn.com/antosha/archive/2007/05/28/xsltc-compile-xslt-to-.net-assembly.aspx) If pre-compiling the XSLT is not an option you should consider **caching** the `XslCompiledTransform` after it is loaded so that you don't have to compile it every time you want to execute the transform.
What's the most streamlined way of performing a XSLT transformation in ASP.NET?
[ "", "c#", "asp.net", "xml", "xslt", "" ]
I'm writing a blog post that uses multiple videos from YouTube and Yahoo Video, but I'm not happy with how long it takes the page to render. Apart from using an ajax-y method to load the videos, are there any tricks that would make the page load quicker with multiple videos from different sources?
Your "ajax-y" methods will be the only way to speed this up. Large sites are going to be using a [CDN](http://en.wikipedia.org/wiki/Content_delivery_network) and have good caching. There is no way around large files taking a long time... Keeping the object or video tag out of the HTML and then adding it after page load will improve perceived page-load performance. Perhaps swap in a screengrab image that is the same size as the eventual video... It's early days for the video tag, but it's possible that eventually its initialization time will be faster than Flash, since it's part of the browser and not a 3rd-party plugin. The bulk of the video load time depends on how the video was encoded/transferred, which is out of your control, it sounds like.
The problem with embedded YouTube videos is that the player itself needs to load. You could add "controls=2" in the URL of the embedding code, but that would make only AS2/3 players to load the player after clicking. The workaround Google+ has for this issue is not to load the player at all. Instead, it loads a thumbnail image with a play button superimposed. On clicking, the image is replaced with the actual YouTube player iframe embed code and it loads and auto-plays. This can be done in any site actually using the simple javascript below. <https://skipser.googlecode.com/files/gplus-youtubeembed.js>
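For what it's worth, the thumbnail-first technique can be sketched in plain JavaScript. The helper names below are my own, not from the linked script; the URL patterns are YouTube's standard thumbnail and iframe-embed endpoints. The idea: show a cheap static image up front and only build the player markup on click.

```javascript
// Build the URL of YouTube's pregenerated thumbnail for a video id.
function youtubeThumbUrl(id) {
  return "https://img.youtube.com/vi/" + id + "/hqdefault.jpg";
}

// Build the player iframe markup; autoplay=1 so it starts on the click
// that replaced the thumbnail.
function youtubeIframeHtml(id, width, height) {
  return '<iframe width="' + width + '" height="' + height + '"' +
         ' src="https://www.youtube.com/embed/' + id + '?autoplay=1"' +
         ' frameborder="0" allowfullscreen></iframe>';
}

// Browser-only wiring (sketch): swap the thumbnail for the player on click.
// document.getElementById("vid1").onclick = function () {
//   this.outerHTML = youtubeIframeHtml("dQw4w9WgXcQ", 640, 360);
// };
```

The page then loads only small images at render time, and the heavy player assets are fetched once per video the user actually plays.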
Is there a way to load embedded YouTube videos faster on my website?
[ "", "javascript", "optimization", "video", "" ]
Is anybody aware of any libraries for working with MHT files ([Multi-Part MIME files](https://www.rfc-editor.org/rfc/rfc2557)) in .NET? I need to programmatically extract the contents from an existing MHT file containing a Flash website. I haven't been able to locate any such libraries. Also, if there's a native way in .NET that I'm not aware of, please feel free to let me know. **EDIT:** I know that the [MailMessage](http://msdn.microsoft.com/en-us/library/system.net.mail.mailmessage.aspx) class supports multi-part MIME messages through the [AlternateViews property](http://msdn.microsoft.com/en-us/library/system.net.mail.mailmessage.alternateviews.aspx). The [AlternateView](http://msdn.microsoft.com/en-us/library/system.net.mail.alternateview.aspx) class represents the alternative views in a multi-part MIME message. I'd like to believe that it's possible to use this knowledge to build something using code native to the .NET framework. I just haven't been able to find out the right combination to make it work, so I'm starting to lose faith. Does anybody out there know if it's possible to extract the contents of an MHT file through the AlternateView and other related classes? For example, it would be nice if it were possible to create an instance of the MailMessage class from a Stream.
You might be interested in my MIME parsing project on GitHub (written in C#): <https://github.com/smithimage/MIMER/> It also has a NuGet package: <https://nuget.org/packages/MIMER/>
<http://www.lumisoft.ee/lswww/ENG/Products/Mail_Server/mail_index_eng.aspx?type=info> This is an open-source email server which has a good MIME parser.
Extracting Content from MHT Document
[ "", "c#", ".net", "mhtml", "" ]
Sessions are started via `session_start()`, I realize that much, but to make sessions persistent, they need an ID. Now, the php.ini file has a setting: ``` session.use_cookies = 1 ``` So I don't have to pass the ID around. But there's another setting: ``` ; Lifetime in seconds of cookie or, if 0, until browser is restarted. session.cookie_lifetime = 0 ``` Am I to understand that if I implement this and go to my website, log in, do what I wanna do, shut the browser down and start it again some time later, that I won't be logged in anymore when I go back to my site? EDIT: So to stay logged in, I will have to combine this with client-side cookies. I'm guessing I'll need two database fields: one for the session ID, one for the ID I give to the cookie.
Nope, you won't be logged in anymore. See my answer here: [How do I Keep a user logged in for 2 weeks?](https://stackoverflow.com/questions/1290837/how-do-i-keep-a-user-logged-in-for-2-weeks/1290861#1290861) (See <http://www.drupal.org/node/31506> for more information about scheduled tasks, if you want people to be signed out after a set amount of time.) It might help you. Also check the user-agent string, just as an extra precaution: if an attacker somehow found out a key, they could otherwise send a fake cookie and be logged in automatically. A user who switches browsers once can simply sign in again; it would only be a nuisance for people who change or update their browser constantly.
I think you understand it correctly, the PHP manual says: > session.cookie\_lifetime specifies the lifetime of the cookie in seconds which is sent to the browser. The value 0 means "until the browser is closed." Defaults to 0. <http://php.net/session.configuration#ini.session.cookie-lifetime>
Quick question about sessions in PHP
[ "", "php", "session", "persistence", "" ]
I often find myself doing something along the following lines in SQL Server 2005: Step 1: ``` create view view1 as select count(*) as delivery_count, clientid from deliveries group by clientid; ``` Step 2: ``` create view view2 as select count(*) as action_count, clientid from routeactions group by clientid; ``` Step 3: ``` select * from view1 inner join view2 on view1.clientid = view2.clientid ``` Is it possible to obtain the same final result in only one statement, avoiding the creation of the views?
Sure, use nested queries: ``` select * from (select count(*) as delivery_count, clientid from deliveries group by clientid) AS view1 inner join (select count(*) as action_count, clientid from routeactions group by clientid) AS view2 on view1.clientid = view2.clientid ``` Or with the new CTE syntax you can have: ``` WITH view1 AS ( select count(*) as delivery_count, clientid from deliveries group by clientid ), view2 AS ( select count(*) as action_count, clientid from routeactions group by clientid ) select * from view1 inner join view2 on view1.clientid = view2.clientid ```
I can't think of any way off the top of my head, unless there's some sort of relationship between routeactions and deliveries that you haven't mentioned. Without that (an FK from one to the other, most likely), you can't do a join without distorting the numbers on one or both tables.
Request joining the results of two other requests with GROUP BY clause in SQL Server 2005
[ "", "sql", "sql-server", "sql-server-2005", "" ]
How can I find out if an object is wrapped by jQuery? ``` var obj = $('div'); if(obj is a jQuery wrapped object) { then do something } ``` I am quite new to the JavaScript world. Thanks in advance.
Here you go: ``` var isJQuery = obj instanceof jQuery; // or obj instanceof $; ```
You can test like this: ``` if(obj instanceof jQuery) { // ... } ``` However, it's not entirely correct to say that the HTML element is "wrapped" in a jQuery object, rather the jQuery object is a collection of zero or more HTML elements. So, if you really want to be careful you could test first whether it contains any elements at all, as follows: ``` if(obj instanceof jQuery && obj.length > 0) { var element = obj[0]; // do something with element } ```
How To find Out if Element is Wrapped by jQuery?
[ "", "javascript", "jquery", "" ]
I've been looking a long time for this, but can't seem to find it. When I add a menu strip in VB.NET, it looks like this: [http://img19.imageshack.us/img19/4341/menu1sbo.jpg](http://img19.imageshack.us/img19/4341/menu1sbo.jpg) and I want it to look like the WinRAR, Calculator, Notepad etc. menus, like this: [http://img8.imageshack.us/img8/307/menu1a.jpg](http://img8.imageshack.us/img8/307/menu1a.jpg) From what I gathered, in VB 6 you could create a MainMenu and do it this way, but in VB.NET it seems like all there is is the ugly MenuStrip. Thanks
You may have to get dirty and create a custom renderer ([ToolStripProfessionalRenderer](http://msdn.microsoft.com/en-us/library/system.windows.forms.toolstripprofessionalrenderer.aspx)) to apply to the [ToolStripManager](http://msdn.microsoft.com/en-us/library/system.windows.forms.toolstripmanager.aspx). Without rehashing too much, [this doc looks like a nice overview](http://windowsclient.net/Samples/Go%20To%20Market/Tool%20Strips/ToolStrip%20GTM.doc), or you can always opt for the [Microsoft tutorial](http://msdn.microsoft.com/en-us/library/dy4ys6z6.aspx). Note that MenuStrip is derived from ToolStrip.
You may need to enable XP theme support in your project settings. To do this, go to My Project in your Solution Explorer, and make sure "Enable XP Visual Styles" is checked under the Windows application framework properties group down near the bottom of the Application tab. If this doesn't work, you might need to create an application manifest as described in this [MSDN article](http://msdn.microsoft.com/en-us/library/aa289524%28VS.71%29.aspx).
C++ style menu bar in VB.NET?
[ "", "c++", "vb.net", "menubar", "" ]
I have a SharePoint server and I am playing around with it programmatically. One of the applications that I am playing with is [Microsoft's Call Center](http://www.microsoft.com/downloads/details.aspx?FamilyId=4EAB254B-F775-476D-83A7-A232DD3A21F4&displaylang=en). I queried the list of customers: ``` if (cList.Title.ToLower().Equals("service requests")) { textBox1.Text += "> Service Requests" + Environment.NewLine; foreach (SPListItem item in cList.Items) { textBox1.Text += string.Format(">> {0}{1}", item.Title, Environment.NewLine); } } ``` One of the properties in `item` is XML. Here is the value of one: ``` <z:row xmlns:z='#RowsetSchema' ows_ID='1' ows_ContentTypeId='0x0106006324F8B638865542BE98AD18210EB6F4' ows_ContentType='Contact' ows_Title='Mouse' ows_Modified='2009-08-12 14:53:50' ows_Created='2009-08-12 14:53:50' ows_Author='1073741823;#System Account' ows_Editor='1073741823;#System Account' ows_owshiddenversion='1' ows_WorkflowVersion='1' ows__UIVersion='512' ows__UIVersionString='1.0' ows_Attachments='0' ows__ModerationStatus='0' ows_LinkTitleNoMenu='Mouse' ows_LinkTitle='Mouse' ows_SelectTitle='1' ows_Order='100.000000000000' ows_GUID='{37A91B6B-B645-446A-8E8D-DA8250635DE1}' ows_FileRef='1;#Lists/customersList/1_.000' ows_FileDirRef='1;#Lists/customersList' ows_Last_x0020_Modified='1;#2009-08-12 14:53:50' ows_Created_x0020_Date='1;#2009-08-12 14:53:50' ows_FSObjType='1;#0' ows_PermMask='0x7fffffffffffffff' ows_FileLeafRef='1;#1_.000' ows_UniqueId='1;#{28A223E0-100D-49A6-99DA-7947CFC38B18}' ows_ProgId='1;#' ows_ScopeId='1;#{79BF21FE-0B9A-43B1-9077-C071B61F5588}' ows__EditMenuTableStart='1_.000' ows__EditMenuTableEnd='1' ows_LinkFilenameNoMenu='1_.000' ows_LinkFilename='1_.000' ows_ServerUrl='/Lists/customersList/1_.000' ows_EncodedAbsUrl='http://spvm:3333/Lists/customersList/1_.000' ows_BaseName='1_' ows_MetaInfo='1;#' ows__Level='1' ows__IsCurrentVersion='1' ows_FirstName='Mickey' ows_FullName='Mickey Mouse' ows_Comments='&lt;div&gt;&lt;/div&gt;' ows_ServerRedirected='0' /> ``` Can I create an XmlNode or some other sort of XML object so that I can easily parse it and pull certain values (exactly which values is unknown right now, since I am just testing)? Thanks SO!
If the XML is valid, you could use `XmlDocument.LoadXml` like so: ``` XmlDocument doc = new XmlDocument(); doc.LoadXml(validxmlstring); ```
You can do this and it should work fine (although I would use the XML document approach [Colin mentions](https://stackoverflow.com/questions/1268500/can-i-import-this-value-into-a-xml-structure-and-parse-it-c/1270475#1270475), or even better LINQ). You may also find the LINQ extensions in [SharePoint Extensions Lib](http://spexlib.codeplex.com/) useful. However, I'm wondering why you would approach it this way instead of using the [SPListItem.Item](http://msdn.microsoft.com/en-us/library/ms464204.aspx) property? It's much simpler to use and very clear. For example: ``` var title = listItem["Title"]; // Returns title of item var desc = listItem["Description"]; // Returns value of description field ``` The only trap is the unusual case of a list that contains a field with an internal name equal to another field's display name. This will always return the value of the field with the internal name first. Just curious if you have a requirement to go the XML route.
Can I 'import' this value into a XML structure and parse it? C#
[ "", "c#", "xml", "sharepoint", "" ]
Newbie in the Linq to XML arena... I have a Linq query with results, and I'd like to transform those results into XML. I'm guessing there must be a relatively easy way to do it, but I can't find it... Thanks!
An Example. You should get the idea. ``` XElement xml = new XElement("companies", from company in db.CustomerCompanies orderby company.CompanyName select new XElement("company", new XAttribute("CompanyId", company.CompanyId), new XElement("CompanyName", company.CompanyName), new XElement("SapNumber", company.SapNumber), new XElement("RootCompanyId", company.RootCompanyId), new XElement("ParentCompanyId", company.ParentCompanyId) ) ); ```
Your Linq query is going to return some kind of object graph; Once you have the results, you can use any method to translate it to XML that you could with standard objects. Linq to XML includes new XML classes that present one way of creating XML (see rAyt's answer for this), but you can also use an XmlSerializer and put attributes on your class/properties to control exact XML output.
How do you transform a Linq query result to XML?
[ "", "c#", "linq", "linq-to-xml", "" ]
How can I get PHP to evaluate a static variable in double quotes? I want to do something like this: ``` log("self::$CLASS $METHOD entering"); ``` I've tried all sorts of `{}` combos to get the variable value of `self::$CLASS`, but nothing has worked. I've currently settled with string concatenation but it is a pain to type: ``` log(self::$CLASS . " $METHOD entering"); ```
Sorry, you can't do that. It only works for simple expressions. See [here](http://www.php.net/manual/en/language.types.string.php#language.types.string.parsing.complex).
Unfortunately, there is no way to do this yet. The example in one of the answers here will not work, because `{${self::$CLASS}}` does not return the content of `self::$CLASS`; it returns the content of the variable whose *name* is stored in `self::$CLASS`. Here is an example which echoes `aaa`, not `myvar`:

```
$myvar = 'aaa';

self::$CLASS = 'myvar';

echo "{${self::$CLASS}}";
```
PHP static variables in double quotes
[ "", "php", "variables", "static", "double", "quotes", "" ]
I have a huge array of ints that I need to sort. The catch here is that each entry in the list has a number of other associated elements that need to follow that int around as it gets sorted. I've partly solved this problem by changing the sort to sort doubles instead of ints: I tagged each number before it was sorted with a fractional part denoting that value's original location before the sort, thus allowing me to reference its associated data and to efficiently rebuild the sorted list with all the associated elements.

My problem is that I want to sort the double values by their int parts using the function stable\_sort(). I'm referring to this web page: <http://www.cplusplus.com/reference/algorithm/stable_sort/>

However, since I'm a new programmer, I don't quite understand how they managed to get the sort by ints to work. What exactly am I supposed to put into that third argument to make the function work? (I know I can just copy and paste it and make it work, but I want to learn and understand this too.)

Thanks,

-Faken

Edit: Please note that I'm a new programmer who has had no formal programming training. I'm learning as I go, so please keep your explanations as simple and as rudimentary as possible. In short, please treat me as if I have never seen C++ code before.
Since you say you're not familiar with vectors (you really should learn STL containers ASAP, though), I assume you're playing with arrays. Something along these lines: ``` int a[] = { 3, 1, 2 }; std::stable_sort(&a[0], &a[3]); ``` The third optional argument `f` of `stable_sort` is a *function object* - that is, anything which can be called like a function by following it with parentheses - `f(a, b)`. A function (or rather a pointer to one) is a function object; other kinds include classes with overloaded `operator()`, but for your purposes a plain function would probably do. Now you have your data type with `int` field on which you want to sort, and some additional data: ``` struct foo { int n; // data ... }; foo a[] = { ... }; ``` To sort this (or anything, really), `stable_sort` needs to have some way of comparing any two elements to see which one is greater. By default it simply uses operator `<` to compare; if the element type supports it directly, that is. Obviously, `int` does; it is also possible to overload `operator<` for your struct, and it will be picked up as well, but you asked about a different approach. This is what the third argument is for - when it is provided, `stable_sort` calls it every time it needs to make a comparison, passing two elements as the arguments to the call. The called function (or function object, in general) must return `true` if first argument is less than second for the purpose of sorting, or `false` if it is greater or equal - in other words, it must work like operator `<` itself does (except that you define the way you want things to be compared). For `foo`, you just want to compare `n`, and leave the rest alone. So: ``` bool compare_foo_n(const foo& l, const foo& r) { return l.n < r.n; } ``` And now you use it by passing the pointer to this function (represented simply by its name) to `stable_sort`: ``` std::stable_sort(&a[0], &a[3], compare_foo_n); ```
You need to pass the comparison function. Something like this:

```
#include <vector>
#include <algorithm>

bool intCompare(double first, double second)
{
    return static_cast<int>(first) < static_cast<int>(second);
}

int main()
{
    std::vector<double> v;
    v.push_back(1.4);
    v.push_back(1.3);
    v.push_back(2.1);
    v.push_back(1.5);

    std::stable_sort(v.begin(), v.end(), intCompare);
    return 0;
}
```

Inside the sort algorithm, the comparison function you passed is used to compare the values. If you have a more complex data structure and want to sort on a particular attribute of it, you can use a user-defined function like this to compare the values.
Using stable_sort() to sort doubles as ints
[ "", "c++", "sorting", "" ]
I created two simple function templates and an empty struct defining a type:

```
//S<T>::type results in T&
template <class T>
struct S
{
    typedef typename T& type;
};

//Example 1: get one parameter by reference and return it by value
template <class A>
A temp(typename S<A>::type a1)
{
    return a1;
}

//Example 2: get two parameters by reference, perform the sum and return it
template <class A, class B>
B temp2(typename S<A>::type a1, B a2)//typename struct S<B>::type a2)
{
    return a1 + a2;
}
```

The argument type is applied to the struct S to get the reference. I call them with some char and int values, but the compiler is unable to deduce the template arguments:

```
int main()
{
    char c=6;
    int d=7;

    int res = temp(c);
    int res2 = temp2(d,7);
}
```

> Error 1 error C2783: 'A
> temp(S::type)' : could not deduce
> template argument for 'A'
>
> Error 2 error C2783: 'B
> temp2(S::type,B)' : could not
> deduce template argument for 'A'

---

Why is this happening? Is it that hard to see that the template arguments are *char* and *int* values?
Just as a first note, `typename` is used when you mention a *dependent* name, so you don't need it here:

```
template <class T>
struct S
{
    typedef T& type;
};
```

Regarding the template instantiation, the problem is that `typename S<A>::type` constitutes a *nondeduced* context for A. When a template parameter is used only in a nondeduced context (the case for A in your functions), it's not taken into consideration for template argument deduction. The details are in section 14.8.2.4 of the C++ Standard (2003).

To make your call work, you need to explicitly specify the type:

```
temp<char>(c);
```
It looks like a nondeduced context. According to C++ Standard 14.8.2.4/4:

> The nondeduced contexts are:
>
> * The *nested-name-specifier* of a type that was specified using a *qualified-id*.
> * A type that is a *template-id* in which one or more of the *template-arguments* is an expression that references a *template-parameter*.
>
> When a type name is specified in a way that includes a nondeduced context, all of the types that comprise that type name are also nondeduced. However, a compound type can include both deduced and nondeduced types. [*Example*: If a type is specified as `A<T>::B<T2>`, both `T` and `T2` are nondeduced. Likewise, if a type is specified as `A<I+J>::X<T>`, `I`, `J`, and `T` are nondeduced. If a type is specified as `void f(typename A<T>::B, A<T>)`, the `T` in `A<T>::B` is nondeduced but the `T` in `A<T>` is deduced. ]
Why is the template argument deduction not working here?
[ "", "c++", "templates", "template-argument-deduction", "" ]