<p>I have some data. I want to go through that data and change cells (for example, the background color) if the data meets a certain condition. Somehow, I've not been able to figure out how to do this seemingly easy thing in Silverlight.</p>
<p>This is slightly old code (from before RTM), but does something like what you're looking for. It checks some data on an object in a row and then sets the colour of the row accordingly.</p> <p><strong>XAML:</strong></p> <pre><code>&lt;my:DataGrid x:Name="Grid" Grid.Row="1" Margin="5" GridlinesVisibility="None" PreparingRow="Grid_PreparingRow"&gt; &lt;my:DataGrid.Columns&gt; &lt;my:DataGridTextBoxColumn DisplayMemberBinding="{Binding Cheese}" Header="Cheese"&gt;&lt;/my:DataGridTextBoxColumn&gt; &lt;my:DataGridTextBoxColumn DisplayMemberBinding="{Binding Biscuit}" Header="Biscuit"&gt;&lt;/my:DataGridTextBoxColumn&gt; &lt;/my:DataGrid.Columns&gt; &lt;/my:DataGrid&gt; </code></pre> <p><strong>Code:</strong></p> <pre><code>/* Clear the alternating background (e.g. in the constructor) so the custom row colour isn't overridden: */ this.Grid.AlternatingRowBackground = null; private void Grid_PreparingRow(object sender, DataGridRowEventArgs e) { CheesyClass c = e.Row.DataContext as CheesyClass; if (c != null &amp;&amp; c.Cheese == "cheddar") { e.Row.Background = new System.Windows.Media.SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 255, 125, 125)); } } </code></pre>
<p>I've generally written custom ValueConverters for each data type being bound that return Visibility, Colour, etc.</p> <p>This gives a single point where the customisation rules are defined, an approach I've found works very well.</p> <p>Robin's second link describes writing a custom ValueConverter.</p>
<p>I am trying to create an optional association between a couple of tables. I have one table called Invoice. The Invoice table has a FK reference to the Customer table through the CustomerId field. The Invoice table also has a not enforced FK reference to the Project table through the ProjectId field.</p> <p>Is there any way to set up my Linq-To-Sql classes to have an optional association between the Invoice and the Project table?</p> <p>I'd like to be able to pick off the Name of the Project through this association, if one is available.</p> <p><strong>Clarification</strong></p> <p>Is it possible to set this up in the dbml file? I have a partial class that extends the Invoice entity and would like to be able to create a property called ProjectName that wraps Invoice.Project.Name.</p> <p>Just wondering if this is even possible or if I would have to return the left join value in the query.</p>
<p>I'm not sure how you'd do it without enormous, horrible redundancy. The standard way to handle a many-to-many relationship between two tables is via a third table that contains two primary key values, one for the first table, one for the second table, with a unique constraint (read 'index') on the combination of all those columns, and possibly with one or two duplicate (non-unique) indexes on the separate primary keys. In outline:</p> <pre><code>CREATE TABLE Table1 (pkcol1 ... PRIMARY KEY, ...); CREATE TABLE Table2 (pkcol2 ... PRIMARY KEY, ...); CREATE TABLE MtoM_Table1_Table2 ( pkcol1 ... REFERENCES Table1, pkcol2 ... REFERENCES Table2, PRIMARY KEY (pkcol1, pkcol2) ); -- CREATE INDEX fk1_mtom_table1_table2 ON MtoM_Table1_Table2(pkcol1); -- CREATE INDEX fk2_mtom_table1_table2 ON MtoM_Table1_Table2(pkcol2); </code></pre> <p>If your DBMS is intelligent, you can skip the separate index on the leading column of the primary key, since the index on the primary key can also be used when searching for just the leading value.</p> <p>Suppose Table1 and Table2 are the same table (so, in fact, we have just Table1), as in the question; this would normally still require the MtoM_Table1_Table1 mapping table - a separate table from the main table. The mapping table must use distinct names for its two columns, but both columns (or sets of columns) in the mapping table will refer to the PK column(s) in Table1.</p> <pre><code>CREATE TABLE Table1 (pkcol1 ... PRIMARY KEY, ...); CREATE TABLE MtoM_Table1_Table1 ( pkcol1 ... REFERENCES Table1(pkcol1), akcol1 ... REFERENCES Table1(pkcol1), PRIMARY KEY (pkcol1, akcol1) ); -- CREATE INDEX fk1_mtom_table1_table1 ON MtoM_Table1_Table1(pkcol1); -- CREATE INDEX fk2_mtom_table1_table1 ON MtoM_Table1_Table1(akcol1); </code></pre> <p>If you wanted to eliminate the mapping table too, then you would have to have a second column in Table1 to hold the other PK value - call it fkcol1 (for foreign key column). 
This then leaves you with a quandary: what is the primary key? It has to be the combination of pkcol1 and fkcol1. But fkcol1 is supposed to reference the primary key of another row -- so you have a contradiction. Even supposing you managed to avoid that problem (how, exactly?), to have 'many rows' on the referencing side of the many-to-many relationship, you must have multiple rows in the master table with the same data in all columns except fkcol1, but these will reference a number of (more than one, in general) other rows in the table. This is a contradiction plus a nightmare of redundancy, plus you've lost your simple primary key, plus it would be horrible to work out what the heck is going on.</p> <pre><code>CREATE TABLE Table1 ( pkcol1 ... /* PRIMARY KEY */, fkcol1 ... /* FOREIGN KEY REFERENCES Table1(pkcol1) */, ... ); -- CREATE INDEX fk1_table1 ON Table1(pkcol1); -- CREATE INDEX fk2_table1 ON Table1(fkcol1); </code></pre> <p>So, I'm convinced that the only sane answer is "No - you cannot represent both ends of a many-to-many relationship in the same table; you must use a mapping table to retain much chance of anything working 'as usual' in the system".</p>
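If it helps to see the mapping-table approach concretely, here is a minimal sketch using Python's built-in sqlite3 module. All table and column names (person, follows, etc.) are invented for the example; the point is the separate mapping table with a composite primary key, exactly as outlined above.

```python
import sqlite3

# A self-referencing many-to-many relationship ("person follows person")
# modelled with a separate mapping table, per the answer above.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE follows (
        follower_id INTEGER NOT NULL REFERENCES person(id),
        followee_id INTEGER NOT NULL REFERENCES person(id),
        PRIMARY KEY (follower_id, followee_id)
    )""")

conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob"), (3, "Carol")])
# Alice follows Bob and Carol; Bob follows Alice - many-to-many on one table.
conn.executemany("INSERT INTO follows VALUES (?, ?)",
                 [(1, 2), (1, 3), (2, 1)])

rows = conn.execute("""
    SELECT a.name, b.name
    FROM follows f
    JOIN person a ON a.id = f.follower_id
    JOIN person b ON b.id = f.followee_id
    ORDER BY a.name, b.name""").fetchall()
print(rows)  # [('Alice', 'Bob'), ('Alice', 'Carol'), ('Bob', 'Alice')]

# The composite primary key rejects a duplicate pair:
try:
    conn.execute("INSERT INTO follows VALUES (1, 2)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # False
```

The main table keeps its simple single-column primary key, and the mapping table carries the relationship, which is the whole argument of the answer.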
<p>If you are using hibernate-annotations, there is a <code>@ManyToMany</code> annotation; I'm not sure about the XML equivalent. It should appear in the API documentation of your distribution.</p>
<p>We have a stored procedure that runs nightly that in turn kicks off a number of other procedures. Some of those procedures could logically be run in parallel with some of the others.</p> <ul> <li>How can I indicate to SQL Server whether a procedure should be run in parallel or serial &mdash; i.e. kicked off asynchronously or blocking?</li> <li>What would be the implications of running them in parallel, keeping in mind that I've already determined that the processes won't be competing for table access or locks &mdash; just total disk IO and memory. For the most part they don't even use the same tables.</li> <li>Does it matter if some of those procedures are the <em>same</em> procedure, just with different parameters?</li> <li>If I start a pair of procedures asynchronously, is there a good system in SQL Server to then wait for both of them to finish, or do I need to have each of them set a flag somewhere and poll that flag periodically using <code>WAITFOR DELAY</code>?</li> </ul> <p>At the moment we're still on SQL Server 2000.</p> <p>As a side note, this matters because the main procedure is kicked off in response to the completion of a data dump into the server from a mainframe system. The mainframe dump takes all but about 2 hours each night, and we have no control over it. As a result, we're constantly trying to find ways to reduce processing times.</p>
<p>I had to research this recently, so found this old question that was begging for a more complete answer. Just to be totally explicit: <strong>TSQL does <em>not</em></strong> (by itself) <strong>have the ability to launch other TSQL operations asynchronously</strong>.</p> <p>That doesn't mean you don't still have a lot of options (some of them mentioned in other answers):</p> <ul> <li><strong>Custom application</strong>: Write a simple custom app in the language of your choice, using asynchronous methods. Call a SQL stored proc on each application thread.</li> <li><strong>SQL Agent jobs</strong>: Create multiple SQL jobs, and start them asynchronously from your proc using <code>sp_start_job</code>. You can check to see if they have finished yet using the undocumented function <code>xp_sqlagent_enum_jobs</code> as described in <a href="http://www.databasejournal.com/features/mssql/article.php/10894_3491201_2/Detecting-The-State-of-a-SQL-Server-Agent-Job.htm" rel="noreferrer">this excellent article</a> by Gregory A. Larsen. (Or have the jobs themselves update your own JOB_PROGRESS table as Chris suggests.) You would literally have to create a separate job for each parallel process you anticipate running, even if they are running the same stored proc with different parameters.</li> <li><strong>OLE Automation</strong>: Use <code>sp_oacreate</code> and <code>sp_oamethod</code> to launch a new process calling the other stored proc as described in <a href="http://www.databasejournal.com/features/mssql/article.php/3427581/Submitting-A-Stored-Procedure-Asynchronously.htm" rel="noreferrer">this article</a>, also by Gregory A. Larsen.</li> <li><strong>DTS Package</strong>: Create a DTS or SSIS package with a simple branching task flow. DTS will launch tasks in individual spids. 
</li> <li><strong>Service Broker</strong>: If you are on SQL2005+, look into using <a href="http://technet.microsoft.com/en-us/library/ms166104.aspx" rel="noreferrer">Service Broker</a>.</li> <li><strong>CLR Parallel Execution</strong>: Use the CLR commands <code>Parallel_AddSql</code> and <code>Parallel_Execute</code> as described in <a href="http://www.codeproject.com/KB/database/asynchronousTSQL.aspx" rel="noreferrer">this article</a> by Alan Kaplan (SQL2005+ only).</li> <li><strong>Scheduled Windows Tasks</strong>: Listed for completeness, but I'm not a fan of this option.</li> </ul> <p>I don't have much experience with Service Broker or CLR, so I can't comment on those options. If it were me, I'd probably use multiple jobs in simpler scenarios, and a DTS/SSIS package in more complex scenarios.</p> <p><strong>One final comment</strong>: SQL already attempts to parallelize individual operations whenever it can*. This means that running 2 tasks at the same time instead of one after the other is no guarantee that the work will finish sooner. Test carefully to see whether it actually improves anything or not.</p> <p>We had a developer who created a DTS package to run 8 tasks at the same time. Unfortunately, it was only a 4-CPU server :)</p> <p>*Assuming default settings. This can be modified by altering the server's Maximum Degree of Parallelism or Affinity Mask, or by using the MAXDOP query hint.</p>
<p>Create a couple of SQL Server Agent jobs where each one runs a particular proc.</p> <p>Then, from within your master proc, kick off the jobs.</p> <p>The only way of waiting that I can think of is to have a status table that each proc updates when it's finished.</p> <p>Then yet another job could poll that table for total completion and kick off a final proc. Alternatively, you could have a trigger on this table.</p> <p>The memory implications are completely up to your environment.</p> <p><strong>UPDATE:</strong> If you have access to the task system, then you could take the same approach: just have Windows execute multiple tasks, each responsible for one proc. Then use a trigger on the status table to kick off something when all of the tasks have completed.</p> <p><strong>UPDATE2:</strong> Also, if you're willing to create a new app, you could house all of the logic in a single exe.</p>
<p>Python's Decimal doesn't support being constructed from a float; it expects that you convert the float to a string first.</p> <p>This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant digits. So if you have a number that could have as many as 15 decimal places, you need to format as <code>Decimal(&quot;%.15f&quot; % my_float)</code>, which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (<code>Decimal(&quot;%.15f&quot; % 100000.3) == Decimal('100000.300000000002910')</code>).</p> <p>Can someone suggest a good way to convert from float to Decimal, preserving the value as the user has entered it, perhaps limiting the number of significant digits that can be supported?</p>
<h3>Python &lt;2.7</h3> <pre><code>"%.15g" % f </code></pre> <p>Or in Python 3.0:</p> <pre><code>format(f, ".15g") </code></pre> <h3>Python 2.7+, 3.2+</h3> <p>Just pass the float to <code>Decimal</code> constructor directly, like this:</p> <pre><code>from decimal import Decimal Decimal(f) </code></pre>
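To see the difference side by side, using the `100000.3` example from the question:

```python
from decimal import Decimal

f = 100000.3

# Constructing directly from the float captures its exact binary value,
# representation error and all:
exact = Decimal(f)          # 100000.3000000000029... (exact binary value)

# Formatting to 15 significant digits first recovers the number as typed:
clean = Decimal(format(f, ".15g"))

print(clean)           # 100000.3
print(exact == clean)  # False
```

So `Decimal(f)` is exact but carries the float's representation error, while the `".15g"` round trip gives the value the user entered; which one you want depends on whether you need the stored bits or the typed digits.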
<p>You can use JSON to accomplish it</p> <pre><code>import json from decimal import Decimal float_value = 123456.2365 decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal) </code></pre>
<p>Anyone know the <code>&lt;%= %&gt;</code> or <code>Response.Write()</code> code for getting the version of .Net my web app is running on?</p>
<p><code>System.Environment.Version</code> will give you that.</p>
<pre><code>System.Runtime.InteropServices.RuntimeEnvironment.GetSystemVersion() </code></pre>
<p>We have literally hundreds of Access databases floating around the network. Some have light usage, some quite heavy usage, and some no usage whatsoever. What we would like to do is centralise these databases onto a managed database and retain as much as possible of the reports and forms within them.</p> <p>The benefits of doing this would be to have some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.</p> <p>There are no real constraints on the RDBMS (Oracle, MS SQL Server) or the stack it would run on (LAMP, ASP.NET, Java), and there obviously won't be a silver bullet for this. We would like something that can remove the initial grunt work in an automated fashion.</p>
<p>We upsize users to SQL Server (either using the upsize wizard or by hand). It's usually pretty straightforward: replace all the Access tables with linked tables to the SQL Server and keep all the forms/reports/macros in Access. The investment in Access isn't lost and the users can keep going, business as usual. You get the reliability of SQL Server and centralized backups. Keep in mind, we’ve done this for a few large Access databases, not hundreds. I'd do a pilot of a few dozen and see how it works out.</p> <p>UPDATE: I just found this, the SQL Server Migration Assistant; it might be worth a look: <a href="http://www.microsoft.com/sql/solutions/migration/default.mspx" rel="nofollow noreferrer">http://www.microsoft.com/sql/solutions/migration/default.mspx</a></p> <p>Update: Yes, some refactoring will be necessary for poorly designed databases. As for how to handle Access sprawl? I've run into this at companies with lots of technical users (engineers, especially, are the worst for this... and Excel sprawl). We did an audit: after backing up, we deleted any databases that hadn't been touched in over a year. "Owners" were assigned based on the location &amp;/or data in the database. If the database was in "S:\quality\test_dept" then the quality manager and head test engineer had to take ownership of it, or we deleted it (again, after backing it up).</p>
<p>So? Dedicate a server to your Access databases. </p> <p>Now you have the benefit of some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.</p> <p>This is what you were going to do anyway, only you wanted to use a different database engine instead of NTFS. </p> <p>And now you have to force the users onto your server.</p> <p>Well, you can encourage them by telling them that you aren't going to overwrite their data with old backups anymore, because now you will own the data, and you won't do that anymore. </p> <p>Also, you can tell them that their applications will run faster now, because you are going to exclude the folder from on-access virus scanning (you don't do that to your other databases, which is why they are full of sql-injection malware, but these databases won't be exposed to the internet), and planning to turn packet signing off (you won't need that on a dedicated server: it's only for people who put their file-share on their domain-server).</p> <p>Easy upgrade path, improved service to users, greater centralization and control for IT. Everyone's a winner.</p>
<p>When designing an ASP.NET WebForms application, what are some important steps to take (or hacks, if you like the term) to ensure the best possible performance (in terms of speed, stability, and scalability)?</p>
<p>Here's a good Best Practices guide written by Microsoft. Chapter 6 is specific to ASP.NET, but the whole document applies to .NET in general.</p> <ul> <li><a href="http://msdn.microsoft.com/en-us/library/ms998530.aspx" rel="noreferrer">http://msdn.microsoft.com/en-us/library/ms998530.aspx</a></li> </ul>
<p>1 - Turn on content compression in IIS</p> <p>2 - Cache everything you can</p> <p>This will do for the majority of web apps out there.</p>
<p>I am using Delphi 7 and ICS components to communicate with a PHP script and insert some data into a MySQL database.</p> <p>How do I post unicode data using HTTP POST?</p> <p>After using Utf8Encode from the TNT controls, I am posting to this PHP script:</p> <pre><code>&lt;?php echo "Note = ". $_POST['note']; if($_POST['action'] == 'i') { /* * This code will add new notes to the database */ $sql = "INSERT INTO app_notes VALUES ('', '" . mysql_real_escape_string($_POST['username']) . "', '" . mysql_real_escape_string($_POST['note']) . "', NOW(), '')"; $result = mysql_query($sql, $link) or die('0 - Ins'); echo '1 - ' . mysql_insert_id($link); } ?&gt; </code></pre> <p>Delphi code:</p> <pre><code> data := Format('date=%s&amp;username=%s&amp;password=%s&amp;hash=%s&amp;note=%s&amp;action=%s', [UrlEncode(FormatDateTime('yyyymmddhh:nn',now)), UrlEncode(edtUserName.Text), UrlEncode(getMd51(edtPassword.Text)), UrlEncode(getMd51(dataHash)),UrlEncode(Utf8Encode(memoNote.Text)),'i' ]); // try function StrHtmlEncode (const AStr: String): String; from IdStrings HttpCli1.SendStream := TMemoryStream.Create; HttpCli1.SendStream.Write(Data[1], Length(Data)); HttpCli1.SendStream.Seek(0, 0); HttpCli1.RcvdStream := TMemoryStream.Create; HttpCli1.URL := Trim(ActionURLEdit.Text); HttpCli1.PostAsync; </code></pre> <p>But when I post, the unicode value is totally different than the original one that I see in the TNT memo.</p> <p>Is there something I am missing?</p> <p>Also, does anybody know how to do this with Indy?</p> <p>Thanks.</p>
<p>Your example code shows your data coming from a TNT Unicode control. That value will have type <code>WideString</code>, so to get UTF-8 data, you should call <code>Utf8Encode</code>, which will return an <code>AnsiString</code> value. Then call <code>UrlEncode</code> on that value. Make sure <code>UrlEncode</code>'s input type is <code>AnsiString</code>. So, something like this:</p> <pre><code>var data, date, username, passhash, datahash, note: AnsiString; date := FormatDateTime('yyyymmddhh:nn',now); username := Utf8Encode(edtUserName.Text); passhash := getMd51(edtPassword.Text); datahash := getMd51(data); note := Utf8Encode(memoNote.Text); data := Format('date=%s&amp;username=%s&amp;password=%s&amp;hash=%s&amp;note=%s&amp;action=%s', [UrlEncode(date), UrlEncode(username), UrlEncode(passhash), UrlEncode(datahash), UrlEncode(note), 'i' ]); </code></pre> <p>There should be no need to UTF-8-encode the MD5 values since MD5 string values are just hexadecimal characters. However, you should double-check that your <code>getMd51</code> function accepts <code>WideString</code>. Otherwise, you may be losing data before you ever send it anywhere.</p> <p>Next, you have the issue of receiving UTF-8 data in PHP. I expect there's nothing special you need to do there or in MySQL. Whatever you store, you should get back identically later. Send that back to your Delphi program, and decode the UTF-8 data back into a <code>WideString</code>.</p> <p>In other words, your Unicode data <em>will</em> look different in your database because you're storing it as UTF-8. In your database, you're seeing UTF-8-encoded data, but in your TNT controls, you're seeing the regular Unicode characters.</p> <p>So, for instance, if you type the character "ش" into your edit box, that's Unicode character U+0634, Arabic letter sheen. As UTF-8, that's the two-byte sequence 0xD8 0xB4. 
If you store those bytes in your database, and then view the raw contents of the field, you may see characters interpreted as though those bytes are in some ANSI encoding. One possible interpretation of those bytes is as the two-character sequence "Ø´", which is the Latin capital letter o with stroke followed by an acute accent.</p> <p>When you load that string back out of your database, it's still encoded as UTF-8, just as it was when you stored it, so you will need to decode it. As far as I can tell, neither PHP nor MySQL does any massaging of your data, so whatever UTF-8 character you give them will be returned to you as-is. If you are using the data in Delphi, then call <code>Utf8Decode</code>, which is the complement to the <code>Utf8Encode</code> function that you called previously. If you are using the data in PHP, then you might be interested in PHP's <code>utf8_decode</code> function, although that converts to ISO-8859-1, which doesn't include our example Arabic character. Stack Overflow already has a few questions related to using UTF-8 in PHP, so I won't attempt to add to them here. For example:</p> <ul> <li><a href="https://stackoverflow.com/questions/140728/best-practices-in-php-and-mysql-with-international-strings">Best practices in PHP and MySQL with international strings</a></li> <li><a href="https://stackoverflow.com/questions/279170/utf-8-all-the-way-through">UTF-8 all the way through…</a></li> </ul>
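The encoding round trip described above is easy to check in a few lines of Python; the byte values are the same regardless of which language produces them:

```python
# U+0634 ARABIC LETTER SHEEN encodes to two bytes in UTF-8:
sheen = "\u0634"                      # the character "ش"
encoded = sheen.encode("utf-8")
print(encoded)                        # b'\xd8\xb4'

# Viewing those raw bytes as Latin-1 (an ANSI encoding) produces the
# mojibake described above - capital O with stroke plus an acute accent:
print(encoded.decode("latin-1"))      # Ø´

# Decoding the same bytes back as UTF-8 recovers the original character:
print(encoded.decode("utf-8") == sheen)  # True
```

This is exactly what happens when a database viewer interprets stored UTF-8 bytes with an ANSI code page: the data is intact, only the interpretation differs.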
<p>I would expect (without knowing for sure) that you'd have to output them as &amp;#nnnnn entities (with the number in decimal rather than hex ... I think)</p>
<p>I have to preface this with the fact that I love jQuery as a JavaScript language extension and YUI as a rich set of free controls. So here is my question: are there going to be any problems down the line if I mix jQuery and YUI together in an MVC app I am working on?</p> <p>I want to use jQuery for the heavy lifting on the DOM and I want to use YUI for the rich user control set. This is an internal application and will never see the light of day outside of the organization I work for.</p> <p>If anybody has experience with mixing these two frameworks, I would love to hear what your experience has been.</p>
<p>Speaking from some experience in developing a small tool myself, I've used YUI's rich control set with Prototype for DOM manipulation in the past and experienced no issues. Admittedly, this was a small tool that didn't use a wide array of the controls.</p> <p>Even so, I'm always hesitant to use multiple frameworks on my web projects; however, if you're only using jQuery's DOM functionality and YUI's control functionality, then I think you're fine - there's not really a conflict of interest there. Plus, with jQuery's noConflict() mode and YUI's namespacing, the two frameworks really shouldn't trump one another.</p>
<p>I think it shouldn't be a problem, because YUI is all contained in "namespaces", so it shouldn't break anything (I don't know if jQuery breaks things around YUI, but as long as it doesn't extend basic objects there shouldn't be any problem). I remember Dustin Diaz came out with a library similar to jQuery built around YUI: <a href="http://www.dustindiaz.com/introducing-ded-chain/" rel="nofollow noreferrer">http://www.dustindiaz.com/introducing-ded-chain/</a> I tried to download it just now, but the site is offline.</p>
<p>I have just set up my new 3D printer. However, the build plate keeps slipping while printing. I tried using binder clips to keep the plate in place, but this makes the build plate unlevel, messing up my prints. What are some other ways to prevent the build plate from slipping? I was considering using duct tape, but am not sure whether this would work.</p>
<p>I use Kapton tape to fasten the glass plates to the heated beds on two of my printers, one a Prusa i3 clone, the other a CoreXY. The tape is able to withstand higher temperatures, and is very thin, so it doesn't have the drawbacks of limiting the print area or a high chance of the nozzle hitting the clips.</p> <p><a href="https://i.stack.imgur.com/4Mjuj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Mjuj.jpg" alt="enter image description here" /></a></p>
<p>I'm not sure what &quot;the build plate keeps slipping while printing. I tried to use binder clips to keep the plate in place&quot; means, but if you have a build surface on top of a heated bed, you can look for &quot;silicone thermal pad 0.5mm&quot;.</p> <p>You can put it between the build surface and the lower bed, so that it increases friction and there won't be any slipping, even without clips.</p> <p><a href="https://i.stack.imgur.com/ppiOT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ppiOT.jpg" alt="enter image description here" /></a></p> <p>You can also look for &quot;thermal conductive rubber silicone cloth 0.3mm&quot;, which is fiberglass reinforced and thinner, though possibly with worse thermal conductivity. It is, however, more tear resistant, which helps when you remove the build surface.</p> <p><a href="https://i.stack.imgur.com/JTBCY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTBCY.jpg" alt="enter image description here" /></a></p>
<p>I have several databases where the transaction log (.LDF) is many times larger than the database file (.MDF).</p> <p>What can I do to automatically shrink these or keep them from getting so large?</p>
<p>This should do the job:</p> <pre><code>use master go dump transaction &lt;YourDBName&gt; with no_log go use &lt;YourDBName&gt; go DBCC SHRINKFILE (&lt;YourDBNameLogFileName&gt;, 100) -- where 100 is the size you may want to shrink it to in MB; change it to your needs go -- then you can run the following to check that all went fine dbcc checkdb(&lt;YourDBName&gt;) </code></pre> <p><strong>A word of warning</strong></p> <p>You would only really use this on a test/development database where you do not need a proper backup strategy, as dumping the log will result in losing transaction history. On live systems you should use the solution suggested by <a href="https://stackoverflow.com/questions/163098/how-do-i-shrink-the-transaction-log-on-ms-sql-2000-databases/163117#163117">Cade Roux</a>.</p>
<p>Try sp_force_shrink_log, which you can find here: <a href="http://www.rectanglered.com/sqlserver.php" rel="nofollow noreferrer">http://www.rectanglered.com/sqlserver.php</a></p>
<p>In my question about the reconfiguration delay when switching between Access 2003 and 2007, the comment was made:</p> <p><em>Btw, you can't avoid the reconfiguration between Access 2007 and earlier versions. Access 2007 uses some of the same registry keys as earlier versions and they have to be rewritten when opening Access 2007.</em></p> <p>If this is so, is it actually safe to be running/developing databases in both versions at the same time? Do the registry changes affect the operation of Access once it has started up, for example when recompiling/saving changes to objects?</p>
<p>It works most of the time, but it's not perfectly safe, which is why Microsoft refuses to support multiple installations of Microsoft Office on the same PC. The recommended solution is to set up a virtual machine and install the second Microsoft Office version on the virtual machine. Then you can switch from one version of Access to the other without them interfering with one another (and with no switching wait!)</p> <p>Microsoft offers a free download of Virtual PC 2007 in both 32-bit and 64-bit versions:</p> <p><a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=04d26402-3199-48a3-afa2-2dc0b40a73b6&amp;DisplayLang=en" rel="nofollow noreferrer">http://www.microsoft.com/downloads/details.aspx?FamilyID=04d26402-3199-48a3-afa2-2dc0b40a73b6&amp;DisplayLang=en</a></p> <p>Here's the service pack:</p> <p><a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=28c97d22-6eb8-4a09-a7f7-f6c7a1f000b5&amp;DisplayLang=en" rel="nofollow noreferrer">http://www.microsoft.com/downloads/details.aspx?FamilyID=28c97d22-6eb8-4a09-a7f7-f6c7a1f000b5&amp;DisplayLang=en</a></p>
<p>It seems to me that the instance of Access you open will inherit the registry settings at the time it is opened. So, if you open A2K7, you'll get the registry settings that it writes in its "configuring Office" procedure. If, while A2K7 is still open, you open A2K3, it will reconfigure the registry settings and inherit those for its session. This will have no effect on the already-running instance of A2K7.</p> <p>The only possible exception would be if there are some registry keys that the "configuring..." process changes that Access doesn't read upon opening, but later in the session. I have strong doubts that MS would ever design things that way. Professional Access developers have been dealing with this kind of thing since MS introduced the MS Installer (first seen by most people with Office 2000), and the A2K7 issues are only slightly worse than with previous versions (though on Vista, it's more complex because of the way Vista handles registry changes). The fact that MS gets the vapors over contemplating multiple versions of Access on a single PC does not mean that it's actually dangerous -- it shows only that they don't want to devote resources to supporting that scenario.</p>
<p>In C#, what makes a field different from a property, and when should a field be used instead of a property?</p>
<p>Properties expose fields. Fields should (almost always) be kept private to a class and accessed via get and set properties. Properties provide a level of abstraction allowing you to change the fields while not affecting the external way they are accessed by the things that use your class.</p> <pre><code>public class MyClass { // this is a field. It is private to your class and stores the actual data. private string _myField; // this is a property. When accessed it uses the underlying field, // but only exposes the contract, which will not be affected by the underlying field public string MyProperty { get { return _myField; } set { _myField = value; } } // This is an AutoProperty (C# 3.0 and higher) - which is a shorthand syntax // used to generate a private field for you public int AnotherProperty { get; set; } } </code></pre> <p>@Kent points out that Properties are not required to encapsulate fields, they could do a calculation on other fields, or serve other purposes.</p> <p>@GSS points out that you can also do other logic, such as validation, when a property is accessed, another useful feature.</p>
<p>After reading all the answers, I didn't find one about concurrent access.</p> <p>Let's say you have an API endpoint that can be accessed asynchronously, you are using a static field to store data, and you need exclusive access to that static field at all times.</p> <p>To reproduce this sample you will need a load test that hits the endpoint many times simultaneously.</p> <p>When using a <strong>static int counter field</strong>, the endpoint got the same value in two or more accesses. <a href="https://i.stack.imgur.com/QAZlM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QAZlM.png" alt="enter image description here" /></a></p> <p>When using a <strong>static int counter property</strong>, the endpoint handles the concurrency and always gets a new value of the counter.</p> <p><a href="https://i.stack.imgur.com/LMRxJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LMRxJ.png" alt="enter image description here" /></a></p> <p>This does not answer the question at all, but this behaviour must be taken into account when using one or the other.</p>
37,696
<p>I have a simple Word to Pdf converter as an MSBuild Task. The task takes Word files (ITaskItems) as input and Pdf files (ITaskItems) as output. The script uses a Target transform for conversion:</p> <pre><code>&lt;Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"&gt; &lt;UsingTask AssemblyFile="$(MSBuildExtensionsPath)\MyTasks.dll" TaskName="MyTasks.DocToPdf" /&gt; &lt;Target Name="Build" DependsOnTargets="Convert" /&gt; &lt;Target Name="Convert" Inputs="@(WordDocuments)" Outputs="@(WordDocuments-&gt;'%(FileName).pdf')"&gt; &lt;DocToPdf Inputs="@(WordDocuments)" Outputs="%(FileName).pdf"&gt; &lt;Output TaskParameter="ConvertedFiles" ItemName="PdfDocuments" /&gt; &lt;/DocToPdf&gt; &lt;/Target&gt; &lt;ItemGroup&gt; &lt;WordDocuments Include="One.doc" /&gt; &lt;WordDocuments Include="SubDir\Two.doc" /&gt; &lt;WordDocuments Include="**\*.doc" /&gt; &lt;/ItemGroup&gt; &lt;/Project&gt; </code></pre> <p>What's happening is that SubDir\Two.doc gets converted on every incremental build, One.doc does not (ie MSBuild correctly skips that file because it was already converted). If I use the recursive items spec (the third one above), I get the same behaviour (ie. One.doc only gets converted if the PDF is out of date or missing, but all documents in subdirectories always get converted regardless).</p> <p>What am I doing wrong here?</p>
<p>I found the problem. It turns out that I had some logic in the Task that would turn any relative path specified for a PDF file into an absolute path. Once I removed that and changed the script to this:</p> <pre><code> &lt;Target Name="Convert" Inputs="@(WordDocuments)" Outputs="@(WordDocuments-&gt;'%(RelativeDir)%(FileName).pdf')"&gt; &lt;DocToPdf Inputs="%(WordDocuments.Identity)" Outputs="%(RelativeDir)%(FileName).pdf"&gt; &lt;Output TaskParameter="ConvertedFiles" ItemName="PdfDocuments" /&gt; &lt;/DocToPdf&gt; &lt;/Target&gt; </code></pre> <p>I got the behaviour I expected.</p>
<p>Here's my example of a task that performs incremental builds on items found recursively through subdirectories:</p> <pre><code> &lt;Target Name="Build" Inputs="@(RequestTextFiles)" Outputs="@(RequestTextFiles -&gt; '%(Rootdir)%(Directory)%(Filename).out')"&gt; &lt;DoSomething SourceFiles="@(RequestTextFiles)" /&gt; &lt;/Target&gt; </code></pre> <p>This maps each input file 1:1 to an output file with the same name in the same path, but with a different extension, namely 'out' in this case.</p>
22,778
<p>Since Rails is not multithreaded (yet), it seems like a threaded web framework would be a better choice for a Facebook application. (The reason being that each Rails process can only handle one request at a time, and Facebook actions tend to be slow because there is a lot of network communication between your app and Facebook.)</p> <p>Has anyone used Merb to write a Facebook application? Is there a port of Facebooker (the Facebook plugin for Rails) to Merb?</p>
<p>We've used merb_facebooker in one of our projects (<a href="http://www.rockthevote.com/" rel="nofollow noreferrer">Rock the Vote</a>), and it worked out pretty well. Testing Facebook apps is quite annoying, as you don't have control of the middleware, so watch out for your expectations of the FB API and make sure you validate as much of them as possible early in the development stages (not trying out all the things we needed to do with fbML early on brought a few headaches).</p>
<p>Have you looked at <a href="http://rubyforge.org/projects/starling/" rel="nofollow noreferrer">Starling</a>? It's the server used by twitter to handle their messages. It's a persistent queue server that allows you to delegate jobs to workers. </p>
11,493
<p>I am currently working on a leave application (which is a subset of my e-scheduler project) and I have my database design as follows:</p> <pre><code>event (event_id, dtstart, dtend... *follows icalendar standard*) event_leave (event_id*, leave_type_id*, total_days) _leave_type (leave_type_id, name, max_carry_forward) _leave_allocation (leave_allocation_id, leave_type_id*, name, user_group_id, total_days, year) _leave_carry_forward(leave_carry_forward_id, leave_type_id*, user_id, year) </code></pre> <p>Does anyone here in stackoverflow also working on an e-leave app? mind to share your database design as I am looking for a better design than mine. The problem with my current design only occurs at the beginning of the year when the system is calculating the number of days that can be carried forward. </p> <p>In total I would have to run 1 + {$number_of users} * 2 queries (the first one to find out the number of allocation rules and the maximum carry forward quota. Then for each user, I need to find out the balance, and then to insert the balance to the database)</p>
<p>I'm not following the schema very well (it looks like each leave_type would have a carry forward? There's no user on the event* tables?) but you should be able to dynamically derive the balance at any point in time - including across years. </p> <p>AAMOF, normalization rules would require you to be able to <em>derive</em> the balance. Whether you then choose to <em>denormalize</em> for performance is up to you, but the design should support the calculated query. Given that, then calculating the year end carryforward is a single set based query.</p> <p>Edit: I had to change the schema a bit to accommodate this, and I chose to normalize to make the logic easier - but you can insert denormalization along the way for performance if you need to:</p> <p>First the tables that are important for this scenario...hopefully my pseudo-syntax will make sense:</p> <pre><code>User { User_Id (PK) } // Year may be a tricky business logic issue here...Do you charge the Start or End year // if the event crosses a year boundary? Or do you just do 2 different events? // You want year in this table, though, so you can do a FK reference to Leave_Allocation // Some RDBMS will let you do a FK from a View, though, so you could do that Event { Event_Id (PK), User_Id, Leave_Type_Id, Year, DtStart, DtEnd, ...
// Ensure that events are charged to leave the user has FK (User_Id, Leave_Type_Id, Year)-&gt;Leave_Allocation(User_Id, Leave_Type_Id, Year) } Leave_Type { Leave_Type_Id, Year, Max_Carry_Forward // Max_Carry_Forward would probably change per year PK (Leave_Type_Id, Year) } // Starting balance for each leave_type and user, per year // Not sure the name makes the most sense - I think of Allocated as used leave, // so I'd probably call this Leave_Starting_Balance or something Leave_Allocation { Leave_Type_Id (FK-&gt;Leave_Type.Leave_Type_Id), User_Id (FK-&gt;User.User_Id), Year, Total_Days PK (Leave_Type_Id, User_Id, Year) // Ensure that leave_type is defined for this year FK (Leave_Type_Id, Year)-&gt;Leave_Type(Leave_Type_Id, Year) } </code></pre> <p>And then, the views (which is where you may want to apply some denormalization):</p> <pre><code>/* Just sum up the Total_Days for an event to make some other calcs easier */ CREATE VIEW Event_Leave AS SELECT Event_Id, User_Id, Leave_Type_Id, DATEDIFF(d, DtStart, DtEnd) as Total_Days, Year FROM Event /* Subtract sum of allocated leave (Event_Leave.Total_Days) from starting balance (Leave_Allocation) */ /* to get the current unused balance of leave */ CREATE VIEW Leave_Current_Balance AS SELECT Leave_Allocation.User_Id, Leave_Allocation.Leave_Type_Id, Leave_Allocation.Year, Leave_Allocation.Total_Days - SUM(Event_Leave.Total_Days) as Leave_Balance FROM Leave_Allocation LEFT OUTER JOIN Event_Leave ON Leave_Allocation.User_Id = Event_Leave.User_Id AND Leave_Allocation.Leave_Type_Id = Event_Leave.Leave_Type_Id AND Leave_Allocation.Year = Event_Leave.Year GROUP BY Leave_Allocation.User_Id, Leave_Allocation.Leave_Type_Id, Leave_Allocation.Year, Leave_Allocation.Total_Days </code></pre> <p>Now, our Leave CarryForward query is just the minimum of current balance or maximum carryforward as of midnight on 1/1.</p> <pre><code> SELECT User_Id, Leave_Type_Id, Year, /* This is T-SQL syntax...your RDBMS may be different, but should be
able to do the same thing */ /* If not, you'd do a UNION ALL to Max_Carry_Forward and select MIN(BalanceOrMax) */ CASE WHEN Leave_Balance &lt; Max_Carry_Forward THEN Leave_Balance ELSE Max_Carry_Forward END as Leave_Carry_Forward FROM Leave_Current_Balance JOIN Leave_Type ON Leave_Current_Balance.Leave_Type_Id = Leave_Type.Leave_Type_Id /* This assumes max_carry_forward is how much you can carry_forward into the next year */ /* eg,, a max_carry_forward of 300 hours for year 2008, means I can carry_forward up to 300 */ /* hours into 2009. Otherwise, you'd join on Leave_Current_Balance.Year + 1 if it's how much */ /* I can carry forward into *this* year. */ AND Leave_Current_Balance.Year = Leave_Type.Year </code></pre> <p>So, at the end of the year, you'd insert the CarryForward balances back into LeaveAllocation with the new year.</p>
<p>There is always a better design!! </p> <p>Does your current design work? How many users do you expect (ie does it matter you would have to run x thousand queries).</p> <p>If the problem of the current design is only at the beginning of the year then perhaps you could live with it!</p> <p>Cheers</p> <p>NZS</p>
21,982
<p>I’m trying to create a URL string that works like this:</p> <pre><code>/app/process/example.com/index.html </code></pre> <p>So in other words,</p> <pre><code>/app/process/$URL </code></pre> <p>I then retrieve the URL with </p> <pre><code>$this-&gt;uri-&gt;segment(3); </code></pre> <p>The forward slashes in the URL will of course be a problem when accessing URI segments, so I’ll go ahead and URL-encode the URL portion:</p> <pre><code>/app/process/example.com%2Findex.html </code></pre> <p>.. but now I just get a 404 saying ...</p> <pre><code>Not Found The requested URL /app/process/example.com/index.html was not found on this server. </code></pre> <p>It appears that my URL encoding of forward slashes breaks CI’s URI parser.</p> <p>What can I do to get around this problem?</p>
<p>I think the error message you are getting is not from codeigniter but from your web server. </p> <p>I replicated this using Apache2 without even using CodeIgniter: I created a file index.php, and then accessed <code>index.php/a/b/c</code> - it worked fine. If I then tried to access <code>index.php/a/b/c%2F</code> I got a 404 from Apache.</p> <p>I solved it by adding to my Apache configuration:</p> <p><code>AllowEncodedSlashes On</code> </p> <p>See <a href="http://httpd.apache.org/docs/2.2/mod/core.html#allowencodedslashes" rel="noreferrer">the documentation</a> for more information</p> <p>Once you've done this you might need to fiddle around with <code>$config['permitted_uri_chars']</code> in codeigniter if it is still not working - you may find the slashes get filtered out</p>
<p>Change your permitted_uri_chars index in config file</p> <pre><code>$config['permitted_uri_chars'] = 'a-z 0-9~%.:_\-'; </code></pre>
40,180
<p>I have an old Excel 4 macro that I use to run monthly invoices. It is about 3000 lines and has many Excel 5 Dialog Box sheets (for dialog boxes). I would like to know what the easiest way would be to change it into VBA and if it is worth it. Also, if once I have converted it to VBA, how to create a standalone application out of it?</p>
<p>I have attempted this before and in the end you do need to rewrite it as Biri has said.</p> <p>I did quite a bit of this work when our company was upgrading from Windows NT to Windows XP. I often found that it is easier not to look at the old code at all and start again from scratch. You can spend so much time trying to work out what the Excel 4 did especially around the "strange" dialog box notation. In the end if you know what the inputs are and the outputs then it often more time effective and cleaner to rewrite.</p> <p>Whether to use VBA or not is in some ways another question but VBA is rather powerful and extensible so although I would rather use other tools like .NET in many circumstances it works well and is easy to deploy. </p> <p>In terms of is it worth it? If you could say that you were never ever going to need to change your Excel 4 macro again then maybe not. But in Business there is always something that changes eg tax rates, especially end of year things. Given how hard it is to find somone to support Excel 4 and even find documentation on it I would say it is risky not to move to VBA but that is something to balance up.</p>
<p>Here is a good article about this topic: <a href="http://msdn.microsoft.com/en-us/library/aa192490.aspx" rel="nofollow noreferrer">http://msdn.microsoft.com/en-us/library/aa192490.aspx</a></p> <p>You can download VB2008-Express for free at: <a href="http://www.microsoft.com/express/default.aspx" rel="nofollow noreferrer">http://www.microsoft.com/express/default.aspx</a></p>
11,617
<p>Being stuck with a legacy database schema that no longer reflects your data model is every developer's nightmare. Yet with all the talk of refactoring code for maintainability I have not heard much of refactoring outdated database schemas. </p> <p>What are some tips on how to transition to a better schema without breaking all the code that relies on the old one? I will propose a specific problem I am having to illustrate my point but feel free to give advice on other techniques that have proven helpful - those will likely come in handy as well.</p> <hr> <p>My example:</p> <p>My company receives and ships products. Now a product receipt and a product shipment have some very different data associated with them so the original database designers created a separate table for receipts and for shipments. </p> <p>In my one year working with this system I have come to the realization that the current schema doesn't make a lick of sense. After all, both a receipt and a shipment are basically a transaction, they each involve changing the amount of a product, at heart only the +/- sign is different. Indeed, we frequently need to find the total amount that the product has changed over a period of time, a problem for which this design is downright intractable. </p> <p>Obviously the appropriate design would be to have a single Transactions table with the Id being a foreign key of either a ReceiptInfo or a ShipmentInfo table. Unfortunately, the wrong schema has already been in production for some years and has hundreds of stored procedures, and thousands of lines of code written off of it. How then can I transition the schema to work correctly?</p>
<p>Here's a whole catalogue of database refactorings:</p> <p><a href="http://databaserefactoring.com/" rel="noreferrer">http://databaserefactoring.com/</a></p>
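<p>For the receipts/shipments example in the question, one transition pattern from that catalogue is to leave the legacy tables in place and expose the new shape as a view first, so existing stored procedures keep working while new code targets the unified model. A rough sketch (all table and column names below are assumptions, not from the question's actual schema):</p> <pre><code>-- Unified view over the legacy tables; old code keeps using
-- Receipts/Shipments while new code queries Transactions.
CREATE VIEW Transactions AS
SELECT ReceiptId   AS TransactionId,
       ProductId,
       Quantity    AS QuantityChange,   -- receipts add stock
       ReceiptDate AS TransactionDate
FROM   Receipts
UNION ALL
SELECT ShipmentId,
       ProductId,
       -Quantity,                       -- shipments subtract stock
       ShipmentDate
FROM   Shipments;

-- The "total change over a period" query then becomes tractable:
SELECT ProductId, SUM(QuantityChange) AS NetChange
FROM   Transactions
WHERE  TransactionDate BETWEEN @Start AND @End
GROUP  BY ProductId;
</code></pre> <p>Once callers have migrated to the view, the underlying tables can be merged for real and the view dropped or redefined.</p>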
<p>Is all data access limited to stored procedures? If not, the task could be nearly impossible. If so, you just have to make sure your data migration scripts work well transitioning from the old to the new schema, and then make sure your stored procedures honor theur inputs and outputs. </p> <p>Hopefully none of them have "select *" queries. If they do, use 'sp_help tablename' to get the complete list of columns, copy that out and replace each * with the complete column list, just to make sure you don't break client code.</p> <p>I would recommend making the changes gradually, and do lots of integration testing. It's hard to do a significant remodel without introducing a few bugs.</p>
12,998
<p>Is it possible to read damaged media (cd, hdd, dvd,...) even if windows explorer bombs out?</p> <p>What I mean to ask is, whether there is a set of APIs or something that can access the disk at a very low level (below explorer?) and read whatever can be retrieved even if it is only partial, especially if you can still see the file is there from explorer, but can't do anything with it because it is damaged somehow (scratch on cd, etc)?</p>
<p>The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.</p> <p>Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD with by doing "eject/reload/resume" a few times.</p> <p>In fact, this is what a <a href="http://aib.ftuff.com/get.php?file=filecopy.exe" rel="nofollow noreferrer">utility I wrote</a> does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)</p> <p>One step lower would be opening the raw partition (i.e. disk image) by passing a string such as "\.\F:" (note: slashes are literal here) to CreateFile. It would allow you to read raw sectors from a drive, but reconstructing files from that data would be hard.</p> <p>In fact, the "\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.</p> <p>Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do on them.</p> <p>Note: The backslashes seem to get lost on the way to the web page; you need to pass "backslash backslash dot backslash DeviceName" to CreateFile. You need to escape them, too, of course.</p>
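<p>A minimal sketch of the raw-partition approach described above, in C (Windows-only, error handling trimmed; the drive letter is just an example):</p> <pre><code>#include &lt;windows.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    // Note the escaped backslashes: the path passed to CreateFile is \\.\F:
    HANDLE h = CreateFileA("\\\\.\\F:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    // Raw volume reads must be sector-aligned; 512 bytes is the classic
    // sector size (query IOCTL_DISK_GET_DRIVE_GEOMETRY to be sure).
    BYTE sector[512];
    DWORD bytesRead = 0;
    if (ReadFile(h, sector, sizeof(sector), &amp;bytesRead, NULL))
        printf("Read %lu bytes of the first sector\n", bytesRead);

    CloseHandle(h);
    return 0;
}
</code></pre>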
<p>I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.</p>
24,150
<p>I have a function inside a loop inside a function. The inner function acquires and stores a large vector of data in memory (as a global variable... I'm using "R" which is like "S-Plus"). The loop loops through a long list of data to be acquired. The outer function starts the process and passes in the list of datasets to be acquired.</p> <pre><code>for (dataset in list_of_datasets) { for (datachunk in dataset) { &lt;process datachunk&gt; &lt;store result? as vector? where?&gt; } } </code></pre> <p>I programmed the inner function to store each dataset before moving to the next, so all the work of the outer function occurs as side effects on global variables... a big no-no. Is this better or worse than collecting and returning a giant, memory-hogging vector of vectors? Is there a superior third approach?</p> <p>Would the answer change if I were storing the data vectors in a database rather than in memory? Ideally, I'd like to be able to terminate the function (or have it fail due to network timeouts) without losing all the information processed prior to termination.</p>
<p>use variables in the outer function instead of global variables. This gets you the best of both approaches: you're not mutating global state, and you're not copying a big wad of data. If you have to exit early, just return the partial results.</p> <p>(See the "Scope" section in the R manual: <a href="http://cran.r-project.org/doc/manuals/R-intro.html#Scope" rel="noreferrer">http://cran.r-project.org/doc/manuals/R-intro.html#Scope</a>)</p>
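<p>A sketch of that pattern in R (the worker function name is made up for illustration):</p> <pre><code>process_all &lt;- function(list_of_datasets) {
  results &lt;- list()    # lives in the outer function's scope, not globally

  for (dataset in list_of_datasets) {
    for (datachunk in dataset) {
      chunk_result &lt;- process_chunk(datachunk)   # hypothetical worker
      results[[length(results) + 1]] &lt;- chunk_result
    }
  }

  results   # on early termination you can return the partial list instead
}
</code></pre>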
<p>It's tough to say definitively without knowing the language/compiler used. However, if you can simply pass a pointer/reference to the object that you're creating, then the size of the object itself has nothing to do with the speed of the function calls. Manipulating this data down the road could be a different story.</p>
10,511
<p>We have a highly specialized DAL which sits over our DB. Our apps need to use this DAL to correctly operate against this DB.</p> <p>The generated DAL (which sits on some custom base classes) has various 'Rec' classes (Table1Rec, Table2Rec) each of which represents the record structure of a given table.</p> <p>Here is a sample Pseudo-class...</p> <pre><code>Public Class SomeTableRec Private mField1 As String Private mField1isNull As Boolean Private mField2 As Integer Private mField2isNull As Boolean Public Sub New() mField1isNull = True mField2isNull = True End Sub Public Property Field1() As String Get Return mField1 End Get Set(ByVal value As String) mField1 = value mField1isNull = False End Set End Property Public ReadOnly Property Field1isNull() As Boolean Get Return mField1isNull End Get End Property Public Property Field2() As Integer Get Return mField2 End Get Set(ByVal value As Integer) mField2 = value mField2isNull = False End Set End Property Public ReadOnly Property Field2isNull() As Boolean Get Return mField2isNull End Get End Property End Class </code></pre> <p>Each class has properties for each of the fields... 
Thus I can write...</p> <pre><code>Dim Rec As New Table1Rec Rec.Field1 = "SomeString" Rec.Field2 = 500 </code></pre> <p>Where a field can accept a NULL value, there is an additional property which indicates if the value is currently null.</p> <p>Thus....</p> <pre><code>Dim Rec As New Table1Rec Rec.Field1 = "SomeString" If Rec.Field1isNull Then ' This clearly is not true End If If Rec.Field2isNull Then ' This will be true End If </code></pre> <p>This works because the constructor of the class sets all Null properties to True, and setting any field property causes the equivalent Null property to be set to False.</p> <p>I have recently had the need to expose my DAL over the web through a web service (which I of course intend to secure) and have discovered that while the structure of the 'Rec' class remains intact over the web... all logic is lost.</p> <p>If someone were to run the previous piece of code remotely, they would notice that neither condition would prove true, as there is no client-side code which sets the Null flags to True.</p> <p><strong>I get the feeling I have architected this all wrong, but cannot see how I should improve it.</strong></p> <p><strong>What is the correct way to architect this?</strong></p>
<p>Not sure if I fully understand the question, but you can have nullable data types in XML. </p> <p>So this...</p> <pre><code>Imports System.Web Imports System.Web.Services Imports System.Web.Services.Protocols &lt;WebService(Namespace:="http://tempuri.org/")&gt; _ &lt;WebServiceBinding(ConformsTo:=WsiProfiles.BasicProfile1_1)&gt; _ &lt;Global.Microsoft.VisualBasic.CompilerServices.DesignerGenerated()&gt; _ Public Class Testing Inherits System.Web.Services.WebService &lt;WebMethod()&gt; _ Public Function GetObjects() As Generic.List(Of TestObject) Dim list As New Generic.List(Of TestObject) list.Add(New TestObject(Nothing, "Empty ID Object")) list.Add(New TestObject(1, "Full ID Object")) list.Add(New TestObject(2, Nothing)) Return list End Function Public Class TestObject Public Sub New() _name = String.Empty _id = Nothing End Sub Public Sub New(ByVal id As Nullable(Of Integer), ByVal name As String) _name = name _id = id End Sub Private _name As String Public Property Name() As String Get Return _name End Get Set(ByVal value As String) _name = value End Set End Property Private _id As Nullable(Of Integer) Public Property ID() As Nullable(Of Integer) Get Return _id End Get Set(ByVal value As Nullable(Of Integer)) _id = value End Set End Property End Class End Class </code></pre> <p>outputs this (with nullable areas)</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt; &lt;ArrayOfTestObject xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://tempuri.org/"&gt; &lt;TestObject&gt; &lt;Name&gt;Empty ID Object&lt;/Name&gt; &lt;ID xsi:nil="true" /&gt; &lt;/TestObject&gt; &lt;TestObject&gt; &lt;Name&gt;Full ID Object&lt;/Name&gt; &lt;ID&gt;1&lt;/ID&gt; &lt;/TestObject&gt; &lt;TestObject&gt; &lt;ID&gt;2&lt;/ID&gt; &lt;/TestObject&gt; &lt;/ArrayOfTestObject&gt; </code></pre>
<p>Web services are designed to expose operation(methods) &amp; data contracts but not internal implementation logic. This is a "good thing" in the world of service-oriented architecture. The scenario you describe is a remote/distributed object architecture. Web services will not support what you are trying to do. Please see this <a href="https://stackoverflow.com/questions/187006/how-to-expose-objects-through-wcf#187079" title="Using WCF">post</a> for more information.</p>
28,977
<p>I recently installed Windows 2008 Server to replace a crashed hard drive on a web server with a variety of web pages including several classic ASP applications. One of these makes extensive use of file uploads using a com tool that has worked for several years.</p> <p>More information: </p> <p>My users did not provide good information in that very small zips (65K) work once I tested it myself, but larger ones do not. I did not test for the cut-off, but 365K fails. And it is not only zip files after all. A 700K doc file failed also. ErrorCode 800a0035.</p>
<p>There is a size limit that you will probably need to set - what's the 500 error?</p>
<p>There is a size limit that you will probably need to set - what's the 500 error?</p>
9,519
<p>I'm looking for solutions that will allow me to have a Live Video feed setup at one location and, via the internet, watch the video stream at a remote location.</p> <p>The goal is to have a live high quality video training session that remote users can watch from their own locations.</p> <p>Any technology will work. High Quality is a must. I'm most familiar with C# and Microsoft solutions.</p> <p>Here is how I understand it might work:</p> <p><strong>For Provider</strong></p> <ol> <li>Get Camera (I currently have a high definition video camera)</li> <li>Plug Camera into computer (How, video capture card?)</li> <li>Use Software to Capture video (What software?)</li> <li>Use Software to stream to client (What software?)</li> </ol> <p><strong>For Client</strong></p> <ol start="5"> <li>Use Software to point to Video Source (What Software?)</li> <li>Plug Computer into Projector</li> </ol>
<p>I think the solution for you is DaCast, a self-service platform for live streaming. When you sign up for an account, you are 20 minutes away from your first stream. As long as you have a webcam, you can easily stream live over the internet. You can simply send a link to your DaCast channel page to all viewers, and they can watch the stream from any location. Also, you can embed the code in your own web site or blog so your viewers can watch it there, too!</p>
<p>Have you tried Video Lan Client (VLC) They have ability to stream/capture video from either a video camera or video file.</p> <p><a href="http://www.videolan.org/vlc/" rel="nofollow noreferrer">http://www.videolan.org/vlc/</a></p> <p>(there is a checkbox at the bottom of the new dialog for streaming/saving options)</p> <p><a href="http://ashishware.com/Video.shtml" rel="nofollow noreferrer">http://ashishware.com/Video.shtml</a></p>
44,339
<p>I am new with Linq and I would like to sort some data that are in the BindingList. Once I did my Linq query, I need to use back the BindingList collection to bind my data.</p> <pre><code> var orderedList = //Here is linq query return (BindingList&lt;MyObject&gt;)orderedList; </code></pre> <p>This compiled but fails in execution, what is the trick?</p>
<pre><code>new BindingList&lt;MyObject&gt;(orderedList.ToList()) </code></pre>
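<p>A slightly fuller sketch of how that fits together (MyObject, its Name property, and the grid are placeholders):</p> <pre><code>// Sort the existing binding list with LINQ, then wrap the result
// back up in a BindingList for data binding.
var ordered = myBindingList.OrderBy(o =&gt; o.Name);   // IOrderedEnumerable&lt;MyObject&gt;

BindingList&lt;MyObject&gt; sorted = new BindingList&lt;MyObject&gt;(ordered.ToList());
grid.DataSource = sorted;
</code></pre>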
<p>The above only works when your LINQ query's select projection is explicitly typed as MyObject rather than select new, which creates an instance of an anonymous type. In such cases the typeof(orderedList.ToList()) winds up as something akin to this: System.Collections.Generic.List&lt;&lt;>f__AnonymousType1></p> <p>ie: this should work:</p> <pre><code>var result = (from x in MyObjects where (wherePredicate( x )) select new MyObject { Prop1 = x.Prop1, Prop2 = x.Prop2 }).ToList(); return new BindingList&lt;MyObject&gt;( result ); </code></pre> <p>this will not: </p> <pre><code>var result = from x in db.MyObjects where(Predicate(x)) select new { Prop1 = x.Prop1, Prop2 = x.Prop2 }; return new BindingList&lt;MyObject&gt;(result.ToList()) //creates the error: CS0030 "Cannot convert type 'AnonymousType#1' to 'MyObject' </code></pre> <p>In the second case the typeof(result) is: System.Collections.Generic.List&lt;&lt;>f__AnonymousType2> (the type params match the properties set in your select projection) </p> <p>reference: <a href="http://blogs.msdn.com/swiss_dpe_team/archive/2008/01/25/using-your-own-defined-type-in-a-linq-query-expression.aspx" rel="nofollow noreferrer">http://blogs.msdn.com/swiss_dpe_team/archive/2008/01/25/using-your-own-defined-type-in-a-linq-query-expression.aspx</a></p>
41,185
<p>Are there any O/R mappers out there that will automatically create or modify the database schema when you update the business objects? After looking around it seems that most libraries work the other way by creating business object from the database schema.</p> <p>The reason I'd like to have that capability is that I am planning a product that stores its data in a database on the customer's machine. So I may have to update the database schema when a new version comes out.</p> <p>Another requirement is that the mapper supports a file based database like SQLite or JET, not only SQL server.</p> <p>I know XPO from Developer Express has that capability but I was wondering if there are any alternatives out there.</p> <p>Thanks</p>
<p><a href="http://www.hibernate.org/343.html" rel="nofollow noreferrer">NHibernate</a> can generate the database schema from the business objects for you.</p>
<p>Can NHibernate update the schema or does it drop and recreate it?</p>
48,576
<p>If I create a UserControl and add some objects to it, how can I grab the HTML it would render?</p> <p>ex.</p> <pre><code>UserControl myControl = new UserControl(); myControl.Controls.Add(new TextBox()); // ...something happens return strHTMLofControl; </code></pre> <p>I'd like to just convert a newly built UserControl to a string of HTML.</p>
<p>You can render the control using <code>Control.RenderControl(HtmlTextWriter)</code>.</p> <p>Feed <code>StringWriter</code> to the <code>HtmlTextWriter</code>.</p> <p>Feed <code>StringBuilder</code> to the <code>StringWriter</code>.</p> <p>Your generated string will be inside the <code>StringBuilder</code> object.</p> <p>Here's a code example for this solution:</p> <pre><code>string html = String.Empty; using (TextWriter myTextWriter = new StringWriter(new StringBuilder())) { using (HtmlTextWriter myWriter = new HtmlTextWriter(myTextWriter)) { myControl.RenderControl(myWriter); html = myTextWriter.ToString(); } } </code></pre>
<p>Call its <code>.RenderControl()</code> method.</p>
36,712
<p>As the subject says, I want to insert an image into the 2nd column of a grid defined with 2 ColumnDefinitions.</p> <p>Programmatically, that is.</p> <p>I cannot see how to select the column; using grid.Children.Insert(1, img) does not work.</p> <p>Malcolm</p>
<pre><code>Image imgControl = new Image(); Grid.SetColumn(imgControl, 1); gridContainer.Children.Add(imgControl); </code></pre> <p>Objects contained in a grid are positioned based on the attached dependency properties Column Row ColumnSpan and RowSpan which are set as shown above.</p>
<p>The row/column index on an element in WPF is an attached property. You set it using a static method on Grid, like this:</p> <pre><code>Grid.SetColumn(img, 1); </code></pre> <p>More info <a href="http://msdn.microsoft.com/en-us/library/system.windows.controls.grid.setcolumn.aspx" rel="noreferrer">here</a>, and more about attached properties <a href="http://msdn.microsoft.com/en-us/library/ms749011.aspx" rel="noreferrer">here</a>.</p>
45,040
<p>This might be an odd question, but when I scale my image in C# I need it to be pixelated and not anti-aliased. Just like in MSpaint when you scale.</p> <p>I hope images anti-alias by default in C#, or else I changed something I didn't want to.</p> <p>I've tried playing around with the <code>Graphics.InterpolationMode</code> but no luck there. I'm using a Bitmap object to hold the image and it's being constructed like so:</p> <pre><code>// A custom control holds the image this.m_ZoomPanPicBox.Image = new Bitmap(szImagePath); </code></pre> <p>And a brief synopsis of the custom control:</p> <pre><code>class ZoomPanPicBox : ScrollableControl { Image m_image; float m_zoom = 1.0f; InterpolationMode m_interpolationMode; ... //////////////////////////////////////////////////////// public ZoomPanPicBox() { //Double buffer the control this.SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.UserPaint | ControlStyles.ResizeRedraw | ControlStyles.DoubleBuffer, true); this.AutoScroll=true; } //////////////////////////////////////////////////////// protected override void OnPaint(PaintEventArgs e) { //if no image, don't bother if(m_image==null) { base.OnPaintBackground(e); return; } //Set up a zoom matrix Matrix mx = new Matrix(m_zoom,0,0,m_zoom,0,0); //now translate the matrix into position for the scrollbars mx.Translate(this.AutoScrollPosition.X / m_zoom, this.AutoScrollPosition.Y / m_zoom); //use the transform e.Graphics.Transform = mx; //and the desired interpolation mode e.Graphics.InterpolationMode = m_interpolationMode; //Draw the image ignoring the images resolution settings. e.Graphics.DrawImage(m_image,new Rectangle(0,0,this.m_image.Width,this.m_image.Height),0,0,m_image.Width, m_image.Height,GraphicsUnit.Pixel); base.OnPaint(e); } </code></pre> <p>Any ideas? Thanks.</p>
<p>Actually, you're right with InterpolationMode, as <a href="http://msdn.microsoft.com/en-us/library/k0fsyd4e.aspx" rel="nofollow noreferrer">the docs say</a>. Just set it to InterpolationMode.NearestNeighbor. In your code sample, you never set m_interpolationMode.</p>
<p>Well, you could implement the scale yourself and do a simple linear interpolation (i.e. don't do any neighbor averaging like bicubic)... Those look nice and blocky.</p>
23,655
<p>The following returns </p> <blockquote> <p>Type of conditional expression cannot be determined because there is no implicit conversion between 'double' and '&lt;null&gt;'</p> </blockquote> <pre><code>aNullableDouble = (double.TryParse(aString, out aDouble) ? aDouble : null)
</code></pre> <hr> <p>The reason why I can't just use aNullableDouble instead of the roundtrip with aDouble is because aNullableDouble is a property of a generated EntityFramework class which cannot be used as an out parameter.</p>
<pre><code>aNullableDouble = double.TryParse(aString, out aDouble) ? (double?)aDouble : null; </code></pre>
<p>.NET supports <a href="http://msdn.microsoft.com/en-us/library/1t3y8s4s.aspx" rel="nofollow noreferrer">nullable types</a>, but by declaring them as such you have to treat them a bit differently (as, understandably, something which is normally a value type now is sort of reference-ish).</p> <p>This also might not help much if you end up having to do too much converting between nullable doubles and regular doubles... as might easily be the case with an auto-generated set of classes.</p>
10,921
<p>I'd like to know the standard way to benchmark a SQL Sever Query, preferably I'd like to know about the tools that come with SQL Server rather than 3rd Party tools.</p>
<p><code>SET SHOWPLAN_TEXT ON</code> will show you the execution plan (to see it graphically use <kbd>CTRL</kbd> + <kbd>K</kbd> (SQL 2000) or <kbd>CTRL</kbd> + <kbd>M</kbd> (SQL 2005+))</p> <p><code>SET STATISTICS IO ON</code> will show you the reads</p> <p><code>SET STATISTICS TIME ON</code> will show you the elapsed time</p>
<p>+1 on the execution plan. From here you can see where all the time is being spent in your particular query. E.g. 85% of the time is spent table scanning a particular table; can you put an index on that table to improve it? etc.</p>
5,523
<p>I have a JSP accessed through JBoss. It renders a list (a search result). If the response gets big, approximately larger than 200k, the response is truncated. I can see how the page just ends in the middle of a tag in Firefox. IE totally freaks out and so does Fiddler. Responses smaller than 200k are no problem.</p> <p>Has anyone experienced this? I don't know where to look for the problem... any suggestions are welcome.</p>
<p>If your JSP renders a very complex html page, then it might just be the browsers tripping over their own feet. Can you retrieve the page via wget or curl? Is it truncated then, too?</p>
<p>I second Henning's suggestion. I have used JSPs on JBoss to return multi megabyte responses, I would look at the code or possibly an intermediate proxy server rather than JBoss.</p>
24,985
<p>In .NET, a value type (C# <code>struct</code>) can't have a constructor with no parameters. According to <a href="https://stackoverflow.com/questions/203695/structure-vs-class-in-c#204009">this post</a> this is mandated by the CLI specification. What happens is that for every value type a default constructor is created (by the compiler?) which initializes all members to zero (or <code>null</code>).</p> <p>Why is it disallowed to define such a default constructor?</p> <p>One trivial use is for rational numbers:</p> <pre><code>public struct Rational
{
    private long numerator;
    private long denominator;

    public Rational(long num, long denom)
    { /* Todo: Find GCD etc. */ }

    public Rational(long num)
    {
        numerator = num;
        denominator = 1;
    }

    public Rational() // This is not allowed
    {
        numerator = 0;
        denominator = 1;
    }
}
</code></pre> <p>Using the current version of C#, a default Rational is <code>0/0</code> which is not so cool.</p> <p><strong>PS</strong>: Will default parameters help solve this for C#&nbsp;4.0 or will the CLR-defined default constructor be called?</p> <hr> <p><a href="https://stackoverflow.com/questions/333829/why-cant-i-define-a-default-constructor-for-a-struct-in-net#333840">Jon Skeet</a> answered:</p> <blockquote> <p>To use your example, what would you want to happen when someone did:</p> <pre><code> Rational[] fractions = new Rational[1000];
</code></pre> <p>Should it run through your constructor 1000 times?</p> </blockquote> <p>Sure it should; that's why I wrote the default constructor in the first place. The CLR should use the <em>default zeroing</em> constructor when no explicit default constructor is defined; that way you only pay for what you use. Then if I want a container of 1000 non-default <code>Rational</code>s (and want to optimize away the 1000 constructions) I will use a <code>List&lt;Rational&gt;</code> rather than an array.</p> <p>This reason, in my mind, is not strong enough to prevent definition of a default constructor.</p>
<p><strike> <strong>Note:</strong> the answer below was written a long time prior to C# 6, which is planning to introduce the ability to declare parameterless constructors in structs - but they still won't be called in all situations (e.g. for array creation)</strike> (in the end this feature <a href="https://stackoverflow.com/questions/31063109/parameterless-constructors-in-structs-for-c-sharp-6">was not added to C# 6</a>).</p> <hr> <p>EDIT: I've edited the answer below due to Grauenwolf's insight into the CLR.</p> <p>The CLR allows value types to have parameterless constructors, but C# doesn't. I believe this is because it would introduce an expectation that the constructor would be called when it wouldn't. For instance, consider this:</p> <pre><code>MyStruct[] foo = new MyStruct[1000]; </code></pre> <p>The CLR is able to do this very efficiently just by allocating the appropriate memory and zeroing it all out. If it had to run the MyStruct constructor 1000 times, that would be a lot less efficient. (In fact, it doesn't - if you <em>do</em> have a parameterless constructor, it doesn't get run when you create an array, or when you have an uninitialized instance variable.)</p> <p>The basic rule in C# is "the default value for any type can't rely on any initialization". Now they <em>could</em> have allowed parameterless constructors to be defined, but then not required that constructor to be executed in all cases - but that would have led to more confusion. 
(Or at least, so I believe the argument goes.)</p> <p>EDIT: To use your example, what would you want to happen when someone did:</p> <pre><code>Rational[] fractions = new Rational[1000]; </code></pre> <p>Should it run through your constructor 1000 times?</p> <ul> <li>If not, we end up with 1000 invalid rationals</li> <li>If it does, then we've potentially wasted a load of work if we're about to fill in the array with real values.</li> </ul> <p>EDIT: (Answering a bit more of the question) The parameterless constructor isn't created by the compiler. Value types don't have to have constructors as far as the CLR is concerned - although it turns out it <em>can</em> if you write it in IL. When you write "<code>new Guid()</code>" in C# that emits different IL to what you get if you call a normal constructor. See <a href="https://stackoverflow.com/questions/203695/structure-vs-class-in-c">this SO question</a> for a bit more on that aspect.</p> <p>I <em>suspect</em> that there aren't any value types in the framework with parameterless constructors. No doubt NDepend could tell me if I asked it nicely enough... The fact that C# prohibits it is a big enough hint for me to think it's probably a bad idea.</p>
<p>Here's my solution to the no-default-constructor dilemma. I know this is a late solution, but I think it's worth noting this is a solution. </p> <pre><code>public struct Point2D
{
    public static Point2D NULL = new Point2D(-1,-1);

    private int[] Data;

    public int X
    {
        get { return this.Data[ 0 ]; }
        set
        {
            try { this.Data[ 0 ] = value; }
            catch( Exception ) { this.Data = new int[ 2 ]; }
            finally { this.Data[ 0 ] = value; }
        }
    }

    public int Z
    {
        get { return this.Data[ 1 ]; }
        set
        {
            try { this.Data[ 1 ] = value; }
            catch( Exception ) { this.Data = new int[ 2 ]; }
            finally { this.Data[ 1 ] = value; }
        }
    }

    public Point2D( int x , int z )
    {
        this.Data = new int[ 2 ] { x , z };
    }

    public static Point2D operator +( Point2D A , Point2D B )
    {
        return new Point2D( A.X + B.X , A.Z + B.Z );
    }

    public static Point2D operator -( Point2D A , Point2D B )
    {
        return new Point2D( A.X - B.X , A.Z - B.Z );
    }

    public static Point2D operator *( Point2D A , int B )
    {
        return new Point2D( B * A.X , B * A.Z );
    }

    public static Point2D operator *( int A , Point2D B )
    {
        // Fixed: the X component previously used B.Z by mistake.
        return new Point2D( A * B.X , A * B.Z );
    }

    public override string ToString()
    {
        return string.Format( "({0},{1})" , this.X , this.Z );
    }
}
</code></pre> <p>Ignoring the fact that I have a static struct called NULL (note: this is for the all-positive quadrant only), using get/set in C# you can have a try/catch/finally for dealing with the errors where a particular data type is not initialized by the default constructor Point2D(). I guess this is elusive as a solution to some people on this answer. That's mostly why I'm adding mine. Using the getter and setter functionality in C# will allow you to bypass this default-constructor nonsense and put a try/catch around what you don't have initialized. For me this works fine; for someone else you might want to add some if statements. So, in the case where you would want a Numerator/Denominator setup, this code might help.
I'd just like to reiterate that this solution does not look nice, probably works even worse from an efficiency standpoint, but, for someone coming from an older version of C#, using array data types gives you this functionality. If you just want something that works, try this:</p> <pre><code>public struct Rational { private long[] Data; public long Numerator { get { try { return this.Data[ 0 ]; } catch( Exception ) { this.Data = new long[ 2 ] { 0 , 1 }; return this.Data[ 0 ]; } } set { try { this.Data[ 0 ] = value; } catch( Exception ) { this.Data = new long[ 2 ] { 0 , 1 }; this.Data[ 0 ] = value; } } } public long Denominator { get { try { return this.Data[ 1 ]; } catch( Exception ) { this.Data = new long[ 2 ] { 0 , 1 }; return this.Data[ 1 ]; } } set { try { this.Data[ 1 ] = value; } catch( Exception ) { this.Data = new long[ 2 ] { 0 , 1 }; this.Data[ 1 ] = value; } } } public Rational( long num , long denom ) { this.Data = new long[ 2 ] { num , denom }; /* Todo: Find GCD etc. */ } public Rational( long num ) { this.Data = new long[ 2 ] { num , 1 }; this.Numerator = num; this.Denominator = 1; } } </code></pre>
43,277
<p>How do I run a function in a loop so that all the results go straight into a list, and is there a way to run a function which acts on all the values in a list?</p>
<p>There's a couple of ways to run a function in a loop like that - you can either use a list comprehension</p> <pre><code>test = list('asdf')
[function(x) for x in test]
</code></pre> <p>and use that result</p> <p>Or you could use the map function</p> <pre><code>test = list('asdf')
map(function, test)
</code></pre> <p>The first approach is more "pythonic", while the second is more functional. </p> <p>EDIT: The second way is also a lot faster, as it's not running arbitrary code to call a function, but directly calling a function using <code>map</code>, which is implemented in C.</p>
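<p>One caveat worth adding: in Python 3, <code>map()</code> returns a lazy iterator rather than a list, so it needs to be wrapped in <code>list()</code> to get the same result as the comprehension. A small sketch (<code>square</code> is just a stand-in for whatever function you want to apply):</p>

```python
# In Python 3, map() is lazy; list() forces it so both approaches
# produce the same list.
def square(x):
    return x * x

values = [1, 2, 3, 4]

by_comprehension = [square(x) for x in values]
by_map = list(map(square, values))  # list() forces the lazy map object

print(by_comprehension)  # [1, 4, 9, 16]
print(by_map)            # [1, 4, 9, 16]
```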
<p>This example shows how to do it (run it in an interpreter)</p> <pre><code>&gt;&gt;&gt; def square(x): ... return x*x ... &gt;&gt;&gt; a = [1,2,3,4,5,6,7,8,9] &gt;&gt;&gt; map(square,a) [1, 4, 9, 16, 25, 36, 49, 64, 81] </code></pre>
32,381
<p>I need to set an environment variable in Python and find the address in memory where it is located. Since it's on Linux, I don't mind about using libraries that only work consistently on Linux (if that's the only way). How would you do this?</p> <p>Edit: The scope of the problem is as follows: I'm trying to hack a program for class, and essentially I'm putting my shellcode into an environment variable and then overwriting one byte on the victim code with the address of my environment variable. I need to find a way to automate this in Python, so my question is two-fold:</p> <ul> <li><p>Is there a way to get the address in memory of an environment variable?</p></li> <li><p>Can this only be done in bash/C or can I do it purely in Python?</p></li> </ul>
<p>The built-in function id() returns a unique id for any object, which in CPython happens to be its memory address. </p> <p><a href="http://docs.python.org/library/functions.html#id" rel="nofollow noreferrer">http://docs.python.org/library/functions.html#id</a></p>
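<p>For completeness, a small sketch. Two hedges apply: <code>id()</code> being a memory address is a CPython implementation detail, not a language guarantee, and it is the address of the Python string <em>object</em> inside the current interpreter, not the address the variable will occupy in a child process's environment block (the payload bytes below are just a placeholder):</p>

```python
import os

# Put a placeholder payload into this process's environment.
os.environ["SHELLCODE"] = "\x90\x90\x90"  # hypothetical, not real shellcode
value = os.environ["SHELLCODE"]

# CPython detail: id() gives the address of the str object header,
# which is NOT where the bytes will sit in a child's environment.
addr = id(value)
print(hex(addr))
```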
<p>Pass the address itself in an environment variable, and just read it with os.getenv().</p>
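<p>A minimal sketch of that round-trip (the variable name and the address are made up for illustration):</p>

```python
import os

# One piece of code publishes the address it computed elsewhere...
os.environ["PAYLOAD_ADDR"] = hex(0xBFFFF7A0)  # hypothetical address

# ...and the consumer reads it back. os.getenv returns a string
# (or None if the variable is unset), so parse it as hex.
addr = int(os.getenv("PAYLOAD_ADDR"), 16)
print(hex(addr))  # 0xbffff7a0
```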
39,959
<p>I need something like <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/Set.html" rel="noreferrer">this</a>, a collection of elements which contains no duplicates of any element. Does Common Lisp, specifically SBCL, have any thing like this? </p>
<p>For a quick solution, just use hash tables, as has been mentioned before.</p> <p>However, if you prefer a more principled approach, you can take a look at <a href="http://common-lisp.net/project/fset/" rel="noreferrer">FSet</a>, which is “a functional set-theoretic collections library”. Among others, it contains classes and operations for sets and bags.</p> <p>(EDIT:) The cleanest way would probably be to define your set-oriented operations as generic functions. A set of generic functions is basically equivalent to a Java interface, after all. You can simply implement methods on the standard HASH-TABLE class as a first prototype and allow other implementations as well.</p>
<p>Lisp hashtables are CLOS based. Specs <a href="http://clhs.lisp.se/Body/t_hash_t.htm" rel="nofollow noreferrer">here</a>.</p>
20,019
<p>I keep getting this error when trying to re-order items in my ReorderList control.</p> <p>"Reorder failed, see details below.</p> <p>Can't access data source. It does not a DataSource and does not implement IList."</p> <p>I'm setting the datasource to a DataTable right now, and am currently trying to use an ArrayList datasource instead, but am discouraged because of <a href="http://www.codeplex.com/AjaxControlToolkit/WorkItem/View.aspx?WorkItemId=7589" rel="nofollow noreferrer">this post</a> on the internet elsewhere. The control exists within an update panel, but no other events are subscribed to. Should there be something special with the OnItemReorder event? Just confused as to how it works.</p> <p>My question is, does anyone have any direct experience with this issue?</p>
<p>I figured it out. I converted the DataTable to an ArrayList then bound to the control. Thanks everyone for reading!</p>
<p>I've used it successfully in the past without much issue (binding to a List). Could you post some snippets of what you have in your front-end and code-behind?</p>
15,655
<p>I've been using the following snippet in development for years. Now all of a sudden I get a "DB Error: no such field" warning</p> <pre><code>$process = "process";
$create = $connection-&gt;query
(
    "INSERT INTO summery (process) VALUES($process)"
);

if (DB::isError($create)) die($create-&gt;getMessage($create));
</code></pre> <p>but it's fine if I use numerics</p> <pre><code>$process = "12345";
$create = $connection-&gt;query
(
    "INSERT INTO summery (process) VALUES($process)"
);

if (DB::isError($create)) die($create-&gt;getMessage($create));
</code></pre> <p>or write the value directly into the expression</p> <pre><code>$create = $connection-&gt;query
(
    "INSERT INTO summery (process) VALUES('process')"
);

if (DB::isError($create)) die($create-&gt;getMessage($create));
</code></pre> <p>I'm really confused ... any suggestions?</p>
<p>It's always better to use prepared queries and parameter placeholders. Like this in Perl DBI:</p> <pre><code>my $process = 1234;
my $ins_process = $dbh-&gt;prepare("INSERT INTO summary (process) values(?)");
$ins_process-&gt;execute($process);
</code></pre> <p>For best performance, prepare all your often-used queries right after opening the database connection. Many database engines will store them on the server during the session, much like small temporary stored procedures.</p> <p>It's also very good for security. Writing the value into an insert string yourself means that you must write the correct escape code at each SQL statement. Using a prepare-and-execute style means that only <strong>one</strong> place (execute) needs to know about escaping, if escaping is even necessary.</p>
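<p>The placeholder principle carries over to other stacks as well. As an illustration (using Python's built-in <code>sqlite3</code> module rather than the DB layer from the question), the <code>?</code> placeholder makes the failing case from the question just work, because the string value can never be misread as a column name:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE summery (process TEXT)")

process = "process"
# The ? placeholder means the driver handles quoting; the value is never
# spliced into the SQL text, so "process" can't be misread as a column name.
conn.execute("INSERT INTO summery (process) VALUES (?)", (process,))

row = conn.execute("SELECT process FROM summery").fetchone()
print(row[0])  # process
```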
<p>Ditto what Zan Lynx said about placeholders. But you may still be wondering why your code failed.</p> <p>It appears that you forgot a crucial detail from the previous code that worked for you for years: quotes.</p> <p>This (tested) code works fine:</p> <pre><code>my $thing = 'abcde'; my $sth = $dbh-&gt;prepare("INSERT INTO table1 (id,field1) VALUES (3,'$thing')"); $sth-&gt;execute; </code></pre> <p>But this next code (lacking the quotation marks in the VALUES field just as your first example does) produces the error you report because VALUES (3,$thing) resolves to VALUES (3,abcde) causing your SQL server to look for a field called abcde and there is no field by that name.</p> <pre><code>my $thing = 'abcde'; my $sth = $dbh-&gt;prepare("INSERT INTO table1 (id,field1) VALUES (3,$thing)"); $sth-&gt;execute; </code></pre> <p>All of this assumes that your first example is not a direct quote of code that failed as you describe and therefore not what you intended. It resolves to:</p> <pre><code>"INSERT INTO summery (process) VALUES(process)" </code></pre> <p>which, as mentioned above causes your SQL server to read the item in the VALUES set as another field name. As given, this actually runs on MySQL without complaint and will fill the field called 'process' with NULL because that's what the field called 'process' contained when MySQL looked there for a value as it created the new record.</p> <p>I do use this style for quick throw-away hacks involving known, secure data (e.g. a value supplied within the program itself). But for anything involving data that comes from outside the program or that might possibly contain other than [0-9a-zA-Z] it will save you grief to use placeholders.</p>
42,615
<p>Can a macro be written in Scheme (with <code>define-syntax</code>, for example) which will take expressions like this:</p> <pre><code>(op a b c d e f g h i j) </code></pre> <p>And yield expressions like this as output?</p> <pre><code>(op (op (op (op (op (op (op (op (op a b) c) d) e) f) g) h) i) j) </code></pre> <p>Of course, for arbitrary lengths. I can't think of a way to do it, given some template like this:</p> <pre><code>(define-syntax op (syntax-rules () [(_) 'base-case] [(v1 v2 ...) 'nested-case??])) </code></pre>
<pre><code>(define bop list) (define-syntax op (syntax-rules () ((op a b) (bop a b)) ((op a b c ...) (op (bop a b) c ...)))) </code></pre> <p>For example, <code>(op 1 2 3 4)</code> expands to <code>(bop (bop (bop 1 2) 3) 4)</code> and evaluates to <code>(((1 2) 3) 4)</code>.</p>
<p>To show how the answer works out:</p> <pre><code>(op 1 2 3 4) </code></pre> <p>This is an op with 4 statements, so the 2nd case gets selected with a=1, b=2, c=3, ...=4:</p> <pre><code>(op (bop 1 2) 3 4) </code></pre> <p>This is an op with 3 statements, so 2nd case again. a=(bop 1 2), b=3, c=4:</p> <pre><code>(op (bop (bop 1 2) 3) 4) </code></pre> <p>Now this is a bop with 2 statements, so a=(bop (bop 1 2) 3), b=4, and it's done.</p>
44,191
<p>If you visit <a href="http://www.maplesoft.com/company/news/index.aspx" rel="nofollow noreferrer">this page</a> in Internet explorer, and choose a value from the "Current Media Releases" dropdown on the top right, eventually IE will try to redirect you to an ugly url containing this string:</p> <p>__EVENTTARGET=selArchives&amp;__EVENTARGUMENT=&amp;__LASTFOCUS=&amp;__VIEWSTATE=</p> <p>The page should only be updating the selArchives Query string value.</p> <p>The drop down has AutoPostBack set to true and the codebehind is in VB, here is the event handler:</p> <pre><code>Private Sub selArchives_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles selArchives.SelectedIndexChanged Response.Redirect("index.aspx?selArchives=" + selArchives.SelectedValue) End Sub </code></pre> <p>Obviously, I could just write the JavaScript myself, but I would like to find the source of the problem.</p> <hr> <p>Not sure what specifically was causing the problem. Looks like it was a combination of a few factors.</p> <p>Thanks for the help</p>
<p>Went to the site. Other than some JavaScript errors popping up, it appears to be working fine.</p> <p>== Error: $ is not defined Source file: <a href="http://www.maplesoft.com/ScriptResource.axd?d=kNY1h-WYJzKkuCdZqmndbpb67jRr2cZCC6s2tf_nrnwCcH6rvds1RZUYXUp0gdMqnu-6-o6rl1eH-wm7AO9xVw2&amp;t=633640679588907500" rel="nofollow noreferrer">http://www.maplesoft.com/ScriptResource.axd?d=kNY1h-WYJzKkuCdZqmndbpb67jRr2cZCC6s2tf_nrnwCcH6rvds1RZUYXUp0gdMqnu-6-o6rl1eH-wm7AO9xVw2&amp;t=633640679588907500</a></p> <h1>Line: 1</h1>
<p>The problem only occurs in IE. It works fine in Firefox, and obviously Chrome as well.</p>
45,525
<p>I have a query that is currently using a correlated subquery to return the results, but I am thinking the problem could be solved more eloquently perhaps using ROW_NUMBER().</p> <p>The problem is around the profile of a value v, through a number of years for an Item. Each item has a number of versions, each with its own profile which starts when the version is introduced, and the data currently looks like this:</p> <pre>
ItemId  ItemVersionId  Year  Value
===================================
1       1              01    0.1
1       1              02    0.1
1       1              03    0.2
1       1              04    0.2
1       1              05    0.2
1       1              06    0.3
1       1              07    0.3
1       1              08    0.4
1       2              04    0.3
1       2              05    0.3
1       2              06    0.3
1       2              07    0.4
1       2              08    0.5
1       3              07    0.6
1       3              08    0.7
2       1              01    0.1
2       1              01    0.1
2       1              01    0.2
etc
</pre> <p>I want to return the full profile for an Item using the most recent version where applicable. For the above example for item 1:</p> <pre>
ItemId  ItemVersionId  Year  Value
===================================
1       1              01    0.1
1       1              02    0.1
1       1              03    0.2
1       2              04    0.3
1       2              05    0.3
1       2              06    0.3
1       3              07    0.6
1       3              08    0.7
</pre> <p>I am currently using</p> <pre><code>SELECT ItemId, ItemVersionId, Year, Value
FROM table t
WHERE ItemId = 1
  AND ItemVersionId = (SELECT MAX(ItemVersionId)
                       FROM table
                       WHERE ItemId = t.ItemId
                         AND Year = t.Year)
</code></pre> <p>Whilst this returns the correct results, I suspect there is a more efficient way to do it, especially when the table gets large.</p> <p>I am using SQL Server 2005.</p> <p>Thanks in advance</p>
<p>I would do it with a CTE:</p> <pre><code>WITH Result AS ( SELECT Row_Number() OVER (PARTITION BY ItemId, Year ORDER BY ItemversionId DESC) AS RowNumber ,ItemId ,ItemversionId ,Year ,Value FROM table ) SELECT ItemId ,ItemversionId ,Year ,Value FROM Result WHERE RowNumber = 1 ORDER BY ItemId, Year </code></pre>
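<p>Either form is easy to sanity-check against the sample data in a throwaway database. A sketch using Python's built-in <code>sqlite3</code> (SQLite syntax, so this only verifies the row-selection logic, not SQL Server performance; the values are simplified since only the version selection matters here):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (ItemId INT, ItemVersionId INT, Year TEXT, Value REAL)"
)

# Item 1 from the question: version 1 covers years 01-08,
# version 2 covers 04-08, version 3 covers 07-08.
rows = [(1, 1, "%02d" % y, 0.1) for y in range(1, 9)]
rows += [(1, 2, "%02d" % y, 0.3) for y in range(4, 9)]
rows += [(1, 3, "%02d" % y, 0.6) for y in range(7, 9)]
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", rows)

# The correlated-subquery version from the question: for each year,
# keep only the row belonging to the highest version present.
picked = conn.execute("""
    SELECT Year, ItemVersionId FROM t AS o
    WHERE ItemId = 1
      AND ItemVersionId = (SELECT MAX(ItemVersionId) FROM t
                           WHERE ItemId = o.ItemId AND Year = o.Year)
    ORDER BY Year
""").fetchall()

print(picked)
# [('01', 1), ('02', 1), ('03', 1), ('04', 2), ('05', 2),
#  ('06', 2), ('07', 3), ('08', 3)]
```

<p>The ROW_NUMBER() CTE above should pick exactly the same rows; this just gives a quick fixture to compare both against.</p>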
<p>I think it's okay how you do it. You could check if there is a <strong>composite index on ItemId and Year</strong>.</p> <p>You could inspect the query plan to see the impact of that query.</p> <p>If there is an "Item" table in your database you could try another approach. <strong>Insert a column ItemVersionId</strong> in that table and make sure you update that value when new versions are saved. Then in your query <strong>join the Item table using ItemId and ItemVersionId</strong> instead of using that subquery.</p>
35,051
<p>My company is heavily invested in the MS BI Stack (SQL Server Reporting Services, -Analysis Services and -Integration Services), but I want to have a look at what the seemingly most talked about open-source alternative Pentaho is like.</p> <p>I've installed a version, and I got it up and running quite painlessly. So that's good. But I haven't really the time to start using it for actual work to get a thorough understanding of the package.</p> <p>Have any of you got any insights into what are the pros and cons of Pentaho vs MS BI, or any links to such comparisons?</p> <p>Much appreciated!</p>
<p>I reviewed multiple BI stacks while on a path to get off of Business Objects. A lot of my comments are preference. Both tool sets are excellent. Some things are how I prefer chocolate fudge brownie ice cream over plain chocolate.</p> <p>Pentaho has some really smart guys working with them, but Microsoft has been on a well-funded and well-planned path. Keep in mind MS are still the underdogs in the database market. Oracle is king here. To be competitive MS has been giving away a lot of goodies when you buy the database and has been forced to reinvent its platform a couple of times. I know this is not about the database, but the DB battle has caused MS to give away a lot in order to add value to their stack.</p> <p>1.) Platform <br> SQL Server doesn't run on Unix or Linux, so they are automatically excluded from this market. Windows is about the same price as some versions of Unix now. Windows is pretty cheap and runs fairly well now. It gives me about as much trouble as Linux.</p> <p>2.) OLAP <br> Analysis Services was reinvented in 2005 (current is 2008) over the 2000 version. It is an order of magnitude more powerful than 2000. The Pentaho one (Mondrian) is not as fast once you get big. It also has fewer features. It is pretty good, but there is less in the way of tools. Both support Excel as the platform, which is essential. The MS version is more robust.</p> <p>3.) ETL <br> MS - DTS has been replaced with SSIS. Again, an order of magnitude increase in speed, power, and ability. It controls any and all data movement or program control. If it can't do it you can write a script in PowerShell. On par with Informatica in the 2008 release. Pentaho - Much better than it used to be. Not as fast as I would like, but I can do just about everything I want to do. </p> <p>4.) Dashboard <br> Pentaho has improved this. It is sort of uncomfortable and unfriendly to develop, but there is really no real equivalent from MS. </p> <p>5.)
Reports <br> MS reports is really powerful but not all that hard to use. I like it now but hated it at first, until I got to know it a little better. I had been using Crystal Reports, and the MS report builder is much more powerful. It is easy to do hard things in MS, but a little harder to do easy things. Pentaho is a little clumsy. I didn't like it at all, but you might. I found it to be overly complex. I wish it was either more like the Crystal report builder or the MS report builder, but it is Jasper-like. I find it to be hard. That may be a preference.</p> <p>6.) Ad hoc <br> MS - this was the real winner for me. I tested it with my users and they instantly fell in love with the MS user report builder. What made the difference was how it was not just easy to use, but also productive. Pentaho - is good but pretty old school. It uses the more typical wizard-based model and has powerful tools, but I hate it. It is an excellent tool for what it is, but we have moved on from this style and no one wants to go back. Same problem I had with LogiXML. The interface worked well for what it was, but is not really much of a change from what we used 12 years ago. <a href="http://wiki.pentaho.com/display/PRESALESPORTAL/Methods+of+Interactive+Reporting" rel="noreferrer">http://wiki.pentaho.com/display/PRESALESPORTAL/Methods+of+Interactive+Reporting</a></p> <p>There are some experienced people out there that can make Pentaho really run well; I just found the MS suite to be more productive.</p>
<p>If you are looking for a robust, low-cost alternative to the big boys, LogiXML has dashboarding and ad hoc reporting on a .NET platform. We've been using them since late 2006 when Pentaho was just starting, but I haven't looked at it in a while. </p>
18,186
<p>I am trying to upload a file or stream of data to our web server and I can't find a decent way of doing this. I have tried both <code>WebClient</code> and <code>WebRequest</code>; both have their problems. </p> <p><strong>WebClient</strong><br> Nice and easy, but you do not get any notification that the asynchronous upload has completed, and the <code>UploadProgressChanged</code> event doesn't get called back with anything useful. The alternative is to convert your binary data to a string and use <code>UploadStringAsync</code>, because then at least you get a <code>UploadStringCompleted</code>; the problem is you need a lot of RAM for big files as it's encoding all the data and uploading it in one go.</p> <p><strong>HttpWebRequest</strong><br> A bit more complicated, but it still does what is needed. The problem I am getting is that even though it is called on a background thread (supposedly), it still seems to be blocking my UI and the whole browser until the upload has completed, which doesn't seem quite right.</p> <p>Normal .NET does have some appropriate <code>WebClient</code> methods for <a href="http://msdn.microsoft.com/en-us/library/system.net.webclient.onuploaddatacompleted.aspx" rel="nofollow noreferrer">OnUploadDataCompleted</a> and progress, but these aren't available in Silverlight .NET ... a big omission, I think!</p> <p>Does anyone have any solutions? I need to upload multiple binary files, preferably with progress, and I need to perform some actions when the files have completed their upload.</p> <p>Look forward to some help with this.</p>
<p>The way I get around it is through INotifyPropertyChanged and event notification.</p> <p>The essentials:</p> <pre><code>public void DoIt()
{
    this.IsUploading = true;

    WebRequest postRequest = WebRequest.Create(new Uri(ServiceURL));
    postRequest.BeginGetRequestStream(new AsyncCallback(RequestOpened), postRequest);
}

private void RequestOpened(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    req.BeginGetResponse(new AsyncCallback(GetResponse), req);
}

private void GetResponse(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    string serverresult = string.Empty;
    WebResponse postResponse = req.EndGetResponse(result);

    StreamReader responseReader = new StreamReader(postResponse.GetResponseStream());

    this.IsUploading = false;
}

private bool _IsUploading;
public bool IsUploading
{
    get { return _IsUploading; }
    private set
    {
        _IsUploading = value;
        OnPropertyChanged("IsUploading");
    }
}
</code></pre> <p>Right now Silverlight is a PITA because of the double and triple async calls. </p>
<p>Matt Berseth had some thoughts on this that might help:</p> <p><a href="http://mattberseth.com/blog/2008/07/aspnet_file_upload_with_realti_1.html" rel="nofollow noreferrer">http://mattberseth.com/blog/2008/07/aspnet_file_upload_with_realti_1.html</a></p> <p><strong>@Dan</strong> - Apologies mate, I coulda sworn Matt's article was about Silverlight, but it's quite clearly not. Blame it on those two big glasses of Chilean red I just downed. :-) </p>
3,372
<p>As you may know, in <code>VS 2008</code> <kbd>ctrl</kbd>+<kbd>tab</kbd> brings up a nifty navigator window with a thumbnail of each file. I love it, but there is one tiny thing that is annoying to me about this feature: <em>the window stays around after releasing the <kbd>ctrl</kbd> key</em>. When doing an <kbd>alt</kbd>+<kbd>tab</kbd> in windows, you can hit tab to get to the item you want (while still holding down the <kbd>alt</kbd> key), and then when you find what you want, <em>lifting up</em> on the <kbd>alt</kbd> key selects that item.</p> <p>I wish <code>VS 2008</code> would do the same. For me, when I lift off of <kbd>ctrl</kbd>, the window is still there. I have to hit <kbd>enter</kbd> to actually select the item. I find this annoying.</p> <p>Does anyone know how to make <code>VS 2008</code> dismiss the window on the <em>release</em> of the <kbd>ctrl</kbd> key?</p>
<p>You probably have the text-to-speech narrator enabled.</p> <p><a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2467648&amp;SiteID=1&amp;mode=1" rel="nofollow noreferrer">http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2467648&amp;SiteID=1&amp;mode=1</a></p> <blockquote> <p>Just uncheck all checkboxes under "Text-To-Speech" narrator software.</p> <p>--> To open Narrator using the keyboard, press <strong><kbd>CTRL</kbd>+<kbd>ESC</kbd></strong>, press <strong><kbd>R</kbd></strong>, type narrator, and then press Enter.</p> </blockquote> <p>This one drove me crazy for several months until I found this posting.</p>
<p>Just in case anyone still needed a fix for this (I've encountered this behavior in VS2010) what you can do is:<br /></p> <ul> <li>Close VS</li> <li>Enable sticky keys</li> <li>Reopen VS</li> <li>Disable sticky keys</li> </ul> <p>This solved it for me.</p>
3,758
<p>My firm have a talented and smart operations staff who are working very hard. I'd like to give them a SQL-execution tool that helps them avoid common, easily-detected SQL mistakes that are easy to make when they are in a hurry. Can anyone suggest such a tool? Details follow.</p> <p>Part of the operations team remit is writing very complex ad-hoc SQL queries. Not surprisingly, operators sometimes make mistakes in the queries they write because they are so busy. </p> <p>Luckily, their queries are all SELECTs not data-changing SQL, and they are running on a copy of the database anyway. Still, we'd like to prevent errors in the SQL they run. For instance, sometimes the mistakes lead to long-running queries that slow down the duplicate system they're using and inconvenience others until we find the culprit query and kill it. Worse, occasionally the mistakes lead to apparently-correct answers that we don't catch until much later, with consequent embarrassment. </p> <p>Our developers also make mistakes in complex code that they write, but they have Eclipse and various plugins (such as <a href="http://findbugs.sourceforge.net/" rel="nofollow noreferrer" title="FindBugs">FindBugs</a>) that catch errors as they type. I'd like to give operators something similar - ideally it would see</p> <pre><code>SELECT U.NAME, C.NAME FROM USER U, COMPANY C WHERE U.NAME = 'ibell'; </code></pre> <p>and before you executed, it would say "Hey, did you realise that's a Cartesian product? Are you sure you want to do that?" It doesn't have to be very smart - finding obviously missing join conditions and similar evident errors would be fine.</p> <p>It looks like <a href="http://www.toadsoft.com/" rel="nofollow noreferrer" title="TOAD">TOAD</a> should do this but I can't seem to find anything about such a feature. Are there other tools like TOAD that can provide this kind of semi-intelligent error correction?</p> <p>Update: I forgot to mention that we're using MySQL.</p>
<p>If your people are using the mysql(1) program to run queries, you can use the <a href="http://dev.mysql.com/doc/refman/5.1/en/mysql-tips.html" rel="nofollow noreferrer">safe-updates</a> option (aka i-am-a-dummy) to get you part of what you need. Its name is somewhat misleading; it not only prevents UPDATE and DELETE without a WHERE (which you're not worried about), but also adds an implicit LIMIT 1000 to SELECT statements, and aborts SELECTs that have joins and are estimated to consider over 1,000,000 tuples --- perfect for discouraging Cartesian joins.</p>
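<p>If you want that behaviour on by default for everyone who runs the command-line client, the same option can be set in the MySQL option file instead of being passed on every invocation. A sketch (the <code>[mysql]</code> group scopes it to the command-line client only; adjust the path to wherever your my.cnf lives):</p>

```ini
# my.cnf -- applies to the mysql command-line client only, not to mysqld
[mysql]
safe-updates
```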
<p>You might find <a href="http://www.red-gate.com/Products/SQL_Prompt/index.htm" rel="nofollow noreferrer">SQL Prompt</a> from redgate useful. I'm not sure what database engine you're using, as it's only for MSSQL Server</p>
3,623
<p>I've created a list using the SharePoint (MOSS 2007) Issue Tracking List. A "Comments" field is automatically created in this list. The Comments column has extra functionality that provides a sort of history/log whenever an edit is made to a list item. Unfortunately, a comment entry is created even when an insignificant edit is made. For example, I could edit the title of the list item and leave the comment field blank. This results in a blank entry being saved for the Comments field (except the date/time of the edit and the person making it). If there are several edits made (with the Comments field left empty), then several blank entries appear to be stored in the data (for the Comment field). Is there a way to not save a comment entry if it's blank?</p> <p>Also, is there a way to have a Comments field in a Custom List? This column seems to only be available in the Issues Tracking List.</p>
<p>The "Comments" field is actually an "Append-Only Comments" column. You should be able to add this to any custom list. (You will need versioning turned on for the list to operate in the same manner as it does in the issue tracker.)</p> <p>As far as not getting "blank" entries, the only way to do this would be to turn off the "Append Changes to Existing Text" option on the column under list settings. Doing this would turn it into just another multi-line text box.</p> <p>I think you are missing the point of this column. After an edit is made, you can click on the date/time in this column and it will show you that version of the list item. I realize that you could do this in version history as well, but this gives you a quick link inside the item. Also, trained users should be documenting what they changed inside this box. Might help to rename the column to "Note Changes Made Here." :)</p> <p>Good Luck</p>
<p>This post may help: <a href="https://chuvash.eu/2012/04/02/add-comments-column-to-your-sharepoint-list/" rel="nofollow noreferrer">Add Comments column to your sharepoint list</a></p> <blockquote> <ol> <li><p>List Settings - &gt; Versioning Settings -&gt; Item Version History = Yes</p> </li> <li><p>Add “Append-only comments” column from existing site columns</p> </li> </ol> </blockquote>
28,661
<p>When I add a # in insert mode on an empty line in Vim while editing python files, vim moves the # to the beginning of the line, but I would like the # to be inserted at the tab level where I entered it.</p> <p>For example, when writing this in vim</p> <pre><code>for i in range(10): # </code></pre> <p>the # does not stay there where I entered it.</p> <p>It is moved like so, by vim.</p> <pre><code>for i in range(10): # </code></pre> <p>Does anyone know of a configuration item in vim that would change this?</p> <p>If it helps, I am using Ubuntu 8.10.</p>
<p>I found an answer here <a href="http://vim.wikia.com/wiki/Restoring_indent_after_typing_hash" rel="noreferrer">http://vim.wikia.com/wiki/Restoring_indent_after_typing_hash</a></p> <p>It seems that the vim smartindent option is the cause of the problem. The referenced page above describes work-a-rounds but after reading the help in smartindent in vim itself (:help smartindent), I decided to try cindent instead of smartindent.</p> <p>I replaced </p> <pre><code>set smartindent </code></pre> <p>with </p> <pre><code>set cindent </code></pre> <p>in my .vimrc file</p> <p>and so far it is working perfectly.</p> <p>This changed also fixed the behavior of '&lt;&lt;' and '>>' for indenting visual blocks that include python comments.</p> <p>There are more configuration options for and information on indentation in the vim help for smartindent and cindent (:help smartindent and :help cindent).</p>
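<p>If you would rather not change indentation behaviour globally, the same fix can be scoped to Python buffers from <code>.vimrc</code> (a sketch using standard autocmd events; adjust to taste):</p>

```vim
" Swap smartindent for cindent, but only in Python buffers,
" so other filetypes keep their existing indent settings.
autocmd FileType python setlocal nosmartindent
autocmd FileType python setlocal cindent
```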
<p>I removed <code>set smartindent</code> from <code>~/.vimrc</code> but it still didn't disable smartindent. When I opened a .py file and ran <code>:set smartindent?</code> it displayed <code>smartindent</code>.</p> <p>Turns out that further down in the <code>~/.vimrc</code> was this line:</p> <pre><code>autocmd BufRead *.py set smartindent cinwords=if,elif,else,for,while,try,except,finally,def,class ^^^^^^^^^^^ </code></pre> <p>Once I deleted "smartindent" from that line, then smartindent was finally disabled and my comments were indented properly again.</p>
46,124
<p>When browsing the cube in Microsoft SQL Server Analysis Services 2005, I would like to peek at the MDX (supposedly) queries generated by client access tools such as Excel. Is there a tool or method that enables me to do just that?</p> <p>I'm really looking for something like Oracle's v$sessions -- I know about sp_who and sp_who2 for the relational SQL Server, but is there one for MSAS?</p>
<p>Use SQL Server Profiler - it can connect to Analysis Services... When you create a trace make sure you click "Show All Events" and capture the "Execute MDX" events.</p>
<p>I remember doing something along these lines a few years ago. I am not sure that Analysis Services will actually log the MDX it uses, but it does log something. I believe you can right-click the server properties in AS, and there is a tab to tell it a file to log queries to. </p> <p>(Sorry I can't be more specific, it was a fair while ago, and I haven't got AS in front of me nowadays!)</p>
39,285
<p>While working on a tool that allows to exchange images of several third-party applications and thus creating individual "skins" for those applications, I have stumbled across a jpg-format about which I cannot seem to find any decent informations.</p> <p>When looking at it in a hex-editor, it starts with the tag "CF10". Searching the internet has only provided a tool that is able to handle these kind of files, without any additional informations.</p> <p>Does anyone have any further informations about this type of jpg-format?</p>
<p><a href="http://linux.die.net/man/1/file" rel="nofollow noreferrer"><code>file(1)</code></a> should give you some useful information. You can also use ImageMagick's <a href="http://www.imagemagick.org/script/identify.php" rel="nofollow noreferrer"><code>identify(1)</code></a> program (optionally with the <code>-verbose</code> option) to get even more details about the file. See the example on that page for a good idea of what information it provides.</p>
<p>CF stands for "Compression Factor". CF-10 means factor ten, and I don't think it's different from any "standard" jpeg.</p>
13,875
<p>I've been slowly working on a personal project to run a webmud-like game using extjs as my frontend. One of the design choices I made was to allow user-generated evaluated code for game logic. So when the player enters a new "room" a number of state scripts would be called along the lines of "has player been here before, should they be here, do they have x inventory item" and then respond accordingly. Furthermore basic room "actions" would be hard coded (go N/S/E/W) but advanced actions would be available as the same user-generated evaluated scripts.</p> <p>Originally I was going to be lazy and use evaluated PHP for this logic, but my paranoid sense is kicking in. So the two alternatives I have found are the runkit_sandbox (but it doesn't support an interchange of objects between the primary thread and the sandbox, just simple data types and arrays) OR using ecmascript as my game logic <a href="http://ejohn.org/blog/spicing-up-embedded-javascript/" rel="nofollow noreferrer">http://ejohn.org/blog/spicing-up-embedded-javascript/</a>.</p> <p>The pros/cons of the two are that with runkit, I can lock the script down pretty hard at a tremendous cost to speed, while the ecma interpreter would allow me to selectively bind variables, functions, and possibly objects to the javascript run space, but it's still in beta state and I've yet to see how well it runs. </p> <p>Are these my only options, or is there something else out there I don't know about that might be a better choice? Environment: linux, PHP-CGI 5.3 or as a google app engine. </p>
<p>I wouldn't recommend evaluating user-contributed PHP-code -- even within a runkit sandbox. PHP is a very complex language, and it's closely tied to its environment. Without knowing the specifics, I would anticipate that there are numerous holes that people could leverage to break out of the sandbox.</p> <p>There are other languages, that you can embed, than javascript. <a href="http://www.lua.org/" rel="nofollow noreferrer">Lua</a> is a popular choice for these kinds of things. There is even a <a href="http://pecl.php.net/package/lua" rel="nofollow noreferrer">php extension in pecl</a>, with bindings for it.</p> <p>If you're going the runkit route anyway, you could look into a shared memory solution, such as <a href="http://docs.php.net/manual/en/book.memcache.php" rel="nofollow noreferrer">memcache</a>, for exchanging data between processes.</p>
<p>There is a PHP Sandbox for basic stuff available. It's early stages but looks promising.</p> <p><a href="http://www.phpclasses.org/package/7015-PHP-Execute-external-PHP-scripts-in-a-separate-process.html" rel="nofollow">http://www.phpclasses.org/package/7015-PHP-Execute-external-PHP-scripts-in-a-separate-process.html</a></p> <p>or from GitHub: <a href="https://github.com/fregster/PHPSandbox" rel="nofollow">https://github.com/fregster/PHPSandbox</a></p> <p>Paul</p>
29,850
<p>I have a directory structure like the following;</p> <blockquote> <p>script.php</p> <p>inc/include1.php<br/> inc/include2.php</p> <p>objects/object1.php<br/> objects/object2.php</p> <p>soap/soap.php</p> </blockquote> <p>Now, I use those objects in both <code>script.php</code> and <code>/soap/soap.php</code>, I could move them, but I want the directory structure like that for a specific reason. When executing <code>script.php</code> the include path is <code>inc/include.php</code> and when executing <code>/soap/soap.php</code> it's <code>../inc</code>, absolute paths work, <code>/mnt/webdev/[project name]/inc/include1.php...</code> But it's an ugly solution if I ever want to move the directory to a different location.</p> <p>So is there a way to use relative paths, or a way to programmatically generate the <code>"/mnt/webdev/[project name]/"</code>?</p>
<p>This should work </p> <pre><code>$root = realpath($_SERVER["DOCUMENT_ROOT"]); include "$root/inc/include1.php"; </code></pre> <hr> <p><strong>Edit:</strong> added improvement by <a href="https://stackoverflow.com/questions/4369/include-files-requiring-an-absolute-path#4388">aussieviking</a></p>
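<p>A related approach, sketched here in Python for comparison, is to derive the project root from the script's own location rather than the web server's document root (the PHP equivalent would use <code>dirname(__FILE__)</code>; the paths below are made up for illustration):</p>

```python
import os.path

# Imagine /mnt/webdev/project/soap/soap.php needs /mnt/webdev/project/inc/include1.php
script = "/mnt/webdev/project/soap/soap.php"

# Project root = one directory up from the directory holding the script
root = os.path.dirname(os.path.dirname(script))

include = os.path.join(root, "inc", "include1.php")
print(include)  # → /mnt/webdev/project/inc/include1.php
```

<p>This keeps includes working even if the project folder moves, as long as the relative layout stays the same.</p>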
<blockquote> <blockquote> <p>@Flubba, does this allow me to have folders inside my include directory? flat include directories give me nightmares. as the whole objects directory should be in the inc directory.</p> </blockquote> </blockquote> <p>Oh yes, absolutely. So for example, we use a single layer of subfolders, generally:</p> <pre><code>require_once('library/string.class.php') </code></pre> <p>You need to be careful with relying on the include path too much in really high traffic sites, because php has to hunt through the current directory and then all the directories on the include path in order to see if your file is there and this can slow things up if you're getting hammered. </p> <p>So for example if you're doing MVC, you'd put the path to your application directoy in the include path and then specify refer to things in the form</p> <pre><code>'model/user.class' 'controllers/front.php' </code></pre> <p>or whatever.</p> <p>But generally speaking, it just lets you work with really short paths in your PHP that will work from anywhere and it's a lot easier to read than all that realpath document root malarkey. </p> <p>The benefit of those script-based alternatives others have suggested is they work anywhere, even on shared boxes; setting the include path requires a little more thought and effort but as I mentioned lets you start using __autoload which just the coolest.</p>
2,638
<p>Let's say I have a sheet with columns Customer and CreatedDate with lots of rows of data. Does anyone know how to set up (through VBA or a formula) a second sheet that displays rows from the first sheet based on certain where statements, i.e. all rows with customers "created this month" (similar to a select ... where query against a SQL database)?</p> <p>Thanks! /Niels</p>
<p>You can create a Pivot Table out of your data, then slice-n-dice it lots of ways.</p>
<p>There isn't an exact equivalent to the SQL <code>select ... where</code> functionality in Excel, but take a look at the <code>VLOOKUP</code> function. It may be what you are looking for. If that doesn't have enough functionality, you will probably have to use VBA:</p> <pre><code> Dim DataRange As Range Dim RowNum As Integer Dim NewRow As Integer Dim TestMonth As Integer Dim ThisMonth As Integer Set DataRange = Range(Sheet1.Cells(1, 1), Sheet1.Cells(100, 2)) ThisMonth = Month(Date) NewRow = 1 For RowNum = 1 To DataRange.Rows.Count TestMonth = Month(DataRange.Cells(RowNum, 1).Value) If TestMonth = ThisMonth Then Sheet2.Cells(NewRow, 1).Value = DataRange.Cells(RowNum, 2).Value NewRow = NewRow + 1 End If Next RowNum</code></pre>
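<p>The filtering logic in the loop above ("keep rows whose CreatedDate falls in the current month") is easy to sanity-check outside Excel. A Python sketch of the same filter, with made-up rows and a fixed "today" so it is reproducible:</p>

```python
from datetime import date

# (Customer, CreatedDate) rows as they might appear on the first sheet
rows = [
    ("Acme", date(2008, 10, 3)),
    ("Beta", date(2008, 9, 21)),
    ("Cork", date(2008, 10, 19)),
]

today = date(2008, 10, 19)  # fixed so the example is reproducible

# select Customer ... where CreatedDate is in the current month
this_month = [c for (c, d) in rows
              if (d.year, d.month) == (today.year, today.month)]

print(this_month)  # → ['Acme', 'Cork']
```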
42,952
<p>Does anybody know any resources on this subject?</p> <p>I'm developing an embedded application for a 2x16 LCD display. Ideally I would like to have a general (display-independent) framework that could be used virtually on any display - one or more segment LEDs, 1x16, 2x16 LCD, etc. I would also like to learn about general guidelines for such small user interfaces.</p> <p>EDIT: I'm interested in high-level functionality, how to organize the user interface - the menus, options and the user input. We don't discuss the LCD controller issues here.</p>
<p>I would design it for a single-line interface; using more lines would give you more space.</p> <p>I would go for at least 4 buttons:</p> <ul> <li>MENU</li> <li>UP</li> <li>DOWN</li> <li>OK</li> </ul> <p>If you specify the line width (like 16), then this would work for 16x2, 16x1 and 16 7-segment displays... and you would take that into consideration when designing the text on the menus.</p> <p>The UI would be more useful if you add more buttons. I would think about these, ordered by priority, but these are not essential:</p> <ul> <li>LEFT &amp; RIGHT</li> <li>NUMERIC KEYS</li> <li>QWERTY</li> </ul> <p>You would have a main menu that would take you to nested submenus or action items.</p> <p>I'll give an example: let's assume you're doing a digital clock that would work on 16x1 or 16x2.</p> <p>The main screen would be something like 08:15P SUN101908. When you press the menu key, it will show a menu (Set Time, Set Date, Set Alarm, Set Display), with UP &amp; DOWN to move the item, and OK to select an item.</p> <p>If you select "Set time" the UP and DOWN arrows will change the hours and OK will accept and move to the minutes selection ...etc.</p> <p>If you had a numeric keypad, it would be simpler to use.</p>
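<p>The MENU/UP/DOWN/OK navigation described above is just a small state machine, and it can be prototyped independently of the display hardware. A minimal Python sketch (class and menu names are illustrative, not from any real firmware):</p>

```python
class Menu:
    """Cursor over a list of items, driven by UP/DOWN/OK button presses."""

    def __init__(self, items):
        self.items = items
        self.index = 0

    def press(self, button):
        if button == "UP":
            self.index = (self.index - 1) % len(self.items)
        elif button == "DOWN":
            self.index = (self.index + 1) % len(self.items)
        elif button == "OK":
            return self.items[self.index]  # selected item
        return None

    def line(self, width=16):
        # Render the current item as one fixed-width display line
        return self.items[self.index][:width].ljust(width)

m = Menu(["Set Time", "Set Date", "Set Alarm", "Set Display"])
m.press("DOWN")
print(m.line())       # "Set Date" padded to 16 characters
print(m.press("OK"))  # → Set Date
```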
<p>I don't know of any "project" or library built for this explicit purpose.</p> <p>I recommend that you take the approach of having a "display layer" code that operates on the concepts of screens and fields. The screen is responsible for "owning" all the fields on the screen, and the fields are responsible for specifying what is displayed, and what variable the field affects, and the input method(s) to affect the field values. The fields also store any function pointers to pre- and post- field setup/validation functions.</p> <p>Doing this will help you maintain a fairly consistent UI. The code will also be concentrated in one spot, so it's potentially easier to debug.</p>
26,584
<p>We have a couple of web servers using load balancer. Machines are running IIS6 on port 81. Externally, site is accessable using port 80. External name and name of the machine are different.</p> <p>We're getting </p> <pre><code>System.ServiceModel.EndpointNotFoundException: The message with To '&lt;url&gt;' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree. </code></pre> <p>Relevant part of web.config is:</p> <pre><code> &lt;endpoint binding="ws2007HttpBinding" bindingConfiguration="MyServiceBinding" contract="MyService.IMyService" listenUriMode="Explicit" /&gt; </code></pre> <p>We tried adding listenUri, but that didn't solve our problems.</p> <p>Any ideas?</p>
<pre><code>[ServiceBehavior(AddressFilterMode=AddressFilterMode.Any)] </code></pre> <p>Putting this attribute on service solves the problem.</p>
<p>What is the specific load balancer? Using an F5 BIG-IP we got it working fairly easily, but we were using the same port and (relative) uri on the nlb as individual machines (so we can treat an individual machine the same as the farm if we choose). Obviously each machine has a different name, but this setup also allows you to test individual servers by spoofing the host - for example, by editing your HOSTS file to point [your farm name] to [test server IP].</p> <p>The biggest pain we had was SSL; using TransportWithMessageCredential security, WCF refuses inbound http connections - so we had to set up the nlb to re-encrypt between the nlb and the server node - but not a biggie.</p> <p>The only other issue we had was with hosting WCF inside IIS, and WCF not being able to correctly identify the intended site (although IIS was fine) over http (but fine over https). To fix this I wrote a custom factory that simply ignored http completely (only listened on https) - which ties in neatly with the TransportWithMessageCredential requirements anyway, so I wasn't bothered by this.</p> <p>I wonder if you wouldn't get more joy by hosting on a standard port but as a different site (IP/host-header/etc).</p>
32,694
<p>In C++, is it safe to extend scope via a reference?</p> <p>In code, what I mean is: </p> <pre><code>MyClass&amp; badIdea() { MyClass obj1; ... return obj1; } </code></pre>
<p>It is NOT safe to extend the scope via a reference. Objects in C++ are not reference counted; when obj1 goes out of scope it will be destroyed, and referring to the result of badIdea() will only get you into trouble.</p>
<p>Please clarify what you do mean.</p> <p>Assuming you intend to do this:</p> <pre><code>int * p = NULL; { int y = 22; p = &amp;y; } *p = 77; // BOOM! </code></pre> <p>Then no, absolutely not, scope does not get extended by having a reference.</p> <p>You may want to look at smart pointers, e.g. from boost libraries: <a href="http://www.boost.org/doc/libs/1_36_0/libs/smart_ptr/smart_ptr.htm" rel="nofollow noreferrer">clickety</a></p>
32,032
<p>We have a classic ASP application that simply works and we have been loathe to modify the code lest we invoke the wrath of some long-dead Greek gods.</p> <p>We recently had the requirement to add a feature to an application. The feature implementation is really just a database operation requires minimal change to the UI.</p> <p>I changed the UI and made the minor modification to submit a new data value to the sproc call (sproc1).</p> <p>In sproc1 that is called directly from ASP, we added a new call to another sproc that happens to be located on another server, sproc2.</p> <p>Somehow, this does not work via our ASP app, but works in SQL Management Studio.</p> <p>Here's the technical details:</p> <ol> <li>SQL 2005 on both database servers. </li> <li>Sql Login is authenticating from the ASP application to SQL 2005 Server 1. </li> <li>Linked server from Server 1 to Server 2 is working. </li> <li>When executing sproc1 from SQL Management Studio - works fine. Even when credentialed as the same user our code uses (the application sql login). </li> <li>sproc2 works when called independently of sproc1 from SQL Management Studio. </li> <li>VBScript (ASP) captures an error which is emitted in the XML back to the client. Error number is 0, error description is blank. Both from the ADODB.Connection object and from whatever Err.Number/Err.Description yields in VBScript from the ASP side. </li> </ol> <p>So without any errors, nor any reproducibility (i.e. through SQL Mgmt Studio) - does anyone know the issue? </p> <p>Our current plan is to break down and dig into the code on the ASP side and make a completely separate call to Server 2.sproc2 directly from ASP rather than trying to piggy-back through sproc1. </p>
<p>Have you got <strong>SET NOCOUNT ON</strong> in both stored procedures? I had a similar issue once and whilst I can't remember exactly how I solved it at the moment, I know that had something to do with it!</p>
<p>Example code might help :) Are you trying to return two tables from the stored procedure; I don't think ADO 2.6 can handle multiple tables being returned.</p>
13,301
<p>If you have the following:</p> <pre><code>$var = 3; // we'll say it's set to 3 for this example if ($var == 4) { // do something } else if ($var == 5) { // do something } else if ($var == 2) { // do something } else if ($var == 3) { // do something } else { // do something } </code></pre> <p>If say 80% of the time <code>$var</code> is 3, do you worry about the fact that it's going through 4 if cases before finding the true case?</p> <p>I'm thinking on a small site it's not a big deal, but what about when that if statement is going to run 1000s of times a second?</p> <p>I'm working in PHP, but I'm thinking the language doesn't matter.</p>
<p>Here's how we did it when I used to write software for radar systems. (Speed matters in radar. It's one of the few places where "real time" actually means "real" instead of "fast".)</p> <p>[I'll switch to Python syntax, it's easier for me and I'm sure you can interpret it.]</p> <pre><code>if var &lt;= 3: if var == 2: # do something elif var == 3: # do something else: raise Exception else: if var == 4: # do something elif var == 5: # do something else: raise Exception </code></pre> <p>Your if-statements form a tree instead of a flat list. As you add conditions to this list, you jiggle around the center of the tree. The flat sequence of <em>n</em> comparisons takes, on average, <em>n</em>/2 steps. The tree leads to a sequence of comparisons that takes log(<em>n</em>) comparisons.</p>
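<p>The same idea as runnable Python, for anyone who wants to check that the tree dispatch agrees with the flat chain (the return labels are illustrative; the radar version raised an exception for out-of-range values instead of falling through to "other"):</p>

```python
def classify_flat(var):
    # the original flat chain: n/2 comparisons on average
    if var == 4:
        return "four"
    elif var == 5:
        return "five"
    elif var == 2:
        return "two"
    elif var == 3:
        return "three"
    return "other"

def classify_tree(var):
    # the tree version: halve the candidate set first, then test equality
    if var <= 3:
        if var == 2:
            return "two"
        if var == 3:
            return "three"
    else:
        if var == 4:
            return "four"
        if var == 5:
            return "five"
    return "other"

# both dispatchers must agree on every input
assert all(classify_flat(v) == classify_tree(v) for v in range(-2, 10))
```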
<p>With code where it is purely an equality analysis I would move it to a switch/case, as that provides better performance.</p> <pre><code>$var = 3; // we'll say it's set to 3 for this example switch ($var) { case 4: //do something break; case 5: //do something break; default: //do something when none of the provided cases match (same as the final else in an if/elseif chain) break; } </code></pre> <p>Now if you're doing more complicated comparisons I would either nest them in the switch, or just use the elseif.</p>
20,822
<p>I've been battling PHP's email reading functions for the better part of two days. I'm writing a script to read emails from a mailbox and save any attachments onto the server. If you've ever done something similar, you might understand my pain: <strong>PHP doesn't play well with email!</strong></p> <p>I've connected to the POP3 server and I can iterate the files. Here's a rough outline of the code:</p> <pre><code>if (!$mbox = imap_open ("{myserver.com:110/pop3/notls}INBOX", "u", "p")) die ('Cannot connect/check mail! Exiting'); if ($hdr = imap_check($mbox)) $msgCount = $hdr-&gt;Nmsgs; else die ("Failed to get mail"); foreach ($overview as $message) { $msgStruct = imap_fetchstructure($mbox, $message-&gt;msgno); // if it has parts, there are attachments that need reading if ($msgStruct-&gt;parts) { foreach ($msgStruct-&gt;parts as $key =&gt; $part) { switch (strtoupper($part-&gt;subtype)) { case 'GIF': case 'JPEG':case 'PNG': //do something - but what?! break; } } } } </code></pre> <p>I've marked where I'm stuck. I can use <code>imap_fetchbody($mbox, $message-&gt;msgno, $key+1)</code> but that gets me a bunch of data like this:</p> <pre><code>/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8S EhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEU Hh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAAR CAHiAi0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAA AgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkK FhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWG h4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl 5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREA ... </code></pre> <p>I'm lead to believe that this is MIME data. I need it as an image! I've seen several classes bombing around the internet that claim to do the required wizardry. I can't get any of them to work. 
I don't understand why I'm finding this so hard!</p> <p><strong>In short</strong>, I'm looking for something that can turn a raw MIME string into real data.</p>
<p>I found a quick guide how to treat emails with PHP: <a href="http://www.linuxscope.net/articles/mailAttachmentsPHP.html" rel="nofollow noreferrer">here</a>.</p> <p>At the bottom of that page there's a attachment-body <code>echo</code>:</p> <pre><code>if (substr($ContentType,0,4) == "text") { echo imap_qprint($fileContent); } else { echo imap_base64($fileContent); } </code></pre> <p>I guess this is what you might need... </p> <p>(edit: in your case if it's image always you can skip the <code>if</code> part. And of course, save the file instead of echoing it:)</p>
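<p>For what it's worth, <code>imap_base64()</code> in the snippet above is plain base64 decoding, and the attachment data shown in the question is in fact a base64-encoded JPEG. A quick Python sketch that decodes the first 16 characters of it and checks the JPEG signature:</p>

```python
import base64

# First 16 base64 characters of the attachment data from the question
chunk = "/9j/4AAQSkZJRgAB"

decoded = base64.b64decode(chunk)

# A JPEG file starts with the SOI marker FF D8 FF, and JFIF files
# carry the ASCII string "JFIF" in the APP0 segment right after it.
print(decoded[:3] == b"\xff\xd8\xff")  # → True
print(b"JFIF" in decoded)              # → True
```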
<p>MIME data is base-64 encoded, so I think you should be able to decode it using base64_decode</p>
19,659
<p>I have a Flex App that allows you to dynamically build an animation. It has a UI and a display of the resulting animation, which is a custom Class extending UIComponent. Is there some way to allow users to save/export the dynamically generated Component to a SWF, for reuse in other Flash/Flex Apps?</p>
<p>You can easily export a screenshot, but I don't think you can save a SWF. See this thread: <a href="http://www.actionscript.org/forums/showthread.php3?t=145846" rel="nofollow noreferrer">http://www.actionscript.org/forums/showthread.php3?t=145846</a></p>
<p>You can export the component (or components) in a SWC design-time library or an RSL run-time library to share between applications.</p>
38,346
<p>I'm sure this will be a simple one, but I have a project that started as a test.<br> When it was created it was saved as "Project2.dpr".</p> <p>Now the test is no longer a 'test', and I would like to change the project's name to something more meaningful. </p> <p>What's the best way to do this?</p> <p>Are there any issues with just changing the file name and the Program line to the new name? i.e.</p> <p>meaningful.dpr</p> <pre><code>Program meaningful; </code></pre> <p>Note: Delphi 7, if it matters.</p>
<p>Just do "Save Project as" from the file menu in Delphi giving it the name you want and, later on when you feel like, remove the Project2.* files from your folder as they are not needed anymore.</p>
<p>FYI: Starting with Delphi 8 you can simply right-click the project in the project manager and select "Rename" (or simply press F2). This has the benefit over the "Save As" approach that you don't end up with a copy.</p> <p>(sorry, no Delphi 7 - thanks to Lars for the update)</p>
38,086
<p>I have the HTML given below:</p> <pre><code>&lt;ul id="thumbsPhotos"&gt; &lt;li src="/images/1alvaston-hall-relaxing-lg.jpg" onclick="updatePhoto (this.title)"&gt;&lt;img src="/images/1alvaston-hall-relaxing-sl.jpg" width="56" height="56"&gt;&lt;/li&gt; &lt;li onclick="updatePhoto(this.title)" src=""&gt;&lt;img src="" width="56" height="56"&gt;&lt;/li&gt; &lt;li onclick="updatePhoto(this.title)" src=""&gt;&lt;img src="" width="56" height="56"&gt;&lt;/li&gt; &lt;/ul&gt; </code></pre> <p>Now, I want to replace all the <code>src</code> in <code>&lt;li&gt;</code> tags not in <code>&lt;img&gt;</code> tags using <code>InnerHTML</code>. With this, my output will be:</p> <pre><code>&lt;ul id="thumbsPhotos"&gt; &lt;li title="/images/1alvaston-hall-relaxing-lg.jpg" onclick="updatePhoto(this.title)"&gt;&lt;img src="/images/1alvaston-hall-relaxing-sl.jpg" width="56" height="56"&gt;&lt;/li&gt; &lt;li onclick="updatePhoto(this.title)" title=""&gt;&lt;img src="" width="56" height="56"&gt;&lt;/li&gt; &lt;li onclick="updatePhoto(this.title)" title=""&gt;&lt;img src="" width="56" height="56"&gt;&lt;/li&gt; &lt;/ul&gt; </code></pre>
<p>Not tested, but here's a regex which might do it for you...</p> <pre><code>// find: &lt;li ([^&gt;]*)src="(.*?)"(.*?)&gt; // replace: &lt;li $1title="$2"$3&gt; </code></pre> <p><strong>Update</strong>: tested and it works on your example.</p> <p>If you wanted to run this on the client side using Javascript (for whatever whacky reason), you could do this:</p> <pre><code>var ul = document.getElementById("thumbsPhotos"); ul.innerHTML = ul.innerHTML.replace( /&lt;li ([^&gt;]*)src="(.*?)"(.*?)&gt;/g, '&lt;li $1title="$2"$3&gt;' ); </code></pre>
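<p>The same find/replace, run through Python's <code>re</code> module against a line from the question's markup, just to show the groups behave as intended (the <code>&lt;img&gt;</code> tag's <code>src</code> is untouched because the pattern anchors on <code>&lt;li </code>):</p>

```python
import re

html = ('<li src="/images/1alvaston-hall-relaxing-lg.jpg" '
        'onclick="updatePhoto(this.title)">'
        '<img src="/images/1alvaston-hall-relaxing-sl.jpg" '
        'width="56" height="56"></li>')

# src= on the <li> becomes title=; the <img> src is left alone
out = re.sub(r'<li ([^>]*)src="(.*?)"(.*?)>', r'<li \1title="\2"\3>', html)

print('title="/images/1alvaston-hall-relaxing-lg.jpg"' in out)     # → True
print('<img src="/images/1alvaston-hall-relaxing-sl.jpg"' in out)  # → True
```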
<p>You could most likely use a RegEx replace that matches on the src attribute and do the conversion. I see that you want to also add the title attribute even if there is no match, that should be possible as well.</p> <p>I'll try to dig up some RegEx examples for this.</p>
27,949
<p>I'm using a Visual Studio web setup project to install an application that extends the functionality of Project Server. I want to call a method from the PSI (Project Server Interface) from one of the custom actions of my setup project, but each call fails with a "401 Unauthorized access" error. What should I do to be able to access the PSI? The same code, when used from a Console Application, works without any issues.</p>
<p>It sounds like in the console situation you are running with your current user credentials, which have access to the PSI. When running from the web, it's running with the creds of the IIS application instance. I think you'd either need to set up delegation to pass the session creds to the IIS application, or use some static creds for your IIS app that have access to the PSI.</p>
<p>I finally found the answer. You can call the LoginWindows PSI service and set the credentials to NetworkCredentials using the appropriate user, password and domain tokens. Then you can call any PSI method, as long as the credentials are explicit. Otherwise, using DefaultCredentials you'll get an Unauthorized Access error, because an MSI is run under the Local System account.</p>
4,111
<p>I'm trying to create a deployment tool that will install software based on the hardware found on a system. I'd like the tool to be able to determine if the optical drive is a writer (to determine if burning software should be installed) or can read DVDs (to determine if a player should be installed). I tried using the following code </p> <pre><code>strComputer = "." Set objWMIService = GetObject("winmgmts:\\" &amp; strComputer &amp; "\root\cimv2") Set colItems = objWMIService.ExecQuery("Select * from Win32_CDROMDrive") For Each objItem in colItems Wscript.Echo "MediaType: " &amp; objItem.MediaType Next </code></pre> <p>but it always responds with CD-ROM</p>
<p>You can use WMI to enumerate what Windows knows about a drive; get the <a href="http://msdn.microsoft.com/en-us/library/aa394132(VS.85).aspx" rel="nofollow noreferrer"><code>Win32_DiskDrive</code></a> instance, from which you should be able to grab the <a href="http://msdn.microsoft.com/en-us/library/aa394346(VS.85).aspx" rel="nofollow noreferrer"><code>Win32_PhysicalMedia</code></a> information for the physical media the drive uses; its <a href="http://msdn.microsoft.com/en-us/library/aa394346(VS.85).aspx" rel="nofollow noreferrer">MediaType</a> property tells you what media it uses (CD, CDRW, DVD, DVDRW, etc.).</p>
<p>Platform SDK - IDiscMaster::EnumDiscRecorders (XP / 2003)</p> <p>DirectX and DirectShow have extensive interfaces for working with DVDs.</p> <p>Else enumerate disk drives and try firing a DeviceIoControl code that supports extracting the type info. </p> <p>Good luck</p>
11,543
<p>I have an intermittent problem with some code that writes to a Windows Event Log, using C# and .Net's <code>EventLog</code> class.</p> <p>Basically, this code works day-to-day perfectly, but very occasionally, we start getting errors like this:</p> <blockquote> <p>"System.ArgumentException: Only the first eight characters of a custom log name are significant, and there is already another log on the system using the first eight characters of the name given. Name given: 'Application', name of existing log: 'Application'."</p> </blockquote> <p>I can identify from the other information on our logs that the call stack affected is like this - You can clearly see I am in fact trying to write to an existing <code>LB_Email</code> log (<code>LogEmail</code> is called first):</p> <pre><code>public static void LogEmail(string to, string type) { string message = String.Format("{0}\t{1}\t{2}", DateTime.Now, to, type); Log(message, "LB_Email", EventLogEntryType.Information); } private static void Log(string message, string logName, EventLogEntryType type) { using (EventLog aLog = new EventLog()) { aLog.Source = logName; aLog.WriteEntry(message, type); } } </code></pre> <p>Once the errors start occurring, it seems like access to our <code>LB_Email</code> eventlog is locked somehow - viewing properties on the particular eventlog shows most information greyed-out and unchangeable, and other processes appear to be prevented from logging to that log too. However, I am seeing the error (which uses the same Log method above) via a try-catch that logs to an 'LB_Error' log, and that continues to function as expected.</p> <p>I am calling this code from a multi-threaded application, but I have been unable to identify if the code above is thread-safe or not.</p> <p>I can also confirm that the log in question is working again fine after killing and restarting the process... and it had appropriate settings to reuse entries when it got full... 
though I don't think that was the issue.</p> <p>I'd love to hear your thoughts and suggestions.</p>
<p>The <a href="http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.aspx" rel="nofollow noreferrer">documentation</a> states that:</p> <blockquote> <p>You can only use the Source to write to one log at a time</p> </blockquote> <p>So I suspect this problem is caused by your multithreaded app calling the <code>Log</code> method more than once at a given time and for the same Source.</p> <p>I suggest that, instead of a static class (or methods), you use a thread-safe singleton class for logging these events.</p> <p>EDIT:</p> <p>Jon Skeet has an <a href="http://www.yoda.arachsys.com/csharp/singleton.html" rel="nofollow noreferrer">excellent article</a> on singletons.</p> <p>If you don't want to implement a singleton class you can do something like this:</p> <pre><code> static readonly object lockObj = new object(); public static void LogEmail(string to, string type) { string message = String.Format("{0}\t{1}\t{2}", DateTime.Now, to, type); Log(message, "LB_Email", EventLogEntryType.Information); } private static void Log(string message, string logName, EventLogEntryType type) { lock (lockObj) { using (EventLog aLog = new EventLog()) { aLog.Source = logName; aLog.WriteEntry(message, type); } } } </code></pre> <p>Hope this solves your problem.</p>
<p>Thanks Bruno,</p> <p>So, am I mistaken in thinking that the EventLog instance in the Log method is different from the EventLog instance in the same method call in a different thread? Or am I just getting confused about object instances inside a static method?</p> <p>OK, so I have several wrapper methods to the Log(...) method. If I moved the Log method into a singleton class, changed the wrappers (LogEmail, LogXxxx, LogYyy etc then I could keep my Log.Zzzz interfaces the same, but leverage the security of the singleton LogSingleton.Instance.Log(...) from the current Logs. OR because I want to write to different logs, would each one require its own LogSingletonXxx?</p> <p>You can tell I'm confused :) Yes - I'd really appreciate some synchronisation code :)</p> <p>Nij</p>
42,122
<p>I'd like to eliminate dependencies on hardcoded paths for configuration data in my Java apps, I understand that using ResourceBundle will help me use the classloader to find resources.</p> <p>Can someone tell me how I would replace a hardcoded path to a resource (say a .properties configuration data file required by a class) with appropriate use of ResourceBundle? Simple clear example if possible, thanks all.</p>
<p>Prior to Java 6, ResourceBundle typically allowed:</p> <ul> <li>Strings from a group of localised properties files, using <a href="http://java.sun.com/javase/6/docs/api/java/util/PropertyResourceBundle.html" rel="nofollow noreferrer">PropertyResourceBundle</a></li> <li>Objects from a group of localised classes, using <a href="http://java.sun.com/javase/6/docs/api/java/util/ListResourceBundle.html" rel="nofollow noreferrer">ListResourceBundle</a></li> </ul> <p>Java 6 comes with the <a href="http://java.sun.com/javase/6/docs/api/java/util/ResourceBundle.Control.html" rel="nofollow noreferrer">ResourceBundle.Control</a> class which opens the door to other sources of ResourceBundles, for example:</p> <ul> <li>XML files (see example 2 in Javadoc)</li> <li>Database rows</li> </ul> <p>Hope this helps.</p>
<p>The trick behind ResourceBundle.getBundle(..) is the use of the classloader. You can load anything that's on your classpath by accessing it via this.getClass().getClassLoader(). ResourceBundle.getBundle(..) is a practical helper for using it in the resource/localization area.</p>
18,712
<p>I'm currently in the process of updating a site from preview 2 of ASP.NET MVC to the Beta release. I'm down to my last compile error with no solution in site after an exhaustive search. I have some code in Global.asax.cs which sets up IOC using the Windsor container:</p> <pre><code>ControllerBuilder.Current.SetControllerFactory(typeof(WindsorControllerFactory)); </code></pre> <p>The compiler is complaining that:</p> <pre><code>The type 'System.Web.Mvc.IControllerFactory' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. </code></pre> <p>The weird thing is that this assembly <strong>is</strong> referenced. I reference it from the MVC beta assemblies folder and set to copy local. I can even delete the bin directory, rebuild, then view the assembly in reflector and see the interface. I've tried removing and re-adding the reference, changing web.config's assemblies and namespace sections. I even removed System.Web.Mvc from the GAC to make sure it was using the correct copy.</p> <p>While writing this question it occurred to me that perhaps Windsor is internally expecting the older version of the assembly. I'm downloading the latest from MvcContrib to determine if this is the issue and I'll post the result. If this is the case, shouldn't the version number of the assembly have changed with the latest release to make this obvious?</p> <p>Thanks for reading, Brett</p>
<p>PublicKeyToken=null doesn't seem right...</p> <p><strong>Edit:</strong> I was right, the PublicKeyToken should be "31bf3856ad364e35". It sounds like they're linked against a private build of System.Web.Mvc.dll that isn't signed.</p>
<p>Check that you've updated the appropriate assembly references to 3.5.0.0 in your web.config file, as described in the update notes for the beta.</p>
46,718
<p>The install instructions are:</p> <pre><code>$ python setup.py build $ sudo python setup.py install # or su first </code></pre> <p>This gives me an .egg file. How do I tell the install to dump the files as a normal, uncompressed library?</p> <p>Thanks!</p>
<p>OK, I hate to answer my own question, but:</p> <p>Find your Python site-packages directory (mine is /usr/local/lib/python2.5/site-packages)</p> <p>then:</p> <pre><code>$ unzip MySQL_python-1.2.2-py2.5-linux-i686.egg </code></pre> <p>This worked fine for me</p>
<p>This will tell setuptools to not zip it up:</p> <pre><code>sudo python setup.py install --single-version-externally-managed </code></pre>
33,627
<p>How can I find out the number of dimensions in an array in Classic ASP (VBScript)?</p> <p>I am being passed an array with multiple dimensions, but I only want to look at the last. Seems easy in other languages.</p>
<pre><code>Ubound(MySingleDimensionalArray) ' Upper bound (index of the last element) of a one-dimensional array Ubound(MyMultiDimensionalArray, 1) ' Upper bound of the first dimension Ubound(MyMultiDimensionalArray, 2) ' Upper bound of the second dimension </code></pre>
<pre><code>function ArrayDimensions( theArray ) dim Result,test Result = 0 if isarray(theArray) then on error resume next do test = -2 test = ubound(theArray,result+1) if test &gt; -2 then result = result + 1 loop until test=-2 on error goto 0 end if ArrayDimensions = Result end function </code></pre>
34,441
<p>I am currently doing some socket programming using C/C++. To be able to use a somewhat cleaner interface, and a more OO structure, I decided to write a few simple wrapper classes around parts of the C socket API, but while doing so I stumbled upon a problem:</p> <p>Given the following code:</p> <pre><code>// Global method int foo(int x) { return x; } // Class that calls the global method class FooBar { public: void foo() { return; }; void baz() { foo(1); } }; </code></pre> <p>g++ gives the following error message:</p> <pre><code>test.cpp: In member function ‘void FooBar::baz()’: test.cpp:10: error: no matching function for call to ‘FooBar::foo(int)’ test.cpp:9: note: candidates are: void FooBar::foo() </code></pre> <p>Renaming the class method solves the problem.</p> <p>Why is it that there is some kind of naming conflict even though the method signatures are different? What is the best way to fix this?</p> <p>Thanks /Erik</p>
<p>The problem is that it first looks in the scope of your class, and finds a foo function. The lookup will stop then, and the compiler tries to match arguments. Since it only has the one foo function in that scope in your class, calling the function fails.</p> <p>You need to explicitly state that you want to call the free function:</p> <pre><code>::foo(1); </code></pre> <p>Another solution is to declare the function within baz:</p> <pre><code>void baz() { int foo(int); foo(1); } </code></pre> <p>The scope that the foo function is assumed to be defined in is the surrounding namespace of your class.</p>
<p>You must use the scope resolution operator; try:</p> <p><code>::foo(1);</code></p>
41,482
<p>I have my J2EE application deployed into a JBossAS. My Application has to respond to two different hostnames (<a href="https://foo.com/myApp" rel="nofollow noreferrer">https://foo.com/myApp</a> and <a href="https://bar.com/myApp" rel="nofollow noreferrer">https://bar.com/myApp</a>). It is the same instance of the app serving those two hostnames.</p> <p>But I don't know how to configure the keystores. I need different keystores for each hostname. Is there a way to tie a virtual host in JBoss' server.xml to a specific connector?</p> <p>Or do I have to use two different IP-addresses and create a connector for each?</p> <p>A solution that does not require a second IP-address would be greatly appreciated.</p>
<p>With SSL you have to use two different I.P. addresses if you wish to use different SSL certificates. This isn't a shortcoming in Tomcat/JBoss, it is just the reality of the protocol.</p> <p>I can't remember the technical reason off the top of my head (Google knows), but it comes down to the server not being able to read the domain name until it has decoded the incoming SSL request.</p> <p>To use two different keystores you will need to define two different connectors (using different I.P. addresses or ports) in the jbossweb-tomcat55.sar/server.xml file. This will get your SSL certificates working, but if you only have one I.P. your second certificate will need to be setup on a non-standard port.</p>
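<p>To make the two-connector option concrete, here is a sketch of what the pair might look like in jbossweb-tomcat55.sar/server.xml. The IP addresses, keystore paths and passwords are placeholders, and the attribute names assume the Tomcat 5.5-style SSL connector configuration, so verify them against your JBoss version:</p>

```xml
<!-- Connector for foo.com -->
<Connector address="192.0.2.10" port="443" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="${jboss.server.home.dir}/conf/foo.keystore"
           keystorePass="changeit" />

<!-- Connector for bar.com -->
<Connector address="192.0.2.11" port="443" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="${jboss.server.home.dir}/conf/bar.keystore"
           keystorePass="changeit" />
```

<p>Each connector binds to its own address, so each can present its own keystore; both virtual hosts can still map to the same deployed application.</p>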
<p>Two apps can share one keystore which holds two certificates. The certificate is issued for a given domain. Define the second domain as a virtual host with a different domain and do not touch the server.xml. It should work this way.</p>
25,532
<p>Let's say we're tracking the end-user IP for a web service:</p> <pre><code>ip = Request.ServerVariables("HTTP_X_FORWARDED_FOR") If ip = "" Then ip = Request.ServerVariables("REMOTE_ADDR") End If </code></pre> <p>I've read that this is the best method of retrieving end-user IP because it works even for users on a transparent proxy.</p> <p>If we're using the end-user IP address to filter malicious users, are there are any security implications with the above method instead of, say, just using Request.ServerVariables("REMOTE_ADDR")?</p> <p>For example, if we banned a malicious user by end-user IP, could they easily change their IP via a proxy and continue using our web service?</p> <p>Thanks in advance for your help.</p>
<p><code>REMOTE_ADDR</code> is generated by the web server based on the connection from the client. <code>HTTP_X_FORWARDED_FOR</code> is based on a HTTP header sent by the client.</p> <p>You can't trust input from the client, particularly input that is easily faked, such as HTTP headers. Clients can stick <strong>anything</strong> into that <code>HTTP_X_FORWARDED_FOR</code> header.</p>
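<p>To make the point concrete, here is a hedged Python sketch of the usual mitigation (the function and parameter names are illustrative, not from any particular framework): only honour the forwarded header when the direct peer is a proxy you control.</p>

```python
def client_ip(remote_addr, forwarded_for=None, trusted_proxies=frozenset()):
    """Resolve the end-user IP, trusting X-Forwarded-For only when the
    direct peer (REMOTE_ADDR) is a proxy we operate ourselves."""
    if forwarded_for and remote_addr in trusted_proxies:
        # X-Forwarded-For may hold a chain: "client, proxy1, proxy2"
        return forwarded_for.split(",")[0].strip()
    # Otherwise ignore the header entirely -- the client can fake it.
    return remote_addr
```

<p>With this scheme, a malicious client sending a forged header from outside your proxy tier is still identified by the connection-level address.</p>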
<p>If the users are using a transparent proxy then the above code will get the real IP address. If they're using an anonymous proxy, though (like Anonymizer) then there's really no way to get the users actual IP address.</p>
41,093
<p>Does anyone know of existing software or algorithms to calculate a package size for shipping multiple items?</p> <p>I have a bunch of items in our inventory database with length, width and height dimensions defined. Given these dimensions I need to calculate how many of the purchased items will fit into predefined box sizes.</p>
<p>This is a <a href="http://en.wikipedia.org/wiki/Bin_packing_problem" rel="noreferrer">Bin Packing</a> problem, and it's NP-hard. For small number of objects and packages, you might be able to simply use the brute force method of trying every possibility. Beyond that, you'll need to use a heuristic of some sort. The Wikipedia article has some details, along with references to papers you probably want to check out.</p> <p>The alternative, of course, is to start with a really simple algorithm (such as simply 'stacking' items) and calculate a reasonable upper-bound on shipping using that, then if your human packers can do better, you make a slight profit. Or discount your calculated prices slightly on the assumption that your packing is not ideal.</p>
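<p>As a hedged illustration of the heuristic route, here is a first-fit-decreasing sketch in Python. It treats each item as a single volume, which ignores geometry entirely, so it only gives a rough lower bound on the number of boxes:</p>

```python
def first_fit_decreasing(volumes, bin_capacity):
    """Greedy 1-D bin packing: place each item (largest first) into the
    first bin with room, opening a new bin when none fits."""
    bins = []  # each bin is a list of item volumes
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= bin_capacity:
                b.append(v)
                break
        else:  # no existing bin had room
            bins.append([v])
    return bins
```

<p>First-fit decreasing is not optimal, but for the one-dimensional case it is known to stay within roughly 11/9 of the optimal bin count, which is often good enough for a shipping estimate.</p>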
<p>After a lot of searching I have found a <a href="https://github.com/davidmchapman/3DContainerPacking" rel="nofollow noreferrer">GitHub</a> repository that might help someone. The function <code>PackingService.Pack()</code> takes a list of <code>Container</code> and a list of <code>Item</code>(s) to be packed as parameters and returns a result which contains a lot of information, including </p> <p>"container(s) packed in percentage and list of packed and unpacked items"</p>
16,979
<p>I've been using ISAPI_Rewrite from Helicon (<a href="http://www.helicontech.com/isapi_rewrite/" rel="nofollow noreferrer">http://www.helicontech.com/isapi_rewrite/</a>) on a Server 2003 box for years and have always had good luck with it.</p> <p>I'm migrating all the sites on the 2003 box to a new shiny Server 2008 box. I would prefer to not purchase a new license and I have heard that IIS 7 will have capability built in.</p> <p>All the rewrites are setup in a .htaccess configuration file just like mod_rewrite for Apache.</p> <p>Does anyone know if this capability ever got baked into IIS 7 and if so do you know of any good articles that explain how to get it all enabled?</p> <p>Thanks.</p>
<p>IIS7 does have the functionality built in, as you mentioned. You can either use a custom HttpModule, as described in <a href="http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx" rel="nofollow noreferrer">Tip/Trick: Url Rewriting with ASP.NET</a>.</p> <p>Alternatively, you could install the the <a href="http://www.iis.net/downloads/default.aspx?tabid=34&amp;g=6&amp;i=1691" rel="nofollow noreferrer">IIS7 Rewrite Module</a> as <a href="http://www.hanselman.com/blog/ASPNETMVCAndTheNewIIS7RewriteModule.aspx" rel="nofollow noreferrer">described here</a>.</p>
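<p>Once the rewrite module is installed, rules live in web.config rather than in a .htaccess file; a hedged sketch (the rule name and pattern are purely illustrative):</p>

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Example redirect" stopProcessing="true">
        <match url="^old-page\.html$" />
        <action type="Redirect" url="new-page" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

<p>The module's IIS Manager UI also offers an "Import Rules" feature that can translate many mod_rewrite-style rules from an existing .htaccess file, which should ease the migration from ISAPI_Rewrite.</p>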
<p>Yes, check out the <a href="http://www.iis.net/extensions/URLRewrite" rel="nofollow noreferrer">IIS 7 URL Rewrite Module</a>.</p>
39,079
<p>I've got a text file full of records where each field in each record is a fixed width. My first approach would be to parse each record simply using string.Substring(). Is there a better way? </p> <p>For example, the format could be described as:</p> <pre><code>&lt;Field1(8)&gt;&lt;Field2(16)&gt;&lt;Field3(12)&gt; </code></pre> <p>And an example file with two records could look like:</p> <pre><code>SomeData0000000000123456SomeMoreData Data2 0000000000555555MoreData </code></pre> <p>I just want to make sure I'm not overlooking a more elegant way than Substring().</p> <hr> <p><strong>Update:</strong> I ultimately went with a regex like Killersponge suggested:</p> <pre><code>private readonly Regex reLot = new Regex(REGEX_LOT, RegexOptions.Compiled); const string REGEX_LOT = "^(?&lt;Field1&gt;.{6})" + "(?&lt;Field2&gt;.{16})" + "(?&lt;Field3&gt;.{12})"; </code></pre> <p>I then use the following to access the fields:</p> <pre><code>Match match = reLot.Match(record); string field1 = match.Groups["Field1"].Value; </code></pre>
<p>Use <a href="http://www.filehelpers.net/" rel="noreferrer">FileHelpers</a>.</p> <p>Example:</p> <pre><code>[FixedLengthRecord()] public class MyData { [FieldFixedLength(8)] public string someData; [FieldFixedLength(16)] public int SomeNumber; [FieldFixedLength(12)] [FieldTrim(TrimMode.Right)] public string someMoreData; } </code></pre> <p>Then, it's as simple as this:</p> <pre><code>var engine = new FileHelperEngine&lt;MyData&gt;(); // To Read Use: var res = engine.ReadFile("FileIn.txt"); // To Write Use: engine.WriteFile("FileOut.txt", res); </code></pre>
<p>Nope, Substring is fine. That's what it's for.</p>
19,675
<p>I am trying to test the likelihood that a particular clustering of data has occurred by chance. A robust way to do this is Monte Carlo simulation, in which the associations between data and groups are randomly reassigned a large number of times (e.g. 10,000), and a metric of clustering is used to compare the actual data with the simulations to determine a p value.</p> <p>I've got most of this working, with pointers mapping the grouping to the data elements, so I plan to randomly reassign pointers to data. THE QUESTION: what is a fast way to sample without replacement, so that every pointer is randomly reassigned in the replicate data sets?</p> <p>For example (these data are just a simplified example):</p> <blockquote> <p>Data (n=12 values) - Group A: 0.1, 0.2, 0.4 / Group B: 0.5, 0.6, 0.8 / Group C: 0.4, 0.5 / Group D: 0.2, 0.2, 0.3, 0.5</p> </blockquote> <p>For each replicate data set, I would have the same cluster sizes (A=3, B=3, C=2, D=4) and data values, but would reassign the values to the clusters.</p> <p>To do this, I could generate random numbers in the range 1-12, assign the first element of group A, then generate random numbers in the range 1-11 and assign the second element in group A, and so on. The pointer reassignment is fast, and I will have pre-allocated all data structures, but the sampling without replacement seems like a problem that might have been solved many times before.</p> <p>Logic or pseudocode preferred.</p>
<p>Here's some code for sampling without replacement based on Algorithm 3.4.2S of Knuth's book Seminumeric Algorithms.</p> <pre><code>void SampleWithoutReplacement ( int populationSize, // size of set sampling from int sampleSize, // size of each sample vector&lt;int&gt; &amp; samples // output, zero-offset indicies to selected items ) { // Use Knuth's variable names int&amp; n = sampleSize; int&amp; N = populationSize; int t = 0; // total input records dealt with int m = 0; // number of items selected so far double u; while (m &lt; n) { u = GetUniform(); // call a uniform(0,1) random number generator if ( (N - t)*u &gt;= n - m ) { t++; } else { samples[m] = t; t++; m++; } } } </code></pre> <p>There is a more efficient but more complex method by Jeffrey Scott Vitter in "An Efficient Algorithm for Sequential Random Sampling," ACM Transactions on Mathematical Software, 13(1), March 1987, 58-67. </p>
<p>Another algorithm for sampling without replacement is described <a href="http://rosettacode.org/wiki/Knuth%27s_algorithm_S" rel="nofollow">here</a>. </p> <p>It is similar to the one described by John D. Cook in his answer and also from Knuth, but it has different hypothesis: The population size is unknown, but the sample can fit in memory. This one is called "Knuth's algorithm S".</p> <p>Quoting the rosettacode article: </p> <blockquote> <ol> <li>Select the first n items as the sample as they become available;</li> <li>For the i-th item where i > n, have a random chance of n/i of keeping it. If failing this chance, the sample remains the same. If not, have it randomly (1/n) replace one of the previously selected n items of the sample.</li> <li>Repeat #2 for any subsequent items.</li> </ol> </blockquote>
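<p>The quoted steps can be sketched in Python (this is a sketch of the technique, not code from the linked article):</p>

```python
import random

def algorithm_s(stream, n, rng=random):
    """Reservoir sampling (Knuth's Algorithm S): a uniform sample of
    size n, without replacement, from a stream of unknown length."""
    sample = []
    for i, item in enumerate(stream, start=1):
        if i <= n:
            sample.append(item)                # 1. take the first n items
        elif rng.random() < n / i:             # 2. keep the i-th with chance n/i
            sample[rng.randrange(n)] = item    #    ...replacing a random slot
    return sample
```

<p>Because each stream item occupies at most one slot, the result is always a sample without replacement, and it uses O(n) memory regardless of how long the stream is.</p>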
40,169
<p>I have two assemblies with the same name in the Global Assembly cache, but with different version numbers. How do I tell my program which version to reference?</p> <p>For the record, this is a VB.Net page in an ASP.Net web site.</p>
<p>As long as the version number is different (which would be required), you can specify the proper version through your web.config file. This is how I have things setup in one of my apps to reference the proper version of Crystal Reports, since we have multiple versions in the GAC:</p> <pre><code>&lt;system.web&gt; &lt;compilation&gt; &lt;assemblies&gt; &lt;add assembly="CrystalDecisions.Web, Version=11.5.3700.0, Culture=neutral, PublicKeyToken=692FBEA5521E1304"/&gt; &lt;add assembly="CrystalDecisions.Shared, Version=11.5.3700.0, Culture=neutral, PublicKeyToken=692FBEA5521E1304"/&gt; &lt;add assembly="CrystalDecisions.ReportSource, Version=11.5.3700.0, Culture=neutral, PublicKeyToken=692FBEA5521E1304"/&gt; &lt;add assembly="CrystalDecisions.Enterprise.Framework, Version=11.5.3300.0, Culture=neutral, PublicKeyToken=692FBEA5521E1304"/&gt; &lt;/assemblies&gt; &lt;/compilation&gt; &lt;/system.web&gt; </code></pre>
<p>To install an assembly in the GAC you have to give it a strong name. Strong names are never duplicated. So to specify which assembly you want to use you reference it by the strong name.</p>
30,310
<h3>ANSWER:</h3> <p>If you ever see these lines and are mystified like I was, here's what they mean.</p> <p><code>Thread[AWT-EventQueue-0] (Suspended (exception NullPointerException))</code></p> <p><code>EventDispatchThread.run() line: not available [local variables unavailable]</code></p> <p>It's not that the variables are unavailable because they are lurking behind a shroud of mystery in a library somewhere dank. No no, they just went out of scope! It's still your fault, you still have to find the null, and no, you can't blame the library. Important lesson!</p> <h3>QUESTION:</h3> <p>One of the most frustrating things for me, as a beginner, is libraries! It's a love/hate relationship: on the one hand they let me do things I wouldn't normally understand how to do with the code that I do understand; on the other hand, because I don't completely understand them, they sometimes throw a wrench in code that is otherwise working fine! It's because I don't understand the errors that can occur when using these libraries, because I didn't write them, and because Eclipse doesn't give me a great deal to go on when one of my imports starts acting up...</p> <p>So here's the problem: I've been working with java.awt.event to handle a bunch of JButtons on the screen for this and that. I get an error when I use one of the buttons I've made. The error is:</p> <p><code>Thread[AWT-EventQueue-0] (Suspended (exception NullPointerException))</code></p> <p><code>EventDispatchThread.run() line: not available [local variables unavailable]</code></p> <p>What does this mean? What could be causing it? I'm embarrassed to post code, but if you can stand to try to decipher my terrible style, here is the method that seems to cause this error to be thrown.</p> <pre><code>public void actionPerformed(ActionEvent e) { String cmd = e.getActionCommand(); String name; code... 
if(cmd.equals(&quot;Play&quot;)) { name = field.getText(); card = getCard(name); if(card != null) { if(rules.zoneHasCard(card, rules.hand)) { display.updateStatusMessage(rules.play(card)); field.setText(&quot;&quot;); display.updateHand(rules.zoneList(&quot;hand&quot;)); display.updateDiscard(rules.zoneList(&quot;Discard&quot;)); // This is the error here! The discard Zone was empty! } else { field.setText(&quot;You do not have &quot; + card.getName()); field.selectAll(); } } else { field.setText(&quot;That cardname is unused&quot;); field.selectAll(); } } } </code></pre>
<p>Welcome to the complexity of writing GUI code.</p> <p>When you run a Swing program, a background thread called the Event Dispatch Thread is created. When the user clicks on a JButton, for example, JButton creates and fires an event using this Event Dispatch Thread. Hence the name: it's the thread that dispatches events!</p> <p>Your code:</p> <pre><code>public void actionPerformed(ActionEvent e) { String cmd = e.getActionCommand(); String name; // more code... } </code></pre> <p>is called by this Event Dispatch Thread, so your code can handle the event.</p> <p>Somewhere within your code you are trying to do something with a variable that is currently equal to null. The error message is telling you, "hey while running some code on the event dispatch thread, I encountered a NullPointerException" <strong>in your code</strong>.</p> <p>Why are you not receiving more info? Possibly you configured Eclipse not to include debug info when compiling?</p> <p>For now, I recommend adding some lines to your actionPerformed method to show the state of variables:</p> <pre><code>System.out.println("field = " + field); System.out.println("rules = " + rules); System.out.println("display = " + display); </code></pre> <p>See if this shows you any nulls.</p> <p>Even if the NullPointerException comes from a library, the stack trace will show which line of your code called that library. But only if you've configured Eclipse to generate debugging info.</p> <p>In the longer term, work through the Sun's Swing Tutorial to learn more about these issues.</p>
<p>You might have forgotten to actually set an ActionCommand.</p> <p>In the <a href="http://java.sun.com/javase/6/docs/api/java/awt/event/ActionEvent.html#getActionCommand()" rel="nofollow noreferrer">ActionEvent API Doc</a> there's a note regarding possible null results of getActionCommand().</p>
44,661
<p>In a Linux embedded application I'm developing, there is the need to record some events that happen from time to time. These records are saved on an MTD flash device and once written there is no need to change them or do efficient searches, but read access is required to show the data back to the user. A big problem is that power can go away at any time, without a proper shutdown sequence. The frequency these events occur can be very slow (days/weeks), but several of them will occur at once. The data to be saved for each event is strongly typed: date, time, a couple of short text strings and several integers.</p> <p>Currently I inherited a solution based on jffs2 and SQLite that is far from optimal because the DB file sometimes gets corrupted. When this happens the whole file becomes unreadable and there is no way to understand if it was caused by a bug in jffs2, or in SQLite, or if the flash sector was bad, or if the power was cut at the wrong time.</p> <p>Is there a library or a combination of filesystem/library that can better help me solve this kind of problem? Or should I just use a text file with a CSV-like format?</p>
<p>I'm not an expert on embedded systems, but I would think that a CSV would probably be best. It basically can't be corrupted, or if it does get corrupted, you can easily see the error and fix it manually (adding a newline or just removing a line). I have been working on receiving data from an embedded system where they have a lot of corruption problems (partially on the system and partially during the phone-line transfer). It would be very helpful if it were in a CSV-type format so we could find the errors and remove or fix them instead of corrupting the entire data set.</p> <p>If you don't need to search within the system, then a CSV works perfectly.</p>
<p>Two csv/text files. Start a new pair each time the system restarts. Write each event to the first file, flush the file to store, write the record to the second file, then flush again.</p> <p>This way, if you crash during the first write all the data in the second copy (up until that write) will still be there.</p> <p>Make sure the flush is a full file system flush and not just the clib buffer flush.</p> <p>Maybe also place the files on separate file systems. Reserving space ahead of what you need could also help speed up the process.</p>
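<p>A minimal Python sketch of the write-twice-with-fsync idea (the function name and file layout are illustrative only):</p>

```python
import os

def log_event(record, primary_path, mirror_path):
    """Append the record to two files, forcing each one to stable storage
    before touching the next, so a power cut mid-write costs at most the
    newest record in one of the two copies."""
    for path in (primary_path, mirror_path):
        with open(path, "a") as f:
            f.write(record + "\n")
            f.flush()              # drain the C-library buffer...
            os.fsync(f.fileno())   # ...and push the OS page cache to disk
```

<p>On recovery you read whichever copy parses furthest; a truncated last line in one file is simply discarded because the other copy still holds every record fsynced before the cut.</p>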
20,770
<p>I am currently encountering a problem where under certain circumstances, the extruder stutters when it starts a new layer. I am printing on an Anycubic i3 Mega and am slicing with Cura 3.6.0. The problem seems to occur in the main part of prints, as well as in supports. However, it seems to only occur after a retraction has taken place. I have taken a video of the stuttering which can be found here: <a href="https://photos.app.goo.gl/G3TLKveMsLNRQmgv7" rel="nofollow noreferrer">https://photos.app.goo.gl/G3TLKveMsLNRQmgv7</a> When a print is done the stuttering results in walls looking like this: <a href="https://i.stack.imgur.com/AlAZQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AlAZQ.jpg" alt="In this case, the problem occurred in the support structure"></a> Can anyone help me figure out what is causing the stuttering? Thank you very much!</p>
<p>Your retraction settings may be too high. Direct drive extruders require less retraction than Bowden style extruders. Typical retraction settings for direct drive are 1.5mm at 50mm/s and for Bowden, 4mm at 50mm/s. The speed usually makes more of a difference than distance beyond a certain point.</p> <p>You can get away with smaller retraction settings if you increase travel speed because there will be less time to ooze. You could also try using Coasting.</p> <p>Anyway, try reducing your retraction settings if they're higher than what I stated above. Another alternative is to set an extra prime distance so that extra filament is extruded after the retraction.</p>
<p><em>Definition: <strong>Sparse layer fill</strong> (called stuttering by the OP)</em></p> <hr /> <h2>Why a sparsely filled support structure... (at the support bottom)</h2> <p>Support structures are added by Ultimaker Cura as the first part of the layer before it progresses to the rest of the print object. The bottom part of the support structure is definitely showing under extrusion, as if there was not enough filament available to print the support solidly. Actually, that is exactly the problem: there is not enough filament available for printing as a result of a retraction and the following extrusion after movement of the filament. The bottom part of the support is most probably printed after <em><strong>the head stopped far from the support</strong></em> (end of the previous layer) while printing your object. This means that the filament needs to be retracted, the head moved to the support structure, filament extruded (de-retracted) and printing of the support structure started. When retraction is not optimally tuned, the <em><strong>nozzle may not be primed correctly with filament and cause a sparsely printed support structure</strong></em>. A similar reasoning could apply to support structures being printed at the final stage of the layer (as long as there are large movements to the support structure requiring the activation of the retraction).</p> <h2>Why is the support better printed higher up...</h2> <p>You see that when Z advances above the thickness of the right part of the print, the support structure is better printed. This could be caused by the fact that the head now doesn't need to move far from the last position of the print to the support structure, so no retraction action is required.</p> <h2>What to do to print better support structures...</h2> <p>Try tuning your retraction settings; see e.g.
<a href="https://3dprinting.stackexchange.com/a/7004/">this answer</a> shows an image of a <a href="https://www.thingiverse.com/thing:1159886" rel="nofollow noreferrer">calibration print</a> to determine the optimal settings.</p> <p>Note that you can not only play with the filament retraction settings (<code>Enable Retraction</code>, <code>Retraction Speed</code> and <code>Retraction Distance</code>); the options <code>Enable Coasting</code> and <code>Coasting Volume</code> can also be used to stop extruding while the printer head prints the rest of the object, using the over-pressure of the molten filament in the nozzle. Finally, <code>Retraction Extra Prime Amount</code> can extrude some extra material so that the nozzle is optimally filled and ready for printing the support after the main print object. Also take care to choose the right <code>Support Speed</code>; printing too fast will result in lower quality.</p>
1,100
<p>How can I Submit an InfoPath form to a SharePoint library AND to an email box at the same time when the user hits SUBMIT?</p> <p>I need my form to be approved by several users in a particular order; then re-submitted to the SharePoint site, and to another email box so that the next approver can see the approvals, and approve their own, then submit it and have it drop onto Sharepoint again, etc.</p> <p>The email chain works (the form as an attachment), and the approvals show, but the form doesn't get updated on the SharePoint Form library. </p>
<p>You can add an additional DataSource for submission (send to SharePoint library) and add a rule to your submit button before the save&amp;close rules.</p> <p>BUT - I would suggest a method that is based on workflows and a form that is held on a SharePoint site. I had a very similar task, where approvals were needed in a staged manner.</p> <p>If there is a fixed number of approvers, create fields for each one - if the number is not fixed you will need some replacing rules that change the current approver with the next one. Then you will need one (or more) workflows that are triggered by a flag field (or more) that you promoted before. (Make them writable from the outside during publication - the workflow will need that) This field (or fields) trigger the workflow that sends the email. After sending it should clear that flag to avoid infinite looping.</p> <p>The mail should contain a link to the SharePoint library. And the approver should work on that library rather than sending the XML file through the network.</p> <p><em>Hint</em>: Enable versioning on that libary - and you (or the approvers) can keep track of the changes. </p> <p><em>Hint 2</em>: If this process will be part of the daily work, maybe a custom view and alerts will help the approvers keep track of the form's status.</p>
<p>Sorry for asking, but have you considered using a workflow rather than emails? I'm not sure whether the basic workflows in WSS 3.0 will cater for your scenario, but the MOSS workflow should be more than adequate.</p>
29,107
<p>How do you build a hierarchical set of tags with data in PHP?</p> <p>For example, a nested list:</p> <pre><code>&lt;div&gt; &lt;ul&gt; &lt;li&gt;foo &lt;/li&gt; &lt;li&gt;bar &lt;ul&gt; &lt;li&gt;sub-bar &lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; </code></pre> <p>This would be build from flat data like this:</p> <pre><code>nested_array = array(); nested_array[0] = array('name' =&gt; 'foo', 'depth' =&gt; 0) nested_array[1] = array('name' =&gt; 'bar', 'depth' =&gt; 0) nested_array[2] = array('name' =&gt; 'sub-bar', 'depth' =&gt; 1) </code></pre> <p>It would be nice if it were nicely formatted like the example, too.</p>
<p><em>Edit: Added formatting</em></p> <p>As already said in the comments, your data structure is somewhat strange. Instead of using text manipulation (like OIS), I prefer DOM:</p> <pre><code>&lt;?php $nested_array = array(); $nested_array[] = array('name' =&gt; 'foo', 'depth' =&gt; 0); $nested_array[] = array('name' =&gt; 'bar', 'depth' =&gt; 0); $nested_array[] = array('name' =&gt; 'sub-bar', 'depth' =&gt; 1); $nested_array[] = array('name' =&gt; 'sub-sub-bar', 'depth' =&gt; 2); $nested_array[] = array('name' =&gt; 'sub-bar2', 'depth' =&gt; 1); $nested_array[] = array('name' =&gt; 'sub-sub-bar3', 'depth' =&gt; 3); $nested_array[] = array('name' =&gt; 'sub-sub3', 'depth' =&gt; 2); $nested_array[] = array('name' =&gt; 'baz', 'depth' =&gt; 0); $doc = new DOMDocument('1.0', 'iso-8859-1'); $doc-&gt;formatOutput = true; $rootNode = $doc-&gt;createElement('div'); $doc-&gt;appendChild($rootNode); $rootList = $doc-&gt;createElement('ul'); $rootNode-&gt;appendChild($rootList); $listStack = array($rootList); // Stack of created XML list elements $depth = 0; // Current depth foreach ($nested_array as $nael) { while ($depth &lt; $nael['depth']) { // New list element if ($listStack[$depth]-&gt;lastChild == null) { // More than one level at once $li = $doc-&gt;createElement('li'); $listStack[$depth]-&gt;appendChild($li); } $listEl = $doc-&gt;createElement('ul'); $listStack[$depth]-&gt;lastChild-&gt;appendChild($listEl); array_push($listStack, $listEl); $depth++; } while ($depth &gt; $nael['depth']) { array_pop($listStack); $depth--; } // Add the element itself $li = $doc-&gt;createElement('li'); $li-&gt;appendChild($doc-&gt;createTextNode($nael['name'])); $listStack[$depth]-&gt;appendChild($li); } echo $doc-&gt;saveXML(); </code></pre> <p>Your formatting convention is kind of strange. 
Replace the last line with the following to achieve it:</p> <pre><code>printEl($rootNode); function printEl(DOMElement $el, $depth = 0) { $leftFiller = str_repeat("\t", $depth); $name = preg_replace('/[^a-zA-Z]/', '', $el-&gt;tagName); if ($el-&gt;childNodes-&gt;length == 0) { // Empty node echo $leftFiller . '&lt;' . $name . "/&gt;\n"; } else { echo $leftFiller . '&lt;' . $name . "&gt;"; $printedNL = false; for ($i = 0;$i &lt; $el-&gt;childNodes-&gt;length;$i++) { $c = $el-&gt;childNodes-&gt;item($i); if ($c instanceof DOMText) { echo htmlspecialchars($c-&gt;wholeText); } elseif ($c instanceof DOMElement) { if (!$printedNL) { $printedNL = true; echo "\n"; } printEl($c, $depth+1); } } if (!$printedNL) { $printedNL = true; echo "\n"; } echo $leftFiller . '&lt;/' . $name . "&gt;\n"; } } </code></pre>
<p>You mean something like</p> <pre><code>function array_to_list(array $array, $width = 3, $type = 'ul', $separator = ' ', $depth = 0) { $ulSpace = str_repeat($separator, $width * $depth++); $liSpace = str_repeat($separator, $width * $depth++); $subSpace = str_repeat($separator, $width * $depth); foreach ($array as $key=&gt;$value) { if (is_array($value)) { $output[(isset($prev) ? $prev : $key)] .= "\n" . array_to_list($value, $width, $type, $separator, $depth); } else { $output[$key] = $value; $prev = $key; } } return "$ulSpace&lt;$type&gt;\n$liSpace&lt;li&gt;\n$subSpace" . implode("\n$liSpace&lt;/li&gt;\n$liSpace&lt;li&gt;\n$subSpace", $output) . "\n$liSpace&lt;/li&gt;\n$ulSpace&lt;/$type&gt;"; } echo array_to_list(array('gg', 'dsf', array(array('uhu'), 'df', array('sdf')), 'sdfsd', 'sdfd')) . "\n"; </code></pre> <p>produces</p> <pre><code>&lt;ul&gt; &lt;li&gt; gg &lt;/li&gt; &lt;li&gt; dsf &lt;ul&gt; &lt;li&gt; &lt;ul&gt; &lt;li&gt; uhu &lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt; df &lt;ul&gt; &lt;li&gt; sdf &lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt; sdfsd &lt;/li&gt; &lt;li&gt; sdfd &lt;/li&gt; &lt;/ul&gt; </code></pre> <p>I know there's a little gap there if a sub-list doesn't start with an explanation.</p> <p>Personally I usually don't really care how the HTML looks as long as it's easy to work with in PHP.</p> <p>Edit: OK, it works if you run it through this first ...
:P</p> <pre><code>function flat_array_to_hierarchical_array(array &amp;$array, $depth = 0, $name = null, $toDepth = 0) { if ($depth == 0) { $temp = $array; $array = array_values($array); } if (($name !== null) &amp;&amp; ($depth == $toDepth)) { $output[] = $name; } else if ($depth &lt; $toDepth) { $output[] = flat_array_to_hierarchical_array(&amp;$array, $depth + 1, $name, $toDepth); } while ($item = array_shift($array)) { $newDepth = $item['depth']; $name = $item['name']; if ($depth == $newDepth) { $output[] = $name; } else if ($depth &lt; $newDepth) { $output[] = flat_array_to_hierarchical_array(&amp;$array, $depth + 1, $name, $newDepth); } else { array_unshift($array, $item); return $output; } } $array = $temp; return $output; } $arr = flat_array_to_hierarchical_array($nested_array); echo array_to_list($arr); </code></pre>
41,158
<p>I've been thinking a lot lately about how to go about doing functional programming in C (<em>not</em> C++). Obviously, C is a procedural language and doesn't really support functional programming natively.</p> <p>Are there any compiler/language extensions that add some functional programming constructs to the language? GCC provides <a href="http://gcc.gnu.org/onlinedocs/gcc/Nested-Functions.html" rel="noreferrer">nested functions</a> as a language extension; nested functions can access variables from the parent stack frame, but this is still a long way away from mature closures.</p> <p>For example, one thing that I think could be really useful in C is that anywhere where a function pointer is expected, you could be able to pass a lambda expression, creating a closure which decays into a function pointer. C++0x is going to include lambda expressions (which I think is awesome); however, I'm looking for tools applicable to straight C.</p> <p>[Edit] To clarify, I'm not trying to solve a particular problem in C that would be more suited to functional programming; I'm merely curious about what tools are out there if I wanted to do so.</p>
<p>You can use GCC's nested functions to simulate lambda expressions, in fact, I have a macro to do it for me:</p> <pre><code>#define lambda(return_type, function_body) \ ({ \ return_type anon_func_name_ function_body \ anon_func_name_; \ }) </code></pre> <p>Use like this:</p> <pre><code>int (*max)(int, int) = lambda (int, (int x, int y) { return x &gt; y ? x : y; }); </code></pre>
<p>What is it about C that you want to make functional, the syntax or the semantics? The semantics of functional programming could certainly be added to the C compiler, but by the time you were done, you'd essentially have the equivalent of one of the existing functional languages, such as Scheme, Haskell, etc.</p> <p>It would be a better use of time to just learn the syntax of those languages which directly support those semantics.</p>
26,558
<p>I'm trying to use a Microsoft Access database for a demo project that I'm thinking of doing in either CodeIgniter or CakePHP. Ignoring the possible folly of using Microsoft Access, I haven't been able to figure out precisely how the connection string corresponds to the frameworks' database settings. In straight PHP, I can use this code to connect to an Access database:</p> <pre><code>$db_connection = odbc_connect( "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=\\path\\to\\db.mdb", "ADODB.Connection", "", "SQL_CUR_USE_ODBC" ); </code></pre> <p>How do those strings correspond to the Code Igniter db settings? This doesn't seem to be quite working:</p> <pre><code>$db['access']['hostname'] = "{Microsoft Access Driver (*.mdb)}"; $db['access']['username'] = "ADODB.Connection"; $db['access']['password'] = ""; $db['access']['database'] = "\\path\\to\\db.mdb"; $db['access']['dbdriver'] = "odbc"; $db['access']['dbprefix'] = ""; $db['access']['pconnect'] = TRUE; $db['access']['db_debug'] = TRUE; $db['access']['cache_on'] = FALSE; $db['access']['cachedir'] = ""; $db['access']['char_set'] = "utf8"; $db['access']['dbcollat'] = "utf8_general_ci"; </code></pre>
<p>Try setting up a DSN and changing to the following:</p> <pre><code>$db['access']['hostname'] = "&lt;dsn name&gt;"; $db['access']['username'] = ""; $db['access']['password'] = ""; $db['access']['database'] = "&lt;dsn name&gt;"; </code></pre> <p>There's also a section in the CodeIgniter documentation that addresses connection strings:</p> <p><a href="http://codeigniter.com/user_guide/database/connecting.html" rel="nofollow noreferrer">http://codeigniter.com/user_guide/database/connecting.html</a></p>
<p>Would it be possible to use the SQL Express (free!) engine instead and just import/export your MS Access db? You could thank me later on that. :-)</p> <p>Another option is to use the ADOdb library instead of CI's native DB library, though you'll lose the Active Record support from CI, and have to rewrite certain libraries in CI to utilize it, but it's worth it if you still want to use CI with a DB that isn't supported for your application. I had to do it early on when there was bugs in the Postgres implementation.</p>
21,562
<p>Experienced with Rails / ActiveRecord 2.1.1</p> <ul> <li>You create a first version with (for example) ruby script\generate scaffold product title:string description:text image_url:string</li> <li>This creates (for example) a migration file called 20080910122415_create_products.rb</li> <li>You apply the migration with rake db:migrate</li> <li>Now, you add a field to the product table with ruby script\generate migration add_price_to_product price:decimal</li> <li>This creates a migration file called 20080910125745_add_price_to_product.rb</li> <li>If you try to run rake db:migrate, it will actually revert the first migration, not apply the next one! So your product table will get destroyed!</li> <li>But if you ran rake alone, it would have told you that one migration was pending</li> </ul> <p>Please note that applying rake db:migrate (once the table has been destroyed) will apply all migrations in order.</p> <p>The only workaround I found is to specify the version of the new migration as in: </p> <pre><code>rake db:migrate version=20080910125745 </code></pre> <p>So I'm wondering: is this an expected new behavior?</p>
<p>You should be able to use </p> <pre><code>rake db:migrate:up </code></pre> <p>to force it to go forward, but then you risk missing interleaved migrations from other people on your team.</p> <p>If you run </p> <pre><code>rake db:migrate </code></pre> <p>twice, it will reapply all your migrations.</p> <p>I encountered the same behavior on Windows with SQLite; it might be a bug specific to such an environment.</p> <p><strong>Edit</strong> -- I found why. In the railties database.rake task you have the following code :</p> <pre><code>desc "Migrate the database through scripts in db/migrate. Target specific version with VERSION=x. Turn off output with VERBOSE=false." task :migrate =&gt; :environment do ActiveRecord::Migration.verbose = ENV["VERBOSE"] ? ENV["VERBOSE"] == "true" : true ActiveRecord::Migrator.migrate("db/migrate/", ENV["VERSION"] ? ENV["VERSION"].to_i : nil) Rake::Task["db:schema:dump"].invoke if ActiveRecord::Base.schema_format == :ruby end </code></pre> <p>Then in my environment variables I have </p> <pre><code>echo %Version% #=&gt; V3.5.0f </code></pre> <p>in Ruby</p> <pre><code>ENV["VERSION"] # =&gt; V3.5.0f ENV["VERSION"].to_i #=&gt;0 not nil ! </code></pre> <p>thus the rake task calls </p> <pre><code>ActiveRecord::Migrator.migrate("db/migrate/", 0) </code></pre> <p>and in ActiveRecord::Migrator we have : </p> <pre><code>class Migrator#:nodoc: class &lt;&lt; self def migrate(migrations_path, target_version = nil) case when target_version.nil? then up(migrations_path, target_version) when current_version &gt; target_version then down(migrations_path, target_version) else up(migrations_path, target_version) end end </code></pre> <p>Yes, <code>rake db:migrate VERSION=0</code> is the long version for <code>rake db:migrate:down</code> </p> <p><strong>Edit</strong> - I would go update the lighthouse bug, but the corporate proxy forbids me from connecting there.</p> <p>In the meantime you may try to unset Version before you call migrate ...</p>
<p>This is not the expected behaviour. I was going to suggest reporting this as a bug on lighthouse, but I see you've <a href="http://rails.lighthouseapp.com/projects/8994/tickets/1021-rake-dbmigrate-doesnt-detect-new-migration" rel="nofollow noreferrer">already done so</a>! If you provide some more information (including OS/database/ruby version) I will take a look at it. </p>
9,568
<p>Anyone know of a link to a good article/tutorial for getting started using jQuery for AJAX calls rather than ASP.NET AJAX? I'm trying to avoid using UpdatePanels entirely in this app, which I haven't been able to accomplish in the past.</p>
<p>The most complete article I've ever found about this topic is <a href="https://www.codeproject.com/Articles/95525/ASP-NET-and-jQuery-to-the-Max" rel="nofollow noreferrer">ASP.NET and jQuery to the Max</a>.</p> <p>It avoids update panel, script manager and viewstate.</p>
<p>Damien Edwards just gave an awesome talk on this topic at the recent MIX11 conference. You can <a href="https://web.archive.org/web/20110724072836/http://channel9.msdn.com/events/MIX/MIX11/FRM12" rel="nofollow noreferrer">watch the video online (archived - click any download links below the player)</a> and download his code. He developed a jquery-UI extenders project that works the same way that ASP.NET AJAX toolkit works, but just extends normal controls/tags/elements with jQuery functionality. The project is on <a href="https://web.archive.org/web/20190801214144/https://archive.codeplex.com/?p=jquery" rel="nofollow noreferrer">CodePlex (archived)</a>.</p>
24,765
<p>Suppose I have a COM object which users can access via a call such as:</p> <pre><code>Set s = CreateObject("Server") </code></pre> <p>What I'd like to be able to do is allow the user to specify an event handler for the object, like so:</p> <pre><code>Function ServerEvent MsgBox "Event handled" End Function s.OnDoSomething = ServerEvent </code></pre> <p>Is this possible and, if so, how do I expose this in my type library in C++ (specifically BCB 2007)?</p>
<p>This is how I did it just recently. Add an interface that implements IDispatch and a coclass for that interface to your IDL:</p> <pre><code>[ object, uuid(6EDA5438-0915-4183-841D-D3F0AEDFA466), nonextensible, oleautomation, pointer_default(unique) ] interface IServerEvents : IDispatch { [id(1)] HRESULT OnServerEvent(); } //... [ uuid(FA8F24B3-1751-4D44-8258-D649B6529494), ] coclass ServerEvents { [default] interface IServerEvents; [default, source] dispinterface IServerEvents; }; </code></pre> <p>This is the declaration of the CServerEvents class:</p> <pre><code>class ATL_NO_VTABLE CServerEvents : public CComObjectRootEx&lt;CComSingleThreadModel&gt;, public CComCoClass&lt;CServerEvents, &amp;CLSID_ServerEvents&gt;, public IDispatchImpl&lt;IServerEvents, &amp;IID_IServerEvents , &amp;LIBID_YourLibrary, -1, -1&gt;, public IConnectionPointContainerImpl&lt;CServerEvents&gt;, public IConnectionPointImpl&lt;CServerEvents,&amp;__uuidof(IServerEvents)&gt; { public: CServerEvents() { } // ... BEGIN_COM_MAP(CServerEvents) COM_INTERFACE_ENTRY(IServerEvents) COM_INTERFACE_ENTRY(IDispatch) COM_INTERFACE_ENTRY(IConnectionPointContainer) END_COM_MAP() BEGIN_CONNECTION_POINT_MAP(CServerEvents) CONNECTION_POINT_ENTRY(__uuidof(IServerEvents)) END_CONNECTION_POINT_MAP() // .. // IServerEvents STDMETHOD(OnServerEvent)(); private: CRITICAL_SECTION m_csLock; }; </code></pre> <p>The key here is the implementation of the IConnectionPointImpl and IConnectionPointContainerImpl interfaces and the connection point map. 
The definition of the OnServerEvent method looks like this:</p> <pre><code>STDMETHODIMP CServerEvents::OnServerEvent() { ::EnterCriticalSection( &amp;m_csLock ); IUnknown* pUnknown; for ( unsigned i = 0; ( pUnknown = m_vec.GetAt( i ) ) != NULL; ++i ) { CComPtr&lt;IDispatch&gt; spDisp; pUnknown-&gt;QueryInterface( &amp;spDisp ); if ( spDisp ) { spDisp.Invoke0( CComBSTR( L"OnServerEvent" ) ); } } ::LeaveCriticalSection( &amp;m_csLock ); return S_OK; } </code></pre> <p>You need to provide a way for your client to specify their handler for your events. You can do this with a dedicated method like "SetHandler" or something, but I prefer to make the handler an argument to the method that is called asynchronously. This way, the user only has to call one method:</p> <pre><code>STDMETHOD(DoSomethingAsynchronous)( IServerEvents *pCallback ); </code></pre> <p>Store the pointer to the IServerEvents, and then when you want to fire your event, just call the method:</p> <pre><code>m_pCallback-&gt;OnServerEvent(); </code></pre> <p>As for the VB code, the syntax for dealing with events is a little different than what you suggested:</p> <pre><code>Private m_server As Server Private WithEvents m_serverEvents As ServerEvents Private Sub MainMethod() Set s = CreateObject("Server") Set m_serverEvents = New ServerEvents Call m_searchService.DoSomethingAsynchronous(m_serverEvents) End Sub Private Sub m_serverEvents_OnServerEvent() MsgBox "Event handled" End Sub </code></pre> <p>I hope this helps.</p>
<p>I ended up following the technique described <a href="http://www.mp3car.com/vbulletin/digitalmods-scripts-api/81425-how-create-c-com-object-streetdeck.html" rel="nofollow noreferrer">here</a>.</p>
8,691
<p>I need to prompt an alert message when a user selects a particular option in a select menu. Is there a way to do this using jQuery?</p>
<p>Assuming your select has an ID of myselect, and the value you want to check is "myval":</p> <pre><code>$("#myselect").change(function() { if($(this).val() == "myval") { alert('message'); } }); </code></pre> <p>I haven't tested this, but the concept should be sound.</p>
<p>I recently bumped into the <a href="http://nadiana.com/jquery-confirm-plugin" rel="nofollow noreferrer">jQuery Confirm Plugin</a>. It may or may be what you are looking for.</p>
44,368
<p>When writing CUDA applications, you can either work at the driver level or at the runtime level as illustrated on this image (The libraries are CUFFT and CUBLAS for advanced math):</p> <p><a href="https://i.stack.imgur.com/TD51p.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/TD51p.jpg" alt="CUDA layer model"></a><br> <sub>(source: <a href="https://www.tomshw.it/guides/hardware/graphic/20080618/images/nvidia-CUDA,Q-7-111103-13.jpg" rel="noreferrer">tomshw.it</a>)</sub> </p> <p>I assume the tradeoff between the two is increased performance for the low-level API but at the cost of increased complexity of code. What are the concrete differences and are there any significant things which you cannot do with the high-level API?</p> <p>I am using CUDA.net for interop with C# and it is built as a copy of the driver API. This encourages writing a lot of rather complex code in C# while the C++ equivalent would be simpler using the runtime API. Is there anything to win by doing it this way? The one benefit I can see is that it is easier to integrate intelligent error handling with the rest of the C# code.</p>
<p>The CUDA runtime makes it possible to compile and link your CUDA kernels into executables. This means that you don't have to distribute cubin files with your application, or deal with loading them through the driver API. As you have noted, it is generally easier to use. </p> <p>In contrast, the driver API is harder to program but provides more control over how CUDA is used. The programmer has to directly deal with initialization, module loading, etc. </p> <p>Apparently more detailed device information can be queried through the driver API than through the runtime API. For instance, the free memory available on the device can be queried only through the driver API. </p> <p>From the CUDA Programmer's Guide:</p> <blockquote> <p>It is composed of two APIs:</p> <ul> <li>A low-level API called the CUDA driver API,</li> <li>A higher-level API called the CUDA runtime API that is implemented on top of the CUDA driver API.</li> </ul> <p>These APIs are mutually exclusive: An application should use either one or the other.</p> <p>The CUDA runtime eases device code management by providing implicit initialization, context management, and module management. The C host code generated by nvcc is based on the CUDA runtime (see Section 4.2.5), so applications that link to this code must use the CUDA runtime API.</p> <p>In contrast, the CUDA driver API requires more code, is harder to program and debug, but offers a better level of control and is language-independent since it only deals with cubin objects (see Section 4.2.5). In particular, it is more difficult to configure and launch kernels using the CUDA driver API, since the execution configuration and kernel parameters must be specified with explicit function calls instead of the execution configuration syntax described in Section 4.2.3. Also, device emulation (see Section 4.5.2.9) does not work with the CUDA driver API.</p> </blockquote> <p>There is no noticeable performance difference between the APIs.
How your kernels use memory and how they are laid out on the GPU (in warps and blocks) will have a much more pronounced effect. </p>
<p>There are some real issues with argument alignment and the driver API. Check out the CUDA 2.2 beta (or later) documentation for more information.</p>
30,079
<p>I have two PHP files that I need to link. How can I link the files together using PHP? The effect I want is to have the user click a button, have some information processed on the page, and then have the result displayed on a different page, depending on the button the user clicked. Thanks</p>
<p>It sounds like you might want an HTML form:</p> <pre><code>&lt;form method="post" action="other_file.php"&gt; &lt;input name="foo" type="..."... /&gt; ... &lt;/form&gt; </code></pre> <p>Then <code>$_POST["foo"]</code> will contain the value of that input in other_file.php.</p>
<p>In PHP's most basic setup, you can use two independent files: one generates the form, the second handles the response. You can also handle this with one file that checks to see if the form has been posted.</p> <pre><code> if (isset($_POST['foo'])) { ... } </code></pre> <p>or</p> <pre><code> if ($_SERVER['REQUEST_METHOD'] == 'POST') { ... } </code></pre> <p>Once you get beyond this level, there are much cleaner ways to build the scripts so that you keep your logic and your interfaces separate, but I'm guessing from the question that you're not there yet.</p>
29,403
<p>When we serialize an enum from C# to SQL Server we use a NCHAR(3) datatype with mnemonic values for each value of the enum. That way we can easily read a SELECT qry.</p> <p>How do you save enum to your database?</p> <p>What datatype do you use?</p>
<p>A better way would be to store it as an int. That way you can deserialise/cast from the DB right back to the correct enum value.</p> <p>If the enum is likely to be changed in the future then use explicit values, e.g.:</p> <pre><code>public enum ActionType { Insert = 1, Update = 2, Delete = 3 } </code></pre> <p>Storing as a mnemonic must surely cause clashes in practice, depending on your mnemonic-generating algorithm?</p>
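<p>The round trip that explicit values buy you can be sketched language-agnostically; here in Python (the names are illustrative, mirroring the C# enum above):</p>

```python
from enum import IntEnum

class ActionType(IntEnum):
    # Explicit values make it safe to reorder or extend the enum later
    # without silently remapping rows already stored in the database.
    INSERT = 1
    UPDATE = 2
    DELETE = 3

stored = int(ActionType.UPDATE)   # what goes into the INT column
restored = ActionType(stored)     # what comes back out of the DB
assert restored is ActionType.UPDATE
```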
<p>nchar(3) for mnemonics?</p> <p>As Martin pointed out, use a lookup table with INT as PK, VARCHAR identifier as UK, and NVARCHAR description.</p> <p>You can write a stored procedure to script the table values as C# enum or as C# public consts.</p> <p>Thus the values are documented both in the database and in the C# source code.</p>
48,585
<p>Modelsim, an HDL simulator, allows you to specify the font used by the output. Fixed width fonts allow for more orderly output, but many fixed width fonts are not easy on the eyes. What would you recommend? I currently use Lucida Console.</p> <p>I've tried Inconsolata and Consolas per some recommendations, but Modelsim does not render them well at 10 point. I'm not sure why.</p>
<p>See the answers to this question: <a href="https://stackoverflow.com/questions/4689/recommended-fonts-for-programming">Recommended Fonts for Programming?</a></p>
<p><a href="http://damieng.com/blog/2007/11/14/droid-sans-mono-great-coding-font" rel="nofollow noreferrer">Droid Sans Mono</a> and <a href="http://www.bitstream.com/font_rendering/products/dev_fonts/vera.html" rel="nofollow noreferrer">Bitstream Vera Sans Mono</a> are both liberally licensed and very readable to my eyes (far more so than any fonts that ship with Windows, including Lucida Console).</p>
26,308
<p>I'm getting notifications to back up my encryption key for EFS in Vista, however i haven't enabled bit locker or drive encryption.</p> <p>Anyone know how to find out what files may be encrypted or have an explanation for why it would notify me?</p>
<p>To find out which files on your system have been encrypted with EFS, you can simply run this command:</p> <pre><code>CIPHER.EXE /U /N </code></pre>
<p>Clippy noticed that you have sensitive information in your files and automatically encrypted them.</p> <p>Are you sure it's for EFS? I've had things prompt me to backup my keys before, but I didn't know exactly what they were for. I was assuming it was like a DRM protected file or something. It was a while ago so I don't remember exactly what the specific details were. I never backed it up and haven't been locked out of anything.</p>
3,279
<p>I have a .NET UserControl (FFX 3.5). This control contains several child Controls - a Panel, a couple Labels, a couple TextBoxes, and yet another custom Control. I want to handle a right click anywhere on the base Control - so a right click on any child control (or child of a child in the case of the Panel). I'd like to do it so that it's maintainable if someone makes changes to the Control without having to wire in handlers for new Controls for example.</p> <p>First I tried overriding the WndProc, but as I suspected, I only get messages for clicks on the Form directly, not any of its children. As a semi-hack, I added the following after InitializeComponent:</p> <pre><code>foreach (Control c in this.Controls)
{
    c.MouseClick += new MouseEventHandler(
        delegate(object sender, MouseEventArgs e)
        {
            // handle the click here
        });
}
</code></pre> <p>This now gets clicks for controls that support the event, but Labels, for example, still don't get anything. Is there a simple way to do this that I'm overlooking?</p>
<p>If the labels are in a subcontrol then you'd have to do this recursively:</p> <pre><code>void initControlsRecursive(Control.ControlCollection coll)
{
    foreach (Control c in coll)
    {
        c.MouseClick += (sender, e) =&gt; { /* handle the click here */ };
        initControlsRecursive(c.Controls);
    }
}

/* ... */
initControlsRecursive(this.Controls);
</code></pre>
<p>To handle a <em>MouseClick</em> event for right click on all the controls on a custom <em>UserControl</em>:</p> <pre><code>public class MyClass : UserControl
{
    public MyClass()
    {
        InitializeComponent();

        MouseClick += ControlOnMouseClick;
        if (HasChildren)
            AddOnMouseClickHandlerRecursive(Controls);
    }

    private void AddOnMouseClickHandlerRecursive(IEnumerable controls)
    {
        foreach (Control control in controls)
        {
            control.MouseClick += ControlOnMouseClick;
            if (control.HasChildren)
                AddOnMouseClickHandlerRecursive(control.Controls);
        }
    }

    private void ControlOnMouseClick(object sender, MouseEventArgs args)
    {
        if (args.Button != MouseButtons.Right)
            return;

        var contextMenu = new ContextMenu(new[] { new MenuItem("Copy", OnCopyClick) });
        contextMenu.Show((Control)sender, new Point(args.X, args.Y));
    }

    private void OnCopyClick(object sender, EventArgs eventArgs)
    {
        MessageBox.Show("Copy menu item was clicked.");
    }
}
</code></pre>
30,805
<p>After discussion with colleagues regarding the use of the 'var' keyword in C# 3, I wondered what people's opinions were on the appropriate uses of type inference via var.</p> <p>For example, I rather lazily used var in questionable circumstances, e.g.:-</p> <pre><code>foreach(var item in someList) { /* ... */ } // Type of 'item' not clear.

var something = someObject.SomeProperty; // Type of 'something' not clear.
var something = someMethod();            // Type of 'something' not clear.
</code></pre> <p>More legitimate uses of var are as follows:-</p> <pre><code>var l = new List&lt;string&gt;(); // Obvious what l will be.
var s = new SomeClass();    // Obvious what s will be.
</code></pre> <p>Interestingly, LINQ seems to be a bit of a grey area, e.g.:-</p> <pre><code>var results = from r in dataContext.SomeTable
              select r; // Not *entirely clear* what results will be here.
</code></pre> <p>It's clear what results will be in that it will be a type which implements IEnumerable; however it isn't entirely obvious in the same way as a var declaring a new object.</p> <p>It's even worse when it comes to LINQ to objects, e.g.:-</p> <pre><code>var results = from item in someList
              where item != 3
              select item;
</code></pre> <p>This is no better than the equivalent <code>foreach(var item in someList) { /* ... */ }</code> loop.</p> <p>There is a real concern about type safety here - for example, if we were to place the results of that query into an overloaded method that accepted IEnumerable&lt;int&gt; and IEnumerable&lt;double&gt;, the caller might inadvertently pass in the wrong type.</p> <p><code>var</code> <em>does</em> maintain strong typing, but the question is really whether it's dangerous for the type to not be immediately apparent on definition, something which is magnified when overloads mean compiler errors might not be issued when you unintentionally pass the wrong type to a method.</p>
<p>I still think <code>var</code> can make code more readable in some cases. If I have a Customer class with an Orders property, and I want to assign that to a variable, I will just do this:</p> <pre><code>var orders = cust.Orders; </code></pre> <p>I don't care if Customer.Orders is <code>IEnumerable&lt;Order&gt;</code>, <code>ObservableCollection&lt;Order&gt;</code> or <code>BindingList&lt;Order&gt;</code> - all I want is to keep that list in memory to iterate over it or get its count or something later on.</p> <p>Contrast the above declaration with:</p> <pre><code>ObservableCollection&lt;Order&gt; orders = cust.Orders; </code></pre> <p>To me, the type name is just noise. And if I go back and decide to change the type of the Customer.Orders down the track (say from <code>ObservableCollection&lt;Order&gt;</code> to <code>IList&lt;Order&gt;</code>) then I need to change that declaration too - something I wouldn't have to do if I'd used var in the first place.</p>
<p>Don't use that; it makes your code unreadable.</p> <p>ALWAYS use as strict typing as possible; crutches only make your life hell.</p>
6,278
<p>Using cyanoacrylate to glue PLA parts sometimes leaves a white residue or haze near the glue locations. Is there an easy way to remove it?</p> <p>I've tried water and alcohol swabs but after drying the haze remains.</p> <p><a href="https://i.stack.imgur.com/cw1pu.jpg" rel="nofollow noreferrer" title="Photo showing white residue"><img src="https://i.stack.imgur.com/cw1pu.jpg" alt="Photo showing white residue" title="Photo showing white residue" /></a></p>
<h2>Make sure to set the scale properly for your use case!</h2> <p>In CAD, you define your measurement space in either Inch or in Millimeter units, and that is your grid. In Blender, the native unit is the meter.</p> <p>This can be easily converted in exporting (remember to set it to scale!), but it is best to just set the measurement scale to actually match what you design: if you want to design a 5 mm hole, set your scale to Millimeters and make sure you export in millimeters. If you want to design in meters (maybe you design a building), then work in meters, and set your export scale in the end so that 1 meter actually is represented as 1 meter - or rather as 1000 millimeters.</p> <p><a href="https://3dprinting.stackexchange.com/a/7561/8884">The STL in the end will not know the difference</a>: it all is defined in scales of <em>unitary units</em>, and it doesn't even know if it was originally designed in meters, inch or angström. The typical slicer expects the unit to be either millimeters or inch, so any scaling of the exported model that does not result in units equivalent to 1 mm or 25.4 mm is bad procedure - converting between these two types is just scaling the model by 2540%.</p> <h2>Make sure to design closed manifolds made up of triangles!</h2> <p>When working with Blender, it is very easy to leave the item in a shape that contains multiple intersecting, non-manifold surfaces and areas of inverted surfaces. While <em>intersecting shells</em> are not a problem (the slicers can handle those by unionizing the item), the intersection usually covers up the non-manifold areas, making them hard to spot.</p> <p>As a result, before finalizing your project, I suggest following this procedure:</p> <ul> <li>In Blender, turn on the visualization of surface normals. If an area does not look like a hedgehog after that, the normals in that area are reversed and you need to flip the surfaces there or re-mesh it.</li> <li>Triangulate the surface using the triangulate modifier. This is to spot artifacts from conversion to STL early and be able to fix them: STL only knows triangles, while Blender knows <em>bent</em> n-gons.</li> <li>Add a new object: a cube with side length 1.</li> <li>Do a test export to STL with scale 1, which also contains the 1-unit cube as an extra shell.</li> <li>Import the model into a software such as Meshmixer, that has a command to separate shells.</li> <li>Separate the item into all its shells. In Meshmixer this is in Analyze, Separate Shells.</li> <li>After separating the shells, measure your 1-unit cube. If it is not 1 mm, calculate your scaling factor. It should be a multiple of 10.</li> <li>Next, you should check each shell for gaps or other errors. In Meshmixer, the automatic analyze feature points to these areas with red, blue and magenta lines.</li> <li>Fix the marked errors in Blender, then return to the test export. This time use the proper scaling factor. Repeat until no errors remain.</li> </ul>
<p>It doesn't matter, you scale it in the slicer or elsewhere. You're not going to slice the STL file in Blender. You'll probably need to do more work to get things print ready outside blender anyway.</p> <p>So when I use blender I don't even bother checking what units it's using. I don't use it for parts design or tech drawing.</p>
2,123
<p>I am using Lotus Notes as my email client. How do I change the default email delete date so my older emails don't get deleted after the pre-defined date?</p>
<p>Go to TOOLS -> Change Delete Date; you can put in any number of days to change the default delete date of the selected email.</p>
<p>Lotus Notes email does not automatically delete email - There is no "default delete date". </p> <p>It does have a setting to automatically delete email from your Trash folder after a specified amount of hours (but mail only gets put in the Trash if you delete it). You can find this setting by clicking More >> Preferences... "Delete documents in my Trash folder after __ hours"</p>
11,093
<p>I have a collection of HTML documents for which I need to parse the contents of the &lt;meta&gt; tags in the &lt;head&gt; section. These are the only HTML tags whose values I'm interested in, i.e. I don't need to parse anything in the &lt;body&gt; section.</p> <p>I've attempted to parse these values using the XPath support provided by JDom. However, this isn't working out too well because a lot of the HTML in the &lt;body&gt; section is not valid XML.</p> <p>Does anyone have any suggestions for how I might go about parsing these tag values in manner that can deal with malformed HTML?</p> <p>Cheers, Don</p>
<p>You can likely use the <a href="http://jerichohtml.sourceforge.net/doc/index.html" rel="nofollow noreferrer">Jericho HTML Parser</a>. In particular, have a look at <a href="http://jerichohtml.sourceforge.net/samples/console/src/FindSpecificTags.java" rel="nofollow noreferrer">this</a> to see how you can go about finding specific tags.</p>
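<p>For comparison only - this is not the Jericho API - the same "scan for start tags, ignore the broken body" idea can be sketched with Python's lenient stdlib <code>html.parser</code>:</p>

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect the attributes of every <meta> start tag.

    html.parser is event-based and non-validating, so malformed
    markup in the body does not stop it from reaching the tags.
    """
    def __init__(self):
        super().__init__()
        self.metas = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            self.metas.append(dict(attrs))

doc = ("<html><head><meta name='author' content='Don'>"
       "<meta charset='utf-8'></head>"
       "<body><p>broken <b>markup</body></html>")

parser = MetaExtractor()
parser.feed(doc)
# parser.metas now holds one dict per <meta> tag in the document.
```

<p>Jericho's tag search works on the same principle: it finds tags by scanning rather than by requiring the document to be well-formed, which is why it copes where an XML parser gives up.</p>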
<p><a href="http://jtidy.sourceforge.net/" rel="nofollow noreferrer">JTidy</a> should provide a good starting point for this.</p>
38,344
<p>I keep it on a single line if it's short. Lately I've been using this style for longer or nested ternary operator expressions. A contrived example:</p> <pre><code>$value = ( $a == $b )
    ? 'true value # 1'
    : ( $a == $c )
        ? 'true value # 2'
        : 'false value';
</code></pre> <p>Personally, which style do you use, or find most readable?</p> <p><strong>Edit:</strong> <em>(on when to use the ternary operator)</em></p> <p>I usually avoid using a ternary operator more than 2 levels deep. I tend to prefer a 2-level-deep ternary operator over a 2-level if-else when I'm echoing variables in PHP template scripts.</p>
<p>The ternary operator is generally to be avoided, but this form can be quite readable:</p> <pre><code>result = (foo == bar)  ? result1 :
         (foo == baz)  ? result2 :
         (foo == qux)  ? result3 :
         (foo == quux) ? result4 :
                         fail_result;
</code></pre> <p>This way, the condition and the result are kept together on the same line, and it's fairly easy to skim down and understand what's going on.</p>
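<p>The same ladder carries over to any language with conditional expressions - a quick Python sketch for illustration (the names and values are made up):</p>

```python
def classify(foo, bar, baz, qux, quux):
    # One condition/result pair per line, read top to bottom,
    # mirroring the ternary ladder above.
    return ("result1" if foo == bar else
            "result2" if foo == baz else
            "result3" if foo == qux else
            "result4" if foo == quux else
            "fail_result")
```

<p>Python's conditional expression is right-associative, so the chain groups exactly as the layout suggests.</p>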
<p>I tend not to use the ternary operator at all as I find if .. else much more readable.</p>
30,127
<p>I want a recommendation for an ISP that has Subversion installed so I can get a repository started. So far I've found out that discount.asp doesn't have it on their servers and will not support it, so I'm looking for an alternative.</p>
<p>What's your price range? Do you want a straight SVN provider or do you want to host a website too?</p> <p>For straight SVN hosting, check out:</p> <p><a href="http://cvsdude.com/" rel="nofollow noreferrer">http://cvsdude.com/</a></p> <p><a href="http://www.assembla.com/" rel="nofollow noreferrer">http://www.assembla.com/</a></p> <p>The only hosting provider I can think of with SVN support (outside of a VPS provider) would be</p> <p><a href="http://www.dreamhost.com" rel="nofollow noreferrer">http://www.dreamhost.com</a></p>
<p><a href="http://csoft.net" rel="nofollow noreferrer">csoft.net</a> is pretty good. They've been around for a long time, they're cheap, good, open source friendly, very geek friendly, and accounts come with SVN (and with the more expensive plans, a ton of other features). Also, ever tried to deal with frontline tech support at a big host (like, ick, 1&amp;1) where you had a sneaking suspicion you were actually dealing with a very poorly programmed Eliza bot rather than a human? Yeah, well, csoft isn't like that. :-)</p> <p>If you're looking for something a little more user friendly, you might check out <a href="http://unfuddle.com/" rel="nofollow noreferrer">Unfuddle</a>. I haven't used them personally but they get a lot of good press here on SO, and they've got a nice feature set.</p>
12,494
<p>I have an array of mouse points, a stroke width, and a softness. I can draw soft circles and soft lines. Which algorithm should I use for drawing my array of points? I want crossed lines to look nice as well as end points. </p>
<p>I would definitely choose the Bezier for that purpose, and in particular I would implement the piecewise cubic Bezier - it is truly easy to implement and grasp, and it is widely used by 3D Studio Max and Photoshop.</p> <p>Here is a good source for it: <a href="http://local.wasp.uwa.edu.au/~pbourke/surfaces_curves/bezier/cubicbezier.html" rel="nofollow noreferrer">http://local.wasp.uwa.edu.au/~pbourke/surfaces_curves/bezier/cubicbezier.html</a></p> <p>Assuming that you have an order between the points, in order to set the four control points you should go as follows:</p> <p>I define the tangents at P[i] and at P[i+1]:</p> <ul> <li>T1 = (P[i+1] - P[i-1])</li> <li>T2 = (P[i+2] - P[i])</li> </ul> <p>And to create the piece between two points I do the following:</p> <ul> <li>Control Point Q1: P[i]</li> <li>Control Point Q2: the point lying along the tangent from Q1 => Q1 + 0.3T1</li> <li>Control Point Q3: the point lying along the tangent to Q4 => Q4 - 0.3T2</li> <li>Control Point Q4: P[i+1]</li> </ul> <p>The choice of 0.3T is arbitrary, in order to give it enough 'strength' but not too much; you can use more elaborate methods that will take care of acceleration (C2 continuity) as well.</p> <p>Enjoy</p>
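<p>The construction above is easy to check in code. A Python sketch for illustration (the sample points and function names are made up; 0.3 is the same arbitrary strength factor):</p>

```python
# Illustrative sketch of the piecewise cubic Bezier described above.
def cubic_bezier(q1, q2, q3, q4, t):
    """Evaluate a cubic Bezier with control points q1..q4 at t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * q1[0] + 3 * u**2 * t * q2[0] + 3 * u * t**2 * q3[0] + t**3 * q4[0]
    y = u**3 * q1[1] + 3 * u**2 * t * q2[1] + 3 * u * t**2 * q3[1] + t**3 * q4[1]
    return (x, y)

def segment_controls(p, i, k=0.3):
    """Control points for the piece from p[i] to p[i+1] (needs interior i)."""
    t1 = (p[i + 1][0] - p[i - 1][0], p[i + 1][1] - p[i - 1][1])  # tangent at p[i]
    t2 = (p[i + 2][0] - p[i][0], p[i + 2][1] - p[i][1])          # tangent at p[i+1]
    q1, q4 = p[i], p[i + 1]
    q2 = (q1[0] + k * t1[0], q1[1] + k * t1[1])
    q3 = (q4[0] - k * t2[0], q4[1] - k * t2[1])
    return q1, q2, q3, q4

pts = [(0, 0), (1, 2), (3, 3), (5, 1)]  # made-up mouse points
q1, q2, q3, q4 = segment_controls(pts, 1)
```

<p>Each piece starts exactly at P[i] and ends exactly at P[i+1], and because adjacent pieces share the tangent at their common point, the joined stroke is smooth (C1) across joins.</p>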
<p>I think maybe you're looking for a spline algorithm.</p> <p>Here is a spline tutorial, which you might find helpful:</p> <p><a href="http://www.doc.ic.ac.uk/~dfg/AndysSplineTutorial/index.html" rel="nofollow noreferrer">http://www.doc.ic.ac.uk/~dfg/AndysSplineTutorial/index.html</a></p> <p>The subject is also covered in most books on graphics programming.</p> <p>Cheers.</p>
11,146