The getOrDie function seems to be generating an additional 'unchecked error' message when dying. For example, this trivial program (which should obviously generate an error):
#include <streamexecutor/streamexecutor.h>
namespace se = ::streamexecutor;
int main(int argc, char *argv[])
{
se::Platform *platform =
se::getOrDie(se::PlatformManager::getPlatformByName("foobar"));
return 0;
}
generates this output:
Error extracting an expected value: no available platform with name foobar.
Program aborted due to an unhandled Error:
Error value was Success. (Note: Success values must still be checked prior to being destroyed).
Is this expected?
Thanks for the report, jprice. What you described is not expected behavior. You're right that the error should not be "unhandled".
So far, however, I can't reproduce that behavior. I think the example program you gave should fail in the same way as the example/example executable that is checked into StreamExecutor now. They both die because there is no platform registered for the required name, and they are both wrapped in getOrDie.
When you run the example executable, do you see the same behavior? When I run it, I only see the first line:
Error extracting an expected value: no available platform with name CUDA.
and no other error message.
If you are seeing the same problem with the example, I think there may be some difference in the versions of the LLVM support libraries we are using. Do you have a revision number for the LLVM you are using?
Thanks again for your help.
Hmm, maybe a false alarm.
It only seems to happen if I build SE in Release/RelWithDebInfo mode and then compile the test program without passing -DNDEBUG, which I guess conflicts with the SE build, since that build would have used -DNDEBUG. I can't reproduce it with the example for that reason, unless I manually compile the example without that flag. If SE is built in Debug mode as well, there is no issue.
It's good to know that we have to be careful with the compiler flags. I'll have to think more about the implications here, but maybe we need to include an se-config tool like llvm-config to help users get the right compiler flags. Or we might rethink the idea of using LLVM's libraries at all.
SharePoint and Other stuff that goes bump in the night...Because if you're not going to Share, What's the point? Community 5.6.583.17018 (Build: 5.6.583.17018)2008-05-07T18:39:00ZSharePoint 2010 Service Pack 1 (SP1) Availability And Download Links<p>In case you haven’t seen it yet, the Office Sustained Engineering team has announced the availability of Office 2010 SP1 and SharePoint Server 2010 SP1. You can find there post on it here: <a href="">Office 2010 and SharePoint 2010 Service Pack 1 Availability</a></p> <p>Here are the relevant links for anyone interested:</p> <p><a href="" target="_blank">Updates Resource Center For SharePoint 2010</a> <br />This page provides resources for IT professionals who are responsible for installing software updates on SharePoint 2010 farms.</p> <p><a href="">Service Pack 1 (<b>SP1</b>) for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 and Microsoft <b>SharePoint</b> Server 2010 (white paper)</a> <br />Describes the fixes and improvements to existing functionality that are included in <b>SharePoint</b> Server 2010 <b>SP1</b> and <b>SharePoint</b> <b>Foundation</b> 2010 <b>SP1</b>.</p> <p><b><a href="">Service Pack 1 for Microsoft <b>SharePoint</b> <b>Server</b> <b>2010</b> (KB2460045)</a> <br /></b>Service Pack 1 (<b>SP1</b>) for Microsoft <b>SharePoint</b> <b>Server</b> <b>2010</b> contains new updates which improve security, performance, and stability. Additionally, the SP is a roll-up of all previously released updates.</p> <p><a href="">Service Pack 1 for <b>SharePoint</b> <b>Foundation</b> 2010 (KB2460058)</a> <br />Service Pack 1 (<b>SP1</b>) for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 contains new updates which improve security, performance, and stability. 
Additionally, the SP is a roll-up of all previously released updates.</p> <p><a href="">Service Pack 1 for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 Language Pack (KB2460059)</a> <br />Service Pack 1 (<b>SP1</b>) for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 Language Pack contains new updates which improve security, performance, and stability. Additionally, the SP is a roll-up of all previously released updates.</p> <p><a href="">Service Pack 1 for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 Client Object Model Redistributable (KB2508825) 64-bit Edition</a> <br />Service Pack 1 (<b>SP1</b>) for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 Client Object Model Redistributable 64-bit Edition contains new updates which improve security, performance, and stability. Additionally, the SP is a roll-up of all previously released updates.</p> <p><a href="">Service Pack 1 for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 Client Object Model Redistributable (KB2508825) 32-bit Edition</a> <br />Service Pack 1 (<b>SP1</b>) for Microsoft <b>SharePoint</b> <b>Foundation</b> 2010 Client Object Model Redistributable 32-bit Edition contains new updates which improve security, performance, and stability. Additionally, the SP is a roll-up of all previously released updates.</p><div style="clear:both;"></div>JamesWay SharePoint 2010 – Clearing the Configuration Cache<p>There are times when troubleshooting SharePoint requires clearing the configuration cache. In SharePoint 2010, the cache is stored under %SystemDrive%\ProgramData\Microsoft\SharePoint\Config\<GUID>. The overall steps remain largely the same as they were in SharePoint 2007:</p> <ol> <li> <p>Stop the Timer service. To do this, follow these steps: </p> <ol> <li>Click <b>Start</b>, point to <b>Administrative Tools</b>, and then click <b>Services</b>. </li> <li>Right-click <b>SharePoint 2010 Timer</b>, and then click <b>Stop</b>.
</li> <li>Close the Services console.</li> </ol> </li> <li>On the computer that is running Microsoft SharePoint Server 2010 and on which the Central Administration site is hosted, click <b>Start</b>, click <b>Run</b>, type <b>explorer</b>, and then press ENTER. </li> <li>In Windows Explorer, locate and then double-click the following folder: </li> <li>%SystemDrive%\ProgramData\Microsoft\SharePoint\Config\<i>GUID</i></li> <li><b>Notes</b></li> <ol> <li>The %SystemDrive% system variable specifies the letter of the drive on which Windows is installed. By default, Windows is installed on drive C. </li> <li>The <i>GUID</i> placeholder specifies the GUID folder. There may be more than one of these.</li> <li>The ProgramData folder may be hidden. To view the hidden folder, follow these steps: </li> <ol> <li>On the <b>Tools</b> menu, click <b>Folder Options.</b></li> <li>Click the <b>View</b> tab. </li> <li>In the <b>Advanced settings</b> list, click <b>Show hidden files and folders</b> under <b>Hidden files and folders</b>, and then click <b>OK</b>.</li> <li>You can also simply type the path directly in the address bar if you do not want to show hidden files and folders.</li> </ol> </ol> <li>Back up the Cache.ini file. (Make a copy of it. <b>DO NOT DELETE THIS FILE</b>; only the XML files in the next step get deleted.)</li> <li>Delete all the XML configuration files in the GUID folder (<b>DO NOT DELETE THE FOLDER</b>). Do this so that you can verify that the GUID folder's content is replaced by new XML configuration files when the cache is rebuilt. <br /><b>Note</b> When you empty the configuration cache in the GUID folder, make sure that you do <b><u>NOT</u></b> delete the GUID folder or the <b>Cache.ini</b> file that is located in the GUID folder. </li> <li>Double-click the Cache.ini file. </li> <li>On the <b>Edit </b>menu, click <b>Select All</b>. </li> <li>On the <b>Edit</b> menu, click <b>Delete</b>. </li> <li>Type <b>1</b>, and then click <b>Save</b> on the <b>File</b> menu.
(Basically, when you are done, the only text in the Cache.ini file should be the number 1)</li> <li>On the <b>File</b> menu, click <b>Exit</b>. </li> <li>Start the Timer service. To do this, follow these steps: <ol> <li>Click <b>Start</b>, point to <b>Administrative Tools</b>, and then click <b>Services</b>. </li> <li>Right-click <b>SharePoint 2010 Timer</b>, and then click <b>Start</b>. </li> <li>Close the Services console. </li> </ol> </li> <li><b>Note</b> The file system cache is re-created after you perform this procedure. Make sure that you perform this procedure on all servers in the server farm. </li> <li>Make sure that the Cache.ini file in the GUID folder now contains its previous value. For example, make sure that the value of the Cache.ini file is no longer 1. </li> <li>Check the GUID folder to make sure that the XML files are repopulating. This may take a bit of time.</li> </ol> <p>For the original steps for clearing out the configuration cache in SharePoint 2007, there are many articles that cover the steps; one of them is the following: <a href=""></a></p><div style="clear:both;"></div>JamesWay Upcoming Articles for May and June 2011<p>Here is a preview of some of the content that I am currently working on for the next two months:</p> <ul> <li>Silverlight Development for SharePoint (Upcoming Series of Articles)</li> <li>User and Group Management in SharePoint 2010 for Site Admins (Upcoming Series of Articles)</li> <li>Video Series on User and Group Management to accompany the above Series of Articles.</li> <li>Developing Tools using the SharePoint 2010 Client Object Model (Upcoming Series of Articles)</li> </ul> <p> </p> <p>There will be more coming as well; these are just the top items I am building out for the immediate future.
Stay tuned and check back next week.</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay: Creating an Integrated BI Solution with Microsoft SharePoint Server<p>Just a quick heads up on some upcoming training coming your way in April. Amanda Brophy will be presenting on BI integration with SharePoint and Performance point server. This is a 60 minute webcast that will take place on April 21st. You can find out more and register on the following page: <a title="" href=""></a></p> <p>Here is a brief summary of the event and the topics covered:</p> <h3>Business Insights Webcast: Creating an Integrated BI Solution with Microsoft SharePoint Server (Level 100)</h3> <h4>Event ID: 1032477752</h4> <p><strong>Language(s):</strong> English.</p> <p><strong>Product(s):</strong> Microsoft SharePoint Server 2010.</p> <p><strong>Audience(s): </strong>Non-Tech Influencing BDM, Tech Influencing BDM.</p> <p>This webcast gives an overview of the value of business intelligence (BI) and an introduction to Microsoft PerformancePoint Server in Microsoft SharePoint Server 2010. Learn about product capabilities such as dynamic scorecards, dashboards, decomposition trees, and capabilities for self-service access to information. <br /><b>Presenter: </b>Amanda Brophy, Senior Product Manager, Microsoft Corporation</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay Explorer 9 Goes Live!<p>At 9pm tonight Pacific time Internet Explorer 9 will go live! To watch the Live Keynote from SXSW go to <a title="" href=""></a> Live right now! 5:40PM Pacific Time</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay up to date on SharePoint 2010 Certifications<p.</p> <p.</p> <p.</p> <p>Of course if you go over the exam topics in advance you will already know what topics are covered… I never bothered to check, so some of it was a surprise to me. And yes, I did pass both exams. 
</p> <p>For more info on the SharePoint Server 2010 Certifications and Topics covered, see the following:</p> <p><a href="" target="_blank">Microsoft Learning SharePoint Certifications Home Page</a></p> <p><a href="" target="_blank">MCTS For SharePoint Server</a></p> <p><a href="" target="_blank">Microsoft Professional (MCITP and MCPD) for SharePoint Server</a></p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay Extended on Microsoft Certification Packs<p>For:</p> <table border="0" cellspacing="0" cellpadding="0"><tbody> <tr> <td valign="top" width="160"> <p><b>Pack Size</b></p> </td> <td valign="top" width="160"> <p><b>Savings</b></p> </td> <td valign="top" width="160"> <p><b>Retakes</b></p> </td> </tr> <tr> <td valign="top" width="160"> <p>2 Exams</p> </td> <td valign="top" width="160"> <p>15%</p> </td> <td valign="top" width="160"> <p>Included with each exam</p> </td> </tr> <tr> <td valign="top" width="160"> <p>3 Exams</p> </td> <td valign="top" width="160"> <p>15%</p> </td> <td valign="top" width="160"> <p>Included with each exam</p> </td> </tr> <tr> <td valign="top" width="160"> <p>4 Exams</p> </td> <td valign="top" width="160"> <p>20%</p> </td> <td valign="top" width="160"> <p>Included with each exam</p> </td> </tr> <tr> <td valign="top" width="160"> <p>5 Exams</p> </td> <td valign="top" width="160"> <p>20%</p> </td> <td valign="top" width="160"> <p>Included with each exam</p> </td> </tr> <tr> <td valign="top" width="160"> <p>1 Exam</p> </td> <td valign="top" width="160"> <p>Regular Price</p> </td> <td valign="top" width="160"> <p>15% Additional Cost</p> </td> </tr> </tbody></table> <p>For more information on the Certification Pack offerings: <a href=""></a></p> <p>That should give you some extra incentive to knock out those exams for that certification you have been working on.</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay 2010 October CU Issue Workaround<p>For:</p> <p><strong><font color="#666666">“The 
October Cumulative Update for the packages listed above (</font></strong>SharePoint Server Package <strong>2394320 </strong>and<strong> </strong>Project Server Package <strong>2394322)</strong><strong><font color="#666666">.”</font></strong></p> <p><font color="#000000".</font></p> <p>For those who have already installed the October CU, check the SharePoint teams blog for the current workaround:</p> <p><a href="">SharePoint 2010 October Cumulative Update Issue: Details and Workaround</a> (From the SharePoint Product team blog)</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay and Updates–The SharePoint 2010 October CU Server Packages have released<p>The:</p> <p>Microsoft SharePoint Foundation (MSF) October CU Server-Package(Article is not Live yet for the MSF Package): <br /><a title=";EN-US;2394323" href=";EN-US;2394323">;EN-US;2394323</a></p> <p>Microsoft SharePoint Server 2010 (MSS) October CU Server-Package: <br /><a title=";EN-US;2394320" href=";EN-US;2394320">;EN-US;2394320</a></p> <p>Microsoft SharePoint Server 2010 + Project Server 2010 October CU Server-Package: <br /><a title=";EN-US;2394322" href=";EN-US;2394322">;EN-US;2394322</a></p> <p.</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay and Updates: SharePoint 2010 - One Patch to Rule them all!<p>So something that the SharePoint team has been working towards for awhile has finally come to fruition in the August CU this year. In the past it could be a real pain trying to figure out what patches were actually needed to get the different versions of SharePoint up to date and patched to the latest and greatest. Not anymore. At least, not in the case of any of the SharePoint 2010 versions. Sorry 2007 admins out there, there are reasons that 2007 has not been able to do this yet, but that is outside the scope of this article.</p> <p>As of the August CU, SharePoint Server Packages are now Consolidated based on what SharePoint products you have installed. 
Running Microsoft SharePoint Foundation (MSF)? How about Microsoft SharePoint Server or SharePoint with Project Server 2010? Well, your life just got a whole lot easier. There is now a CU Server Package for each of these that includes all products. We have a CU for just Foundation, one for Server that also includes the CU for Foundation, and even one for SharePoint + Project Server that includes all the patches for all 3 products! Also included are the language packs, etc… So you should truly have one SharePoint patch to apply when it comes to CU’s. The key to figuring out which ones contain all the needed patches is to look for the term Server-Package. These are the ones that contain multiple hotfix packages. Here are the relevant links for the August CU Server-Packages:</p> <h3><a href="" target="_blank"><font size="3">Description of the SharePoint Foundation 2010 Cumulative Update Server Hotfix Package (SharePoint Foundation server-package): August 31, 2010</font></a></h3> <h3><font size="3"><a href="" target="_blank">Description of the SharePoint Server 2010 Cumulative Update Server Hotfix Package (MOSS server-package): August 31, 2010</a></font></h3> <h3><font size="3"><a href="" target="_blank">Description of the SharePoint Server 2010 and Project Server 2010 Cumulative Update Server Hotfix Package (MOSS server-package, Project server-package): August 31, 2010</a></font></h3> <p>So each CU now contains the patches for its base or dependent products. For example, you can’t have Microsoft SharePoint Server without having Foundation installed, so the MSS patches now include all fixes for Foundation. Project Server relies on SharePoint Server, so it includes all the MSS patches as well, and since, again, MSS depends on Foundation, it also includes the Foundation packages.
So the SharePoint + Project CU Server-Package contains the updates for all three.</p> <p>The easiest way to verify this is to look at the files listed towards the bottom of the KB articles for each package. Notice that in the Foundation Server Package, you have the file STS-X-NONE.msp listed with all the files and versions that it will update in SharePoint Foundation Server. Then look at the MSS and MSS + Project Server KB’s and notice that the same STS-X-NONE.msp is listed as part of each package with the same files and version numbers. So each package contains the patches from its base or dependent products.</p> <p>So now your life should be much easier when managing your 2010 SharePoint environments, well, at least the patching piece should be easier. Also, for those wanting some more guidance around patching SharePoint 2010 in general, including limiting downtime for High Availability, etc., here you go:</p> <p><a href="">Software updates overview (SharePoint Server 2010)</a> <br />This article provides an overview of the software update process for SharePoint Server.</p> <p><a href="">Prepare to deploy software updates (SharePoint Server 2010)</a> <br />This article helps you determine which approach to use to update the servers or server farms in your environment, and lists the steps that you must take before you can start to install the update. </p> <p><a href="">Install a software update (SharePoint Server 2010)</a> <br />This article contains instructions for installing a software update and upgrading your content to that level.</p><div style="clear:both;"></div>JamesWay Silicon Valley Code Camp – October 9th and 10th<p>This October will see the return of the Silicon Valley Code Camp. There are a ton of great sessions and great presenters, with sessions covering just about any topic a developer could ask for. Everything from different programming languages and methodologies to writing code for specific products and platforms.
You can find out more about it here: <a href="" target="_blank">Silicon Valley Code Camp 10</a></p> <p>I will also be onsite presenting a session on custom logging services and instrumentation in SharePoint 2010. The session will focus on creating a custom logging service in SharePoint 2010 to log events to the SharePoint Trace(ULS) Logs. The main focus is to allow for logging from custom code such as features, web parts, etc… to better troubleshoot issues with custom code in SharePoint 2010. This will hopefully allow many developers to avoid hitting that brick wall where your deployed code fails, but there is no useful data in the SharePoint logs to help troubleshoot or resolve any conflicts or issues.</p> <p>You can register for the Code Camp here: <a title="" href=""></a></p> <p>For a listing of all the available sessions you can go here: <a href=""></a> </p> <p>For those interested, you can sign up for my session as well by going here: <a title="" href=""></a></p> <p>Even if you are not interested in my particular session, feel free to let me know you are going to be at the code camp in the comments below, maybe I will see you there!</p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay PivotViewer – It’s alive! And so is your data Visualization!<p>**Update – Channel 9 Interview link has been updated and is Live** The Silverlight PivotViewer is a powerful new control that allows you to visualize large amounts of data is now officially released. Everyone from developers to designers can quickly and easily build large collections of data and embed them in your website. Ever wanted to visually sort and group large amounts of data? Ever needed to quickly recognize patterns in your data? The PivotViewer gives you the power to view and manipulate large sets of data quickly and flexibly. 
You can find the PivotViewer and associated documentation and training materials here: <a href=""></a> and here are some more useful links to get you going as well:</p> <p align="center"><a href="" target="_blank"><img style="border-bottom: 0px; border-left: 0px; display: block; float: none; margin-left: auto; border-top: 0px; margin-right: auto; border-right: 0px" title="PivotViewer Control" border="0" alt="PivotViewer Control" src="" width="644" height="362" /></a>PivotViewer in Action </p> <p>· Silverlight PivotViewer Control Download and technical documentation: <a href=""></a> </p> <p>· Silverlight PivotViewer Community forum, feel free to post your collections and technical questions: <a href=""></a></p> <p>· Check out a partner that is using the PivotViewer live on their website today: <a href=""></a></p> <p>· Interview with Brian Goldfarb and Brett Brewer on channel 9: <a href=""></a> (**This is now live, this is the new URL**)</p> <p>· You can view and feel free to comment on the <a href=""></a> - GetSatisfaction post from Live Labs </p> <p>· Live Labs Facebook page, <a href=""></a></p><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay content databases are available for this operation when restoring a site<p>Recently one of my customers was seeing some odd behavior in SharePoint 2007 while attempting to restore a site. Each time the restore operation (STSADM –o restore) was run, the following error message would appear:</p> <p><font color="#0080ff">No content databases are available for this operation. Create a content database, and then try the operation again. To create a content database, click "Content databases" on the Application Management page, select the Web application to use, and then click "Add a content database".</font></p> <p>Now this seems pretty straight forward. 
Except for one thing: the particular web application that the site collection was being restored to had plenty of Content Databases (around 10), and most of them had plenty of room for new sites. The site quotas had not been hit for these content databases and none of the Content Databases was locked or in read-only mode.</p> <p>Looking at the ULS logs, we got more information about the error included in the stack trace. Here is the full error in the ULS logs:</p> <p><font color="#0080ff">05/17/2010 14:57:22.18 STSADM.EXE (0x11C0) 0x19F0 Windows SharePoint Services General 72ju High stsadm: No content databases are available for this operation. Create a content database, and then try the operation again. To create a content database, click "Content databases" on the Application Management page, select the Web application to use, and then click "Add a content database". Callstack: at Microsoft.SharePoint.Administration.SPContentDatabaseCollection.FindBestContentDatabaseForSiteCreation(Guid siteIdToAvoid, Guid webIdToAvoid) at Microsoft.SharePoint.Administration.SPSiteCollection.Restore(String strSiteUrl, String strFilename, Boolean bOverwrite, Boolean hostHeaderAsSiteName) at Microsoft.SharePoint.StsAdmin.SPAdminRestore.RunSiteCollectionRestore(StringDictionary keyValues) at Microsoft.SharePoint.StsAdmin.SPAdminRestore.Run(StringDictionary keyValues) ... <br /></font></p> <p><font color="#0080ff">05/17/2010 14:57:22.18* STSADM.EXE (0x11C0) 0x19F0 Windows SharePoint Services General 72ju High ... 
at Microsoft.SharePoint.StsAdmin.SPStsAdmin.RunOperation(SPGlobalAdmin globalAdmin, String strOperation, StringDictionary keyValues, SPParamCollection pars)</font></p> <p>The part that tickled my brain was this bit:</p> <p><font color="#0080ff">Microsoft.SharePoint.Administration.SPContentDatabaseCollection<strong>.FindBestContentDatabaseForSiteCreation</strong>(Guid <strong>siteIdToAvoid</strong>, Guid <strong>webIdToAvoid</strong>)</font></p> <p><strong>FindBestContentDataBaseForSiteCreation</strong> tells us exactly what it is doing, the code at this point attempts to locate the best content database within the web application to restore the site collection to. The part to pay particular attention to, is that it takes two GUIDs when making this call, <strong>siteIdToAvoid</strong> and <strong>webIdToAvoid</strong>. </p> <p>So assume that when this call is being made, the GUID from the site collection that you are attempting to restore is being used. Makes perfect sense, what we are doing is looking for a content database that does not already contain a site collection with this ID. Further investigation showed that this particular restore file, was being used in place of a template in order to create new sites based off of an existing one. The issue with this, is that backup/restore, maintains the Site Collection (siteIdToAvoid) and Subsite (webIdToAvoid) ID of the backed up site. You cannot restore the same site to the same content database more than once unless you overwrite or delete the existing instance. So using a backup as a template for creating new sites, will not work more than once per content database.</p> <p>So in our case, the site had been restored enough times so that each content database in the web application already had an instance of the site, which is why the above error message was being displayed. 
Though a bit misleading depending on how you read it, it is actually an accurate error message.</p> <p>If you really need to take an existing site, and create other sites based on it, save the site as a template. And use STSADM to add that template to the gallery. You can then create new sites based on it. There are of course some limitations with templates as well, but they are pretty straight forward. Here are some references for site templates:</p> <ul> <li><a href="" target="_blank">Save a site as a site template</a> </li> <li><a href="" target="_blank">Manage site templates</a> </li> </ul> <p>Note that these are both direct from the Site Actions in the Browser. In order to add a site so that it can be globally available, you need to use STSADM to add the template to SharePoint.</p> <ul> <li><a href="" target="_blank">STSADM –o addTemplate Reference</a> </li> </ul><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay Rebirth! Here I come again!, I realize it has been a long long long time since I have done anything here on the blog. This will all be changing very soon. With the release of SharePoint 2010 things have really started jumping again here on the Premier Field Engineer/Dedicated Support Engineer teams at MS, especially for those of us who support SharePoint. Time to get back in the trenches and start sharing everything I can with you. More to come soon... you have been warned!<div style="clear:both;"></div><img src="" width="1" height="1">JamesWay Web Platform<p> </p> <p.</p> <p>For example if I choose that I want it to install the requirements for asp.net development, it will download and install the following:</p> <ul> <li>Internet Information Services (IIS) 5.1 on Windows XP SP3</li> <li>IIS 6.0 on Windows Server 2003 SP2</li> <li>IIS 7.0 on Windows Vista SP1 and Windows Server 2008</li> <li>SQL Server 2008 Express</li> <li>.NET Framework 3.5 SP1</li> <li>Visual Web Developer 2008 Express SP1</li> </ul> <p. 
</p> <p>You can get both the Web Platform and Web Application installers here: <br /><a title="" href=""></a></p> <p>One of the additional benefits is that the installer will check to make sure it is getting the newest version of each component, so you don't have to worry so much about figuring out what updates you need to download when the install runs.</p> <p>Try it out, and let us know your thoughts!</p>JamesWay Windows 7 – Keyboard Shortcuts for the desktop <TABLE border=1 cellSpacing=0 cellPadding=2 width=500> <TBODY> <TR> <TD vAlign=top width=244>Keyboard Shortcut</TD> <TD vAlign=top width=254>Description</TD></TR> <TR> <TD vAlign=top width=244>Window Key + Right Arrow</TD> <TD vAlign=top width=254>Dock window to the right hand side of the screen and maximize vertically. Basically maximize a window so it takes up the right hand side of the screen. <BR><BR>Once the window is docked to the right, hitting this combo again (in a multiple monitor setup) will move it to the next monitor to the right (if there is one to the right). <BR><BR>If the window is docked to the left side of the screen, this shortcut will un-dock it and put it back to the original position. <BR><BR>*You can get the same effect by clicking the top title area of a window and dragging it to the right hand edge of your screen.*</TD></TR> <TR> <TD vAlign=top width=244>Window Key + Left Arrow</TD> <TD vAlign=top width=254>Same as above (Window + Right Arrow) but will dock to the left hand side of the screen, or move the window to the next monitor to the left if one exists in a multiple monitor setup. <BR><BR>*You can get the same effect by clicking the top title area of a window and dragging it to the left hand edge of your screen.*</TD></TR> <TR> <TD vAlign=top width=244>Window Key + Up Arrow</TD> <TD vAlign=top width=254>Will maximize the currently selected window (Full Screen).
<BR><BR>*You can get the same effect by clicking the top title area of a window and dragging it to the top (upper) edge of your screen, or by double clicking the top title area of a non-maximized window.*</TD></TR> <TR> <TD vAlign=top width=244>Window Key + Down Arrow</TD> <TD vAlign=top width=254>Will restore a maximized window back to its default size. <BR><BR>Will minimize a window that is not maximized. <BR><BR>To minimize a maximized (Full Screen) window, simply hit this shortcut twice. <BR><BR>*You can get the same effect by double clicking the top title area of a maximized window.*</TD></TR> <TR> <TD vAlign=top width=244>Window Key + Shift + Up Arrow</TD> <TD vAlign=top width=254>On a non-maximized window will do a vertical maximize. Basically this will stretch the bottom and top of the window to hit the bottom and top of the screen while not changing the width of the window. <BR><BR>*This same effect can be done with the mouse by putting the mouse cursor on the top edge of a window so that the resize arrows appear, and then double clicking.*</TD></TR> <TR> <TD vAlign=top width=244>Window Key + E</TD> <TD vAlign=top width=254>While not a new shortcut, I mention it here as the functionality has changed a bit. In the current build of Win 7 Beta, this opens a window to the Libraries (Music, Videos, Documents, Pictures…) instead of going straight to the default My Computer window that Vista and XP used.</TD></TR></TBODY></TABLE> <DIV id="292dddb8-4972-4941-b8e4-744eea0e6aa1" class="wlWriterEditableSmartContent">Technorati Tags: <A href="" rel="tag">Windows 7</A>, <A href="" rel="tag">Tips and Tricks</A>, <A href="" rel="tag">Keyboard Shortcuts</A></DIV>JamesWay SharePoint: Getting Up To Date Official Style<P>Great.</P> <P>You can find the article here: <A href="" mce_href=""></A></P> <P>This should help for anyone who is looking to get up to date. Looks like I don't have to update my post now :) Enjoy!
And thanks to the product team for putting this out. This should help ease a lot of confusion.</P><img src="" width="1" height="1">JamesWay Soon: More SharePoint Patching Love<p>I will be posting an update on patching SharePoint in a few days. This will account for updates that have come out after the infrastructure updates.</p><img src="" width="1" height="1">JamesWay SharePoint: getting up to date with the moss and wss infrastructure updates<p><img style="border-right: 0px; border-top: 0px; border-left: 0px; border-bottom: 0px" height="137" alt="SharePoint Logo" src="" width="456" border="0" /> <img style="border-right: 0px; border-top: 0px; border-left: 0px; border-bottom: 0px" height="137" alt="WSS Logo" src="" width="244" border="0" /> </p> <p> </p> <p <a title="Todd Carters Blog" href="" target="_blank">HERE</a>.:</p> <ol> <li>Download any needed updates.</li> <li>Apply patches in order starting with the Central Administration server, do not run the PSConfig wizard (This will be done last)</li> <ol> <li>WSS Patches</li> <li>MOSS Patches</li> </ol> <li>Apply patches to the remaining servers in the farm. Again do not run the PSConfig portion of the update.</li> <li>Return to the central admin server once all servers are patched, run PSConfig on the Central Admin server.</li> <li>Once PSConfig completes, run it on the remaining servers one at a time.</li> </ol> <p>This is just a summary, Todd's article covers this in much more depth including detaching Content Databases to speed up the process as well as some other configuration details to streamline the process. I highly recommend reading his article before patching.</p> <p>The below section covers the patches that are needed and the order they should be applied. 
This will get you up to date from a fresh RTM install all the way to the Infrastructure Updates (IU).</p> <p>From an RTM MOSS installation use the following patching to get current(Make sure you read the articles and get the appropriate patches for 32 or 64 bit):</p> <ol> <li>WSS sp1 (<a href="" target="_blank">Available HERE</a>)</li> <li>Moss SP1 (<a href="" target="_blank">Available HERE</a>)</li> <li>WSS latest language(local) patch (953484) (<a href="" target="_blank">Available HERE</a>) *Use the "View and Request Hotfix Downloads" link at the top of the page.*</li> <li>WSS IU – Contains latest global but no language specific patches (which is why you are doing step3) (<a href="" target="_blank">Available HERE</a>)</li> <li>Moss IU – Contains latest global and language specific patches (<a href="" target="_blank">Available HERE</a>)</li> <li>AAM hotfix (Coming soon via KB 956248) This will be released soon and should also be available in the next rollup package.</li> <ol> <li>To get this hotfix now, follow the directions (<a title="Customer Support hotfix request." href="" target="_blank">HERE</a>) to call customer service and request it.</li> </ol> </ol> <p> </p> <p>The AAM hotfix is mentioned in the notes on the infrastructure updates. The KB for this is forthcoming and I will post when it is released as well. You can call customer service to request it using the link above. <strong>Always read the known issues portions of the KB articles when downloading a particular patch or hotfix.</strong> The above list is from RTM to current. If you already have SP1 installed, you can start at step 3. 
If you already have the patch from step 3 applied, you can go to step 4.</p> <p.</p> <p.</p> <p>So when doing any patching to MOSS/WSS you need to keep the following items in mind:</p> <ul> <li>WSS patches (32 or 64 bit)</li> <ul> <li>Local patches (Language Specific)</li> <li>Global patches</li> </ul> <li>MOSS patches (32 or 64 bit)</li> <ul> <li>Local Patches (Language Specific)</li> <li>Global patches</li> </ul> </ul> <p> </p> <p>SharePoint patching resources:</p> <p><a href="" target="_blank">Updates Resource Center for SharePoint Products and Technologies</a></p> <p><a href="">Understanding and deploying hotfixes, public updates, and service packs</a> (WSS)</p> <p><a href="" target="_blank">Deploying software updates for Windows SharePoint Services 3.0</a></p> <p><a href="">Understanding and deploying hotfixes, public updates, and service packs</a> (MOSS)</p> <p><a href="" target="_blank">Deploying software updates for SharePoint Server 2007</a></p> <p> </p> <p>Hopefully this helps sort through some of the confusion when it is time to patch your SharePoint environment. I will post more information as it becomes available especially in regards to the upcoming AAM hotfix KB.</p> <p> </p> <p>Special Thanks to:</p> <p>Gabe Bratton</p> <p>Tony McIntyre</p> <p>Todd Carter</p><img src="" width="1" height="1">JamesWay... 
Some Initial thoughts.</P> <P.</P> ...</P><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay and WPF, the journey continues'": "<A title="Windows Presentation Foundation Unleashed - By Adam Nathan" href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">WPF Unleashed</SPAN></A>" (<A title="Professional SharePoint 2007 Web Content Management Development: Building Publishing Sites with Office SharePoint Server 2007" href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">Andrew's Book</SPAN></A>) I am really interested in seeing how I can leverage Silverlight to enhance the publishing controls in MOSS.<?xml:namespace prefix = o<o:p></o:p></SPAN></P> '">I have also been spending a lot of time over on <A title="Microsoft Home Page for Everything Silverlight." href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">Silverlight.net</SPAN></A> and <A title="Microsoft Community for Learning .Net Client Development." href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">WindowsClient.net</SPAN></A>:<o:p></o:p></SPAN></P> <UL type=disc> Service" href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">Silverlight Streaming Service from Microsoft Live</SPAN></A> (Free Sign SDK" href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">Silverlight Streaming Home on MSDN</SPAN>< Plugin for Windows Live Writer" href="" target=_blank><SPAN style="COLOR: blue; mso-bidi-font-size: 11.0pt">Gadget for Inserting your Silverlight Streaming application into Windows Live Spaces Blogs</SPAN></A><o:p></o:p></SPAN></LI></UL> '">Everything is going good so far. There is a lot of great content out there on Silverlight and WPF. I can't wait to start building something exciting. 
Talk to you later...<o:p></o:p></SPAN></P><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay + VS2008: New tools for MOSS Ninjas!<P>I just realized that the SharePoint extensions for VS2008 have shipped. The official name is Windows SharePoint Services 3.0 Tools: Visual Studio 2008 Extensions, Version 1.2 Not sure how I missed this! You can find them <A class="" title="Windows SharePoint Services 3.0 Tools: Visual Studio 2008 Extensions, Version 1.2" href="" mce_href="">HERE</A>. The user guide is the same as the 1.1 toolset and can be found <A class="" title="WSS Tools Uer Guide v1.1" href="" mce_href="">HERE</A>.:</P> <P><STRONG>Visual Studio 2008 Project Templates </STRONG></P> <UL> <LI>Web Part <LI>Team Site Definition <LI>Blank Site Definition <LI>List Definition <LI>Empty SharePoint Project </LI></UL> <P><BR><B>Visual Studio 2008 Item Templates</B> (items that can be added into an existing project) </P> <UL> <LI>Web Part <LI>Custom Field <LI>List Definition (with optional Event Receiver) <LI>Content Type (with optional Event Receiver <LI>Module <LI>List Instance <LI>List Event Handler <LI>Template </LI></UL> <P><BR><B>SharePoint Solution Generator</B> </P> <UL> <LI>This stand-alone program generates a Site Definition project from an existing SharePoint site. 
The program enables developers to use the browser and Microsoft Office SharePoint Designer to customize the content of their sites before creating code by using Visual Studio.</LI></UL> <P>So go grab it, get your feet wet and let me know what you think...</P><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay up on WCM in MOSS...<P>This week I will be attending some training with <A class="" title="Andrew Connell (MS MVP)" href="" mce_href="">Andrew Connell</A>.</P> <P> For more info on the WCM offerings that Andrew Delivers with Ted Pattison group:</P> <P><A id=viewpost_ascx_TitleUrlOffice SharePoint Server 2007 Web Content Management Developer Training</A></P> <P><A class="" title="The Ted Pattison Group SharePoint Education Offerings" href="" mce_href="">SharePoint Education for Users, Administrators and Developers</A></P> <P> Anyway, Off to a week of fun in Texas...</P><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay, Silverlight, Games... Let the journey Begin....</P> <P.</P> <P>I look forward to hearing from you.</P><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay Yourself! Expression Suite 2 released!<P>Expression suite 2 has been released. For more information you can always use <A href=""></A>. This release brings new versions of all the expression products (Blend, Design, Web, Media, Encoder).</P> <P>A couple of notes of interest:</P> <P>Blend 2 supports Silverlight 1.0 applications natively now! Expression Web 2 supports PHP! 
Just a couple of things that stuck out to me :) Below are the links to the MSDN documentation, Tutorials and Feature descriptions for each product!</P> <DT><A id=ctl00_rs1_mainContentContainer_ctl00Expression Blend 2</A> <DT><A id=ctl00_rs1_mainContentContainer_ctl01Expression Design 2</A> <DT><A id=ctl00_rs1_mainContentContainer_ctl02Expression Encoder 2</A> <DT><A id=ctl00_rs1_mainContentContainer_ctl03Expression Media 2 for Macintosh</A> <DT><A id=ctl00_rs1_mainContentContainer_ctl04Expression Media 2 for Windows</A> <DT><A id=ctl00_rs1_mainContentContainer_ctl05Expression Web 2</A></DT> <P>I will be back with more in the near future!</P><div style="clear:both;"></div><img src="" width="1" height="1">JamesWay | http://blogs.msdn.com/b/jamesway/atom.aspx | CC-MAIN-2014-52 | refinedweb | 6,832 | 50.87 |
CGI::Application::Plugin::Session - Plugin that adds session support to CGI::Application
version 1.05
use CGI::Application::Plugin::Session; my $language = $self->session->param('language');
CGI::Application::Plugin::Session seamlessly adds session support to your CGI::Application modules by providing a CGI::Session object that is accessible from anywhere in the application.
Lazy loading is used to prevent expensive file system or database calls from being made if the session is not needed during this request. In other words, the Session object is not created until it is actually needed. Also, the Session object will act as a singleton by always returning the same Session object for the duration of the request.
This module aims to be as simple and non obtrusive as possible. By not requiring any changes to the inheritance tree of your modules, it can be easily added to existing applications. Think of it as a plugin module that adds a couple of new methods directly into the CGI::Application namespace simply by loading the module.
CGI::Application::Plugin::Session - Add CGI::Session support to CGI::Application
This method will return the current CGI::Session object. The CGI::Session object is created on the first call to this method, and any subsequent calls will return the same object. This effectively creates a singleton session object for the duration of the request. CGI::Session will look for a cookie or param containing the session ID, and create a new session if none is found. If
session_config has not been called before the first call to
session, then it will choose some sane defaults to create the session object.
# retrieve the session object my $session = $self->session; - or - # use the session object directly my $language = $self->session->param('language');
This method can be used to customize the functionality of the CGI::Application::Plugin::Session module. Calling this method does not mean that a new session object will be immediately created. The session object will not be created until the first call to $self->session. This 'lazy loading' can prevent expensive file system or database calls from being made if the session is not needed during this request.
The recommended place to call
session_config is in the
cgiapp_init stage of CGI::Application. If this method is called after the session object has already been accessed, then it will die with an error message.
If this method is not called at all then a reasonable set of defaults will be used (the exact default values are defined below).
The following parameters are accepted:
This allows you to customize how the CGI::Session object is created by providing a list of options that will be passed to the CGI::Session constructor. Please see the documentation for CGI::Session for the exact syntax of the parameters.
CGI::Session Allows you to set an expiry time for the session. You can set the DEFAULT_EXPIRY option to have a default expiry time set for all newly created sessions. It takes the same format as the $session->expiry method of CGI::Session takes. Note that it is only set for new session, not when a session is reloaded from the store.
This allows you to customize the options that are used when creating the session cookie. For example you could provide an expiry time for the cookie by passing -expiry => '+24h'. The -name and -value parameters for the cookie will be added automatically unless you specifically override them by providing -name and/or -value parameters. See the CGI::Cookie docs for the exact syntax of the parameters.
NOTE:.
If set to a true value, the module will automatically add a cookie header to the outgoing headers if a new session is created (Since the session module is lazy loaded, this will only happen if you make a call to $self->session at some point to create the session object). This option defaults to true. If it is set to false, then no session cookies will be sent, which may be useful if you prefer URL based sessions (it is up to you to pass the session ID in this case).
The following example shows what options are set by default (ie this is what you would get if you do not call session_config).
$self->session_config( CGI_SESSION_OPTIONS => [ "driver:File", $self->query, {Directory=>'/tmp'} ], COOKIE_PARAMS => { -path => '/', }, SEND_COOKIE => 1, );
Here is a more customized example that uses the PostgreSQL driver and sets an expiry and domain on the cookie.
$self->session_config( CGI_SESSION_OPTIONS => [ "driver:PostgreSQL;serializer:Storable", $self->query, {Handle=>$dbh} ], COOKIE_PARAMS => { -domain => 'mydomain.com', -expires => '+24h', -path => '/', -secure => 1, }, );
This method will add a cookie to the outgoing headers containing the session ID that was assigned by the CGI::Session module.
This method is called automatically the first time $self->session is accessed if SEND_COOKIE was set true, which is the default, so it will most likely never need to be called manually.
NOTE that if you do choose to call it manually that a session object will automatically be created if it doesn't already exist. This removes the lazy loading benefits of the plugin where a session is only created/loaded when it is required.
It could be useful if you want to force the cookie header to be sent out even if the session is not used on this request, or if you want to manage the headers yourself by turning SEND_COOKIE to false.
# Force the cookie header to be sent including some # custom cookie parameters $self->session_cookie(-secure => 1, -expires => '+1w');
This method will let you know if the session object has been loaded yet. In other words, it lets you know if $self->session has been called.
sub cgiapp_postrun { my $self = shift; $self->session->flush if $self->session_loaded;; }
This? }
In a CGI::Application module:
# configure the session once during the init stage sub cgiapp_init { my $self = shift; # Configure the session $self->session_config( CGI_SESSION_OPTIONS => [ "driver:PostgreSQL;serializer:Storable", $self->query, {Handle=>$self->dbh} ], DEFAULT_EXPIRY => '+1w', COOKIE_PARAMS => { -expires => '+24h', -path => '/', }, SEND_COOKIE => 1, ); } sub cgiapp_prerun { my $self = shift; # Redirect to login, if necessary unless ( $self->session->param('~logged-in') ) { $self->prerun_mode('login'); } } sub my_runmode { my $self = shift; # Load the template my $template = $self->load_tmpl('my_runmode.tmpl'); # Add all the session parameters to the template $template->param($self->session->param_hashref()); # return the template output return $template->output; }
CGI::Application, CGI::Session, perl(1)
Cees Hek <ceeshek@gmail.com>
This library is free software. You can modify and or distribute it under the same terms as Perl itself.
Cees Hek <ceeshek@gmail.com>
This software is copyright (c) 2013 by Cees Hek.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | https://metacpan.org/pod/release/FREW/CGI-Application-Plugin-Session-1.05/lib/CGI/Application/Plugin/Session.pm | CC-MAIN-2014-23 | refinedweb | 1,118 | 51.07 |
Fortran 90 namelist parser
Project description
A Python module and command line tool for parsing Fortran namelist files
Documentation
The complete documentation for f90nml is available from Read The Docs.
About f90nml
f90nml is a Python module and command line tool that provides a simple interface for the reading, writing, and modifying Fortran namelist files.
A namelist file is parsed and converted into an Namelist object, which behaves like a standard Python dict. Values are converted from Fortran data types to equivalent primitive Python types.
The command line tool f90nml can be used to modify individual values inside of a shell environment. It can also be used to convert the data between namelists and other configuration formats. JSON and YAML formats are currently supported.
Quick usage guide
To read a namelist file sample.nml which contains the following namelists:
&config_nml input = 'wind.nc' steps = 864 layout = 8, 16 visc = 1.0e-4 use_biharmonic = .false. /
we would use the following script:
import f90nml nml = f90nml.read('sample.nml')
which would would point nml to the following dict:
nml = { 'config_nml': { 'input': 'wind.nc', 'steps': 864, 'layout': [8, 16], 'visc': 0.0001, 'use_biharmonic': False } }
File objects can also be used as inputs:
with open('sample.nml') as nml_file: nml = f90nml.read(nml_file)
To modify one of the values, say steps, and save the output, just manipulate the nml contents and write to disk using the write function:
nml['config_nml']['steps'] = 432 nml.write('new_sample.nml')
Namelists can also be saved to file objects:
with open('target.nml') as nml_file: nml.write(nml_file)
To modify a namelist but preserve its comments and formatting, create a namelist patch and apply it to a target file using the patch function:
patch_nml = {'config_nml': {'visc': 1e-6}} f90nml.patch('sample.nml', patch_nml, 'new_sample.nml')
Command line interface
A command line tool is provided to manipulate namelist files within the shell:
$ f90nml config.nml -g config_nml -v steps=432
&config_nml input = 'wind.nc' steps = 432 layout = 8, 16 visc = 1.0e-4 use_biharmonic = .false. /
See the documentation for details.
Installation
f90nml is available on PyPI and can be installed via pip:
$ pip install f90nml
The latest version of f90nml can be installed from source:
$ git clone $ cd f90nml $ pip install .
Package distribution
f90nml is not distributed through any official packaging tools, but it is available on Arch Linux via the AUR:
$ git clone $ cd python-f90nml $ makepkg -sri
Volunteers are welcome to submit and maintain f90nml on other distributions.
Local install
Users without install privileges can append the --user flag to pip from the top f90nml directory:
$ pip install --user .
If pip is not available, then setup.py can still be used:
$ python setup.py install --user
When using setup.py locally, some users have reported that --prefix= may need to be appended to the command:
$ python setup.py install --user --prefix=
YAML support
The command line tool offers support for conversion between namelists and YAML formatted output. If PyYAML is already installed, then no other steps are required. To require YAML support, install the yaml extras package:
$ pip install f90nml[yaml]
To install as a user:
$ pip install --user .[yaml]
Contributing to f90nml
Users are welcome to submit bug reports, feature requests, and code contributions to this project through GitHub. More information is available in the Contributing guidelines.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/f90nml/ | CC-MAIN-2021-43 | refinedweb | 579 | 57.06 |
Hi all,
How to read toast message in android version 7.
Thanks
How to read toast message in android
Hi all,
Hi,
I am also facing the same issue. Any Solution?
Thanks
Solution found.
Use mobile screen(s) captured, then using the approach here to automate:
Do you have any sample scripts here?
@Chew Mong Leng do you have any samples?
Hi U.G.,
You can try this for Android toasts:
import com.kms.katalon.core.mobile.keyword.internal.MobileDriverFactory import io.appium.java_client.AppiumDriver AppiumDriver<?> driver = MobileDriverFactory.getDriver() def toast = driver.findElementByXPath("//android.widget.Toast[@text='Added to cart']")
Hope this helps,
Chris
@Chris_Trevarthen This code didn’t work.
Can not capturer Toast message on Android. were you able to capture Toast message?
I’m able to capture the toast message on an Android emulator using API v27. Note that my tests are using “uiautomator2”, which I think is what’s required for capturing the toasts. I didn’t have to do anything special to set that up, though, it seemed to be the default. You should be able to see in your logs the
automationName property of Desired Capabilities whether uiautomator 2 is being used.
I wonder if you could log out what Appium thinks is on the screen at the time when the toast appears. If you want to see some more detail about exactly what Appium/Katalon detects on the screen you can log out the XML contents of the screen. You should be able to do that by putting the following import statements at the top of your test case:
import com.kms.katalon.core.logging.KeywordLogger import com.kms.katalon.core.mobile.keyword.internal.MobileDriverFactory import io.appium.java_client.AppiumDriver
Then in the code for your test, add the following right after the toast is expected to appear:
AppiumDriver<?> driver = MobileDriverFactory.getDriver() KeywordLogger log = new KeywordLogger() log.logInfo(driver.getPageSource())
When the test runs, you should see an entry in the Log Viewer for “Statement - log.logInfo(driver.getPageSource())” that shows the start of the XML for the screen. If you tap on it, you’ll see all of the XML for the screen on the right-hand pane.
I would check that XML to see if the toast shows up there. If not, then you might consider adding a
MobileBuiltInKeywords.delay() of a second and try printing the screen again to see if the test check is firing before the toast appears.
Hope this helps,
Chris
My XML isn’t picking up the toast message. I try adding the delay but nothing is showing up.
is your value for automationName - “uiautomator 2” with a space or “UiAutomator2”?
AppiumDriver<?> what goes inside the "<?>"
–What version of Appium do you have?
When logging my Desired Capabilities, the automationName shows as
uiautomator2 (all one word, all lower case). Here’s what my full Desired Capabilities output looks like:
{"desiredCapabilities": {"app":"/Users/chris/mobile-beta.apk", "autoGrantPermissions":true, "automationName":"uiautomator2", "bootstrapPath":"/Users/chris/WebDriverAgent", "deviceId":"emulator-5554", "deviceName":"emulator-5554 (Android SDK built for x86 - Android 8.1.0)", "fullReset":false, "newCommandTimeout":1800, "noReset":true, "platform":"ANDROID", "platformName":"Android", "udid":"emulator-5554", "usePrebuiltWDA":true, "useXctestrunFile":true }
If you walk through your app with the Katalon Mobile Spy, are you able to see the toast, especially are you able to capture it as a Test Object when it shows on your screen? If not, that might be a sign of some other incompatibility with an Android SDK version.
– Chris
Object Spy Isn’t seeing Toast message object. – I will continue my investigation and see what I can find and up this this forum
I’ve been reading some Appium github posts from people who were unable to view toast messages, and it seems to point to having older versions of Android SDK. I wonder if you’re able to update your version of Android Studio to the latest, could you see the toast messages in the Spy Mobile then?
– Chris
Did that yesterday but I think I uninstall my VDs and reinstall them and try again.
I updated/reinstalled Android SDK and all my VD.
I am running 5.10 katalon studio
I am running 1.10 Appium - uiautomator2 driver
Still not seeing - toast message
That’s interesting - I would not have expected Appium 1.10 to work at all (it’s been my experience that Katalon didn’t work with anything higher than Appium 1.8.1). I’m using Appium 1.8.1 and Katalon Studio 5.10.1. Any chance you could try with Appium 1.8.1?
NOTE: Appium 1.8.1 isn’t the same as Appium Desktop 1.8.1 - it’s a bit confusing with the version numbers. If you’re using Appium Desktop, you’ll need Appium Desktop 1.6.3.
– Chris
Sorry guys was not on this forum actively.
@Zarashima, @U.G.Chinthaka_Devind, here the sample code that works on basic OCR:
First you make use Katalon API to capture some pictures/snapshots:
SnapTotal = total snapshots
ScrSnapDir = path to store snapshots as you would need to clear this prior snapshot, sorta like working directory for OCR.
for (def index : (0…SnapTotal)) {
String filename = Mobile.concatenate([
ScrSnapDir,
‘\’,
‘file’,
index,
‘.png’] as String[], FailureHandling.STOP_ON_FAILURE)
Mobile.takeScreenshot(filename, FailureHandling.STOP_ON_FAILURE)
}
You need Tess4j (tess4j-4.0.1.jar, in my case) for the basic OCR as here I not involving yet OpenCV (see another topic created today). Here the basic codes:
Tesseract instance = new Tesseract();
instance.setDatapath(TessDataDir);
instance.setLanguage(TessDataLang);
Since Tess4j v4alpha, the whitelist seems broken; thus, left out from here.
Basically variables, you can assign via Katalon Profiles:
TessDataLang = ‘eng’
TessDataDir = ‘D:\Dev\tools\tessdata.ocr’ <- this host the Eng data dictionary (eng.traineddata)
Get files from:
Code to perform simple OCR, where imageFile is individual full-path-filename that you snapped earlier that you can perform a loop scan to stop when the TEXT you looking for found.
try {
String result = instance.doOCR(imageFile);
}
catch (TesseractException e) {
return ‘Error while reading image’
}
Notes:
- result basically containing TEXT that from OCR.
- If no matched TEXT found after all snapped files processed and you sure one of the image is containing the TEXT that needs recognized; this you would need OpenCV to further enhanced the image for more ‘finer’ before passing to doOCR().
Here the list of Java imports used and you need to place all the depending JAR files imported to Katalon Project or manual copy to Drivers folder (restart Katalon) you are sorted.
import net.sourceforge.tess4j.Tesseract as Tesseract;
import net.sourceforge.tess4j.TesseractException as TesseractException;
Let me know any further help needed.
Cheers.
At least once, you would need the Mobile simulator for generating the snaps.
For the above simple OCR, you don’t need Appium or any Device-simulator to up, you can even create a simple UT just for OCR testing to one or some picture files.
You can make use, e.g., Chrome Headless for the UT.
@GeneralChingChicken, there are a few ways you can take photo of the toast message:
Manually and the toast message stay ‘long’ enough for the snap, e.g., Spy Mobile.
Automatically using loop, see my sample code step-1 (above).
That to perform right after some ‘simulation’ steps of which surely will trigger the toast message to appear, you just snap for some pictures immediately after your/the submission.
Before you get to either option 1 or 2, ensure you can manually simulate the toast-message successfully for starter.
Cheers.
Still no luck,
I did what you mention, and ran object spy but still no luck. No clue whats going on.
@GeneralChingChicken, you cannot use latest Appium version, but to downgrade it.
Try this command for specific version:
npm install -g appium@1.8.1
Check Katalon supported versions here.
Also, read here further details in relating to above npm command.
After npm install appium, you can then installing appium-doctor as I find this useful to help you resolved any other dependent issues or installer.
Cheers.
To help diagnose further, could you share the console log when you run your test? I’m especially interested in what Appium thinks is on the screen at the time the toast shows up (that’s the output from the
driver.getPageSource() command I mentioned above. However, having the full console log would be helpful.
Also, I realized that I didn’t answer your question from above about what what version of Appium or what goes in the “?” for
AppiumDriver<?>.
I’m using Appium Server 1.8.2-beta (although 1.8.1 should be fine for Android).
As for the “?”, you just leave it as-is - it’s sort of a wildcard when accessing some types of classes.
Thanks,
Chris | https://forum.katalon.com/t/how-to-read-toast-message-in-android/11191 | CC-MAIN-2021-43 | refinedweb | 1,462 | 57.87 |
# Protocol for communication between iframe and the main window
From time to time, developers need to establish communication between several browser tabs to be able to send messages from one tab to another and receive responses. We have also faced this need at some point.
Some solutions already exist (for instance, the BroadcastChannel API). However, its browser support [leaves a lot to be desired](https://caniuse.com/#search=BroadcastChannel), so we decided to write our own library. When the library was ready, that functionality was no longer required. Nevertheless, another task emerged: communication between an iframe and the main window.
On closer examination, it turned out that two-thirds of the library would not have to be changed — only some code refactoring was necessary. The library is a communication PROTOCOL that can work with text data. It can be applied in all cases in which text is transferred, such as iframes, window.open, worker, browser tabs or WebSocket.
How it works
------------
Currently, the protocol has two functions: sending messages and subscription to events. Any message in the protocol is a data object. For us, the main field in that object is **type**, which tells us what kind of message it is. The **type** field is an [enum](https://en.wikipedia.org/wiki/Enumerated_type) with the following values:
* 0 — sending a message
* 1 — sending a request
* 2 — receiving a response.
### Sending a message
Sending a message doesn't imply a response. To send a message, we create an object with the following fields:
* **type** — event type 0
* **name** — user event name
* **data** — user data (JSON-like).
On receiving a message on the other side with the **type** field **= 0**, we know it is an event, with an existing event name and data. All we have to do is broadcast the event (almost a standard [EventEmitter](https://tproger.ru/translations/event-emitter-javascript/) pattern).
How it works in a simple schema:

### Sending a request
Sending a request implies that the library creates a request **id** internally and then waits for a response carrying the same **id**. Upon successfully receiving a response, all auxiliary fields are removed from it, and the response is returned to the user. A maximum response time can also be set.

As for requests, this is a bit more complicated. To respond to a request, you need to announce the methods that are available in our protocol. This is done with the **registerRequestHandler** method. It accepts the name of a request for a response and a function that returns the response. To create a request, we need an **id**, and we can basically use **timestamp**, but it is not very convenient to adjust. So, this is a class ID that sends a response + response number + string literal. Now we create an object with the following fields: **id**, **type = 1**, **name** as request name and **data** as user data (JSON-like).
On receiving a request, we check if we have an API for responding to this request, and if we don't, we return an error message. If we have an API, we return the result of executing the function from **registerRequestHandler**, with the respective request name.
For the response, we create an object with the fields: **type** = 2, **id** as the ID of the message to which we respond, **status** as a field that says if this response is an error (if we don't have an API, or the handler incurred an error, or the user returned a rejected promise, or another error occurs (serialise)), and **content** as response data.
So, we have described the operation of the protocol, which executes the **Bus** class but has not explained the process of sending and receiving messages. For that, we need class adapters with three methods.
* **send** is a method that is basically responsible for sending messages
* **addListener** is a method for subscribing to events
* **destroy** is a method for deleting subscriptions when deleting **Bus**.
Adapters. Execution of the protocol
-----------------------------------
To launch the protocol, currently, only the adapter for working with iframe/window is ready. It uses [postMessage](https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage) and [addEventListener](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener). It's pretty straightforward: you need to send a message to **postMessage** with a correct **origin** and listen to messages over **addEventListener** on the **"message"** event.
We encountered a few nuances:
* You should always listen to responses on YOUR window and send them on the OTHER window (iframe, opener, parent, worker, etc). If you try to listen to a message on the OTHER window and the origin differs from the current one, an error will occur.
* On receiving a message, make sure that it was directed to you: the window can accommodate many analytics messages, WebStorm (if you use it) and other iframes, so you need to be sure the event is in your protocol and intended for you.
* You can't return a **[promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)** with a [Window](https://developer.mozilla.org/en-US/docs/Web/API/Window) copy, because **promise** when returning the result, will try to check if the result has the **then** method. If you don't have access to the window (for instance, a window with another origin), an error will occur (although not in all browsers). To avoid this issue, it would be enough to wrap the window in the object and put an object into the promise that has a link to the correct window.
Usage examples:
---------------
The library is available in NPM and you can easily install it via your package manager — [@waves/waves-browser-bus](https://www.npmjs.com/package/@waves/waves-browser-bus)
To establish two-way communication with an iframe, it is enough to use this code:
```
import { Bus, WindowAdapter } from '@waves/waves-browser-bus';
const url = 'https://some-iframe-content-url.com';
const iframe = document.createElement('iframe');
WindowAdapter.createSimpleWindowAdapter(iframe).then(adapter => {
const bus = new Bus(adapter);
bus.once('ready', () => {
// A message from iframe received
});
});
iframe.src = url; // It's preferable to assign a url after calling WindowAdapter.createSimpleWindowAdapter
document.body.appendChild(iframe);
```
Inside iframe:
```
import { Bus, WindowAdapter } from '@waves/waves-browser-bus';
WindowAdapter.createSimpleWindowAdapter().then(adapter => {
const bus = new Bus(adapter);
bus.dispatchEvent('ready', null); // A message has been sent to the parent window
});
```
What's next?
------------
We have a flexible and versatile protocol that can be used in any situation. Next, I plan to separate the adapters from the protocol and put them into separate npm packages, and add adapters for worker and browser tabs. I want writing adapters executing the protocol for any other purposes to be as easy as possible. If you want to join the development process or have ideas regarding the library's functionality, you are welcome to get in touch in the [repo](https://github.com/wavesplatform/waves-browser-bus). | https://habr.com/ru/post/456016/ | null | null | 1,179 | 54.63 |
It’s official, Xiaomi is the king of affordable smartphones. The mobile giant just confirmed the official status of it’s Redmi Note 2 smartphone which sports the currently high-end topmost Mediatek processor, the Helio x10 chipset, and guess the price? Just 125USD (799 yuan). Devices running MediaTek’s Helio X10 chipset are known to cost at least twice the price of Xiaomi Redmi Note 2, example include another top smartphone Meizu MX5, HTC One M9 and even a family of Huawei P8.
Xiaomi Redmi Note 2 features a 5.5″ with the signature Xiaomi design language, with offerings of various color options. The device would have two variants, a 16GB and 32GB versions. The 32GB version is expected to be a bit more expensive than the 16GB, with an official MSRP of 999 yuan or $155, and will bfeature a higher clockspeed.
Other impressive features of the Redmi Note 2 include; 2GB of RAM, a powerful 3,060mAh battery which should give you a high 24 hours of hectic usage, Android 5.0 Lollipop and some great photographic properties with a 13MP rear shooter as it’s main camera, and a 5MP front shooter for selfies.
Availability of the device is expected soonest with the first orders being expected to be made available tomorrow – August 14th in China. There’s somewhat of a bad news though, the Redmi Note 2 will officially be sold in China only (at least initially), while India should be next in line, following Xiaomi’s gallant entrance to the huge sub-continent where it now ‘makes phones’. Other European, American and African countries would be expected to
import via 3rd party sites. | https://thegloor.com/xiaomi-redmi-note-2-with-super-powerful-helio-x10-chip-costs-125/ | CC-MAIN-2019-04 | refinedweb | 279 | 59.03 |
The EPiServer CMS Change Log System (referred to simply as ”Change Log” in the rest of this tech note) is a facility where changes to an EPiServer CMS site can be logged. For example, all changes to pages, files and directories are currently logged in the Change Log to support the Mirroring and OnlineCenter features in EPiServer CMS 6 and also act as a general audit mechanism.
The Change Log can be viewed from EPiServer Admin Mode. On the Config tab under the Tool Settings heading you will see the Change Log menu item. Select the View tab to view the Change Log. The Change Log can be filtered by change date, category, action and changed by. (e.g. Mirroring).
Due to probable changes to the Change Log API's in future releases of EPiServer CMS, no programmatic interface is officially support in the current release. For those of you who are curious the classes and interfaces for the Change Log can be found in the EPiServer.ChangeLog namespace (EPiServer.dll). Breaking changes are likely in the next version so if you use the API it is at your own risk. | https://world.episerver.com/documentation/Items/Tech-Notes/EPiServer-CMS-6/EPiServer-CMS-60/Change-Log/ | CC-MAIN-2020-16 | refinedweb | 191 | 69.41 |
Because source files always contain more than 20x more code than the header files, I would like to follow a different approach here in explaining the code. I will break down the source file into blocks, one by one.
As we mentioned earlier, any C++ file or even header file must start with the
include statements. You don't have to include everything; some of the
include statements will be there by default but others might be needed while you are building up the code.
Even if your game example is different and you wanted to have different functionalities, you might need to include more headers.
#include "Bellz.h" #include "Gladiator.h" #include "GameDataTables.h" #include "PaperSpriteComponent.h" #include ...
No credit card required | https://www.oreilly.com/library/view/mastering-unreal-engine/9781785883569/ch02s05.html | CC-MAIN-2019-13 | refinedweb | 123 | 66.64 |
Originally posted @ visit and download the sample code:
Do you need to check if the route name given exists in Laravel 8? Sometimes we need to check the route name if exists. For example, if you have a dynamic menu that will call the available routes in your Laravel 8 application then show it as a menu. But if you don't check the route it will throw an error if one of the routes was deleted/removed.
Using Route:has('route_name') provided by Laravel Framework it will help us to check and determine if the route exists.
Okay here is a simple solution:
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; use Illuminate\Support\Facades\Route; class TestController extends Controller { public function index() { if(Route::has('route_name')) { //do something here } } }
Now you can validate if the route name exists in your Laravel application. I hope this tutorial can help you. Kindly visit here if you want to download this code.
Happy coding :)
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/codeanddeploy/how-to-check-if-route-name-given-exists-in-laravel-8-aj5 | CC-MAIN-2022-27 | refinedweb | 166 | 64.51 |
Today, I will write about the remaining 10 rules to performance. Ten rules seem to be a lot but only two of them have actual content.
Here are the remaining rules to performance:
Let's have a closer look at rule Per.11 and rule Per.19.
The example shows the simple gcd algorithm which calculates the greatest common division at runtime. gcd uses for its calculation the euclidean algorithm.
int gcd(int a, int b){
while (b != 0){
auto t= b;
b= a % b;
a= t;
}
return a;
}
By declaring gcd as constexpr gcd can quite easily make a function which can run at compile time. There are only a few restrictions on constexpr functions in C++14. gcd must not use static or thread_local variables, exception handling, goto statements, and all variables have to be initialised. Additionally, variables have to be of literal type.
Let's try it out.
// gcd.cpp
#include <iostream>
constexpr int gcd(int a, int b){
while (b != 0){
auto t= b;
b= a % b;
a= t;
}
return a;
}
int main(){
std::cout << std::endl;
constexpr auto res = gcd(121, 11); // (1)
std::cout << "gcd(121, 11) = " << res << std::endl;
auto val = 121; // (3)
auto res2 = gcd(val, 11); // (2)
std::cout << "gcd(val, 11) = " << res2 << std::endl;
std::cout << std::endl;
}
Declaring gcd to a constexpr function does not mean that it has to run at compile time. It means that gcd has the potential to run at compile time. A constexpr function has to be executed at compile if used in a constant expression. (1) is a constant expression because I ask for the result with a contexpr variable res (2). (3) is not a constant expression because res2 is not a constant expression. When I change res2 to constexpr auto res2, I would get an error: val is not a constant expression. Here is the output of the program.
Once more, here is the key observation. You can use a constexpr function at runtime and at compile time. To use it at compile its arguments have to be constant expressions.
You don't believe me that the line (1) will be executed at compile time? Here is the proof: the assembler instructions to line (1) generated from gcc 7.3 with maximum optimisation level. I created the output with the help of the Compiler Explorer from Matt Godbolt.
The function call gcd(121, 11) boils down to its result: 11.
Templates are often used to make the decision at compile time. There is a nice example of this idea in the C++ Core Guidelines. A popular technique is to provide a handle for storing small objects on the stack and big objects on the heap. Here is the example:
template<typename T>
struct Scoped { // store a T in Scoped
// ...
T obj;
};
template<typename T>
struct On_heap { // store a T on the free store
// ...
T* objp;
};
template<typename T>
using Handle = typename std::conditional<(sizeof(T) <= on_stack_max), // (1)
Scoped<T>, // first alternative (2)
On_heap<T> // second alternative (3)
>::type;
void f()
{
Handle<double> v1; // the double goes on the stack
Handle<std::array<double, 200>> v2; // the array goes on the free store
// ...
}
How does it work? std::conditional in line (1) is a ternary operator from the type-traits library. In contrast to the ternary operator at runtime (a ? b : c) std::condition will be executed at compile time. This means if std::conditional<(sizeof(T) <= on_stack_max) is evaluated to true, the first alternative is used at runtime. If not, the second alternative.
a ? b : c
What does that mean? If you read an int from memory more than the size of one int is actually read from memory. An entire cache line is read from memory and stored in a cache. On modern architectures, a cache line has typically 64 bytes. If you now request an additional variable from memory and this variable is on previous cache, the read directly uses this cache and the operation is much faster.
A data structure such as std::vector which stores its data in a continuous memory block, is a cache line data structure because each element in the cache line will typically be used. A std::vector has a size and a capacity and can grow only in one direction. The capacity is greater than its size and indicates when it is necessary to allocate memory. This argumentation also applies to std::vector and std::array although std::array has no capacity.
My argumentation to std::vector will not hold for a std::list or a std::forward_list. Both containers consist of nodes which are double or single linked.
The std::forward_list can only grow in one direction.
std::deque is something in between because it is kind of a double linked list of small arrays.
This was the theory of cache lines. Now I'm curious. Does it make a difference to read and accumulate all elements from std::vector, a std::deque, std::list, and std::forward_list. The small program should give an answer.
// memoryAccess.cpp
#include <forward_list>
#include <chrono>
#include <deque>
#include <iomanip>
#include <iostream>
#include <list>
#include <string>
#include <vector>
#include <numeric>
#include <random>
const int SIZE = 100'000'000;
template <typename T>
void sumUp(T& t, const std::string& cont){ // (5)
std::cout << std::fixed << std::setprecision(10);
auto begin= std::chrono::steady_clock::now();
std::size_t res = std::accumulate(t.begin(), t.end(), 0LL);
std::chrono::duration<double> last= std::chrono::steady_clock::now() - begin;
std::cout << cont << std::endl;
std::cout << "time: " << last.count() << std::endl;
std::cout << "res: " << res << std::endl;
std::cout << std::endl;
std::cout << std::endl;
}
int main(){
std::cout << std::endl;
std::random_device seed; // (1)
std::mt19937 engine(seed());
std::uniform_int_distribution<int> dist(0, 100);
std::vector<int> randNumbers;
randNumbers.reserve(SIZE);
for (int i=0; i < SIZE; ++i){
randNumbers.push_back(dist(engine));
}
{
std::vector<int> myVec(randNumbers.begin(), randNumbers.end());
sumUp(myVec,"std::vector<int>"); // (2)
}
{
std::deque<int>myDec(randNumbers.begin(), randNumbers.end());
sumUp(myDec,"std::deque<int>"); // (3)
}
{
std::list<int>myList(randNumbers.begin(), randNumbers.end());
sumUp(myList,"std::list<int>"); // (4)
}
{
std::forward_list<int>myForwardList(randNumbers.begin(), randNumbers.end());
sumUp(myForwardList,"std::forward_list<int>"); // (5)
}
}
The program memoryAccess.cpp first creates 100 Million random numbers between 0 and 100 (1). Then it accumulates the elements using a std::vector (2), a std::deque (3), a std::list (4), and a std::forward_list (5). The actual work is done in the function sumUp (6). I assume that Linux and Windows use a quite similar implementation of std::accumulate; therefore the access time of the elements is the dominant factor for the overall performance.
template<class InputIt, class T>
T accumulate(InputIt first, InputIt last, T init)
{
for (; first != last; ++first) {
init = init + *first;
}
return init;
}
I compiled the program with maximum optimisation and executed it on Linux and Windows. I'm not interested in the comparison between Linux and Windows because that would be a comparison between a desktop PC and a Laptop. I'm interested in the relative performance numbers of the four containers. Here are they:
To get the big picture. Here are my observations:
I don't want to overestimate my performance numbers but one key observation is obvious. The more cache line aware the container is, the faster is the access time of the elements: std::vector > std::deque > (std::list, std::forward_list).
That was it to performance. With the next post, I will start to write about the rules to concurrency. I'm quite curious.
Thanks a lot to my Patreon Supporters: Eric Pederson, Paul Baxter, Sai Raghavendra Prasad Poosa, Meeting C++, Matt Braun, and Avi Lachmish. 256
All 2993009
Currently are 131 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more... | http://modernescpp.com/index.php/c-core-guidelines-the-remaining-rules-to-performance | CC-MAIN-2019-51 | refinedweb | 1,302 | 57.57 |
Objective of the Module
Introduction.NET offers a variety of collections, such as ArrayList, Hashtable, Queues and Dictionaries. Collections are abstractions of data algorithms. An ArrayList abstracts a dynamic array, a Hashtable collection abstracts a lookup table, a Queues collection abstracts queues and so on. In addition, collections implement the ICollection, IEnumerable and IClonable interfaces. The detailed specification for each collection is found under the System.Collection namespace.ArraysList CollectionAn ArrayList is a dynamic array and implements the IList interface. Each element is accessible using the indexing operator. Whereas the traditional array has a fixed number of elements, ArrayList elements can be added or removed at run time.We can an object from the ArrayList using a general type of syntax as follows:
Here you can use the new keyword to create an instance of the ArrayList object. You don't need to specify the size. Once you have an empty ArrayList object, you can use the Add() method to add elements to it as in the following:
Each new item in the ArrayList is added to the end of the list, so it has the largest index number. If you wanted to add an item to the middle of the list, then you can use Insert() with a numeric argument as follows:
You can also remove members of the ArrayList using either Remove() or RemoveAt() methods as in the following.
Benefits of ArrayLists
Limitations of ArrayListsThe flexibility of an ArrayList comes at the cost of performance. Since memory allocation is very expensive, the fixed number of elements of the simple array makes it much faster to work with.Note: an ArrayList is slower and more resource-intensive than a conventional array.Simple ArrayList ExampleThe following example shows an array list with an undefined size. Elements are adding dynamically depending on the requirements. We are adding elements via the Add() method as well as using the Insert() method at a specific location. Later we are displaying all the elements by iterating using a for loop.
After build-up and the running of this program, the output of the following program is as in the following: Figure 1.1 – Array ListHashtablesIn some respects, Hashtable objects are quite similar to an ArrayList object except they are not required to use a numerical index. Instead, we can use a text key that can be numeric. You can create a Hashtable object using the same syntax as an ArrayList as in the following:
Once it is created, you need to specify the key/value pairs. Remember that the key is like an index for the entry and the value is the data we are storing. We store each elements using the Add() method with the following syntax as in the following:
A Hash table with numbers or dates for the keys are written as in the following:
Note: you need to convert DateTime index values when they are used as an index for a Hashtable.To read elements, you just need to specify the key and value is returned. The following code puts the value “india” into a variable named Country.
Benefits of Hashtable
Limitations of ArrayLists
Simple Hashtable ExampleThe following Hashtable program manipulates with a date index. First we define a Hashtable object and add some predefined dates to the object. Thereafter we create a DateTime type variable to store user input and finally we display the output.
After successfully compiling the program mentioned previously, the output is as in the following: Figure 1.2: HashtableBitArrayBit values are 1 and 0, where 1 represents true and 0 is false. A BitArray collection provides an efficient way to store and retrieve bit values. It allows us to perform bitwise operations as well as count the total number of bits. The contents of this collection are not Objects but Boolean values because BitArray can hold a large number of Boolean values called bits. One of most significant features of a BitArray is that they are resizable and that is useful if you don't know the number of bits needed in advance. You can create a BitArray object using the same syntax as defined for other collections. A BitArray provides several constructors. Here we are defining 7 bits in length in the BitArray as in the following.
Note: The static members of BitArray are thread-safe whereas instance members are not.Simple BitArray ExampleThe following program shows.
The following is a display of the results of the initialized and configured bits: Figure 1.3 – Bit ArrayThe BitArray collection implements the properties and methods of the ICollection interface. The following table shows the properties and methods of BitArray:
View All | http://www.c-sharpcorner.com/UploadFile/ajyadav123/overview-of-collections-in-net/ | CC-MAIN-2017-43 | refinedweb | 774 | 54.42 |
System.Collections.Concurrent.ConcurrentQueue<T>.GetEnumerator():
> The enumeration represents a moment-in-time snapshot of the contents of the queue.
> It does not reflect any updates to the collection after GetEnumerator was called.
Snapshot semantics allow a degree of sanity when using ConcurrentQueue<T> from multiple threads.
Unfortunately, Mono's ConcurrentQueue<T>.GetEnumerator() does not implement snapshot semantics:
> IEnumerator<T> InternalGetEnumerator ()
> {
> Node my_head = head;
> while ((my_head = my_head.Next) != null) {
> yield return my_head.Value;
> }
> }
If you're using `yield return`, you're NOT getting snapshot semantics.
The same is true of ConcurrentDictionary`2, ConcurrentStack<T>, etc. All of the Concurrent types should be reviewed for snapshot semantics.
I fail to see the relationship between using yield return and having snapshot semantics.
Under which case you think it will fail to return a proper snapshot of the queue at the time InternalGetEnumerator was called?
@Kumpera: Consider the following app:
using System;
using System.Threading;
using System.Collections.Concurrent;
class App {
static ConcurrentQueue<int> Q = new ConcurrentQueue<int> ();
const int Max = 100;
public static void Main ()
{
var a = new Thread (Add);
var r = new Thread (Remove);
var l = new Thread (Log);
a.Start ();
r.Start ();
l.Start ();
a.Join ();
r.Join ();
l.Join ();
}
static void Add ()
{
for (int i = 0; i < Max; ++i) {
Console.WriteLine ("A: {0}", i);
Q.Enqueue (i);
}
}
static void Remove ()
{
for (int i = 0; i < Max; ++i) {
int v;
while (!Q.TryDequeue (out v)) {
}
Console.WriteLine ("R: {0}", v);
}
}
static void Log ()
{
for (int i = 0; i < Max; ++i) {
Console.WriteLine ("L: {0}", string.Join (", ", Q));
}
}
}
It creates 3 threads to add, remove, and log the contents of the queue. As per snapshot semantics, we should expect that the logged output should never be "invalid".
> $ mono --version
> Mono JIT compiler version 3.8.0 ((no/62a857e Wed Aug 13 00:46:20 EDT 2014)
> $ mcs bxc-22180.cs
> $ mono bxc-22180.exe
> A: 0
> L:
> ...
> R: 0
> ...
> L: 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 31, 32, 33, 34
> ...
Actual output will vary on every execution, but *clearly* the above L: message is invalid: we only ever add one value of 0, so there CANNOT be 27 entires for 0, not with a proper snapshot.k
A different run, a different result, but still some invalid output:
> L: 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 21, 22, 23, 24, 25, 26, 27, 28, 29
If you run the test and see multiple entries with the same value, or entires that are not constantly increasing, you're seeing the bug.
Fixed in master | https://bugzilla.xamarin.com/22/22180/bug.html | CC-MAIN-2021-39 | refinedweb | 455 | 66.13 |
Hey this is my first post here so high everybody! OK, now that greetings are out of the way I can tell you guys (or gals) my problem. So, I've compiled everything without error and i can't run my program until I type something and press enter. There is more to the progrm than just typing one thing and I even put 'cin.ignore()' at the end. I'm not sure what to do. (Oh and by the way my program is a text-based game).
-Arby
p.s. Here's my code
I know it's a stupid idea me and my friend were high (am I allowed to say that?) when we thought of it but I was bored and decided to do it.I know it's a stupid idea me and my friend were high (am I allowed to say that?) when we thought of it but I was bored and decided to do it.Code:
#include <iostream.h>
using namespace std;
int main()
{
int b;
char string[3];
//going to make a text game called "Mrs. Jones Pro Student";
cout<<"Please enter your initials \n";
cin.getline ( string, 3, '\n' );
cout<<"Welcome "<< string<<"! You are about to begin the exciting adventure of MRS. JONES PRO STUDENT!!!\n"
"Mrs. Jones is an evil religion teacher at St.Catharine's elementary school.\n"
"She is obbsesed with making sure that her students keep their shirts tuck in!\n"
"Are you ready to begin?\n";
cin>> b;
if ( b == 1 )
{cout<<"Than let us begin.\n";}
else if ( b == 0 )
{cout<<"THE END\n";}
else
{cout<<"I don't understand, please use 1 for yes and 0 for no\n";}
cin.ignore();
} | http://cboard.cprogramming.com/cplusplus-programming/65692-cant-run-program-printable-thread.html | CC-MAIN-2015-18 | refinedweb | 286 | 94.05 |
In version 3.0 of my address book I'm utilizing tkinter. The only problem thus far I'm having is that I have a very ugly display on a child window that shows the search result for a query. I'm getting
{Shoiukamp0@gmail.com[/email], Phone: (111)555-2222, Address: 122 C---- Ct., Essex, VT']}
The issue is that I'm looking for the text without the curly braces and list brackets, I tried substringing it but then nothing is displayed. Below are the segments of code related to this, note that searchfor for is a class method, also not that I'm aware that modifying the class attributes is not the best way to do this, but it worked best for me to make the child's label(this strategy is not my issue at hand).
def searchfor(self): query= self.search.get().capitalize() if query in ab.book: Interface.result=query.capitalize(),':',ab.book[query] else: Interface.result='Query not found.' print(Interface.result) show() class Display_contact(Frame): '''Show a contact''' def __init__(self,master): super(Display_contact, self).__init__(master) self.grid() self.create_widget() def create_widget(self): self.contact=Label(self,text=Interface.result) self.contact.grid()
How could I go about making Interface.result ab.book[query] without the {[]}?
Thanks for the advice!
ugh, pardon my 1am grammar in the title
', '.join(Interface.result)
duh, of course... I'm very disappointed in myself. Thanks pyTony.
oh, almost. I'm still getting the []
changed Label to: self.contact=Label(self,text=''.join(Interface.result))
just a reminder that it's originally this:
'Email: %s\n, Phone: %s\n, Address: %s' % (self.email.get(),self.phone.get(),self.adrs.get())
How about to change
Interface.result=query.capitalize(),':',ab.book[query]
from tuple to
Interface.result=query.capitalize()+':'+' ,'.join(ab.book[query])
thanks, I'll try that, but I can't at the moment because of a semi-related issue that, for unknown reasons just appeared.I'm getting this
Traceback (most recent call last): File "E:\AddressBook\abclass.py", line 103, in <module> adb=pickle.load(f) EOFError
for this:
import pickle f=open('ABOOK.dat','rb+') adb=pickle.load(f) ab=AddressBook(adb)
idk what this is. Any idea why that would happen, especially when it was fine before?
ABOOK.dat only contains a dictionary
Did you check the size of file, maybe it have been truncated to 0 length (eg opened for writing and closed without writing).
I'm actually positive that happened size=0KB. rrrr. how do I fix this?
had it fixed for a moment. The answer for anybody with this problem in the future is to open it interactively, place something in it, and close it.
>>> import pickle >>> pickle.dump({'Joshua':'Me'},f) >>> f.close()
note that f was already assigned to open('filepath.extension','wb+')
but now, after truncating it to 0 by an error in the code again, this is not working...
You must save it again or restore from backup, I think. You could try to put reading the old pickle inside try.. except EOFError: # make empty dictionary block.
pyTony, that does work.
import pickle try: f=open('ABOOK.dat','rb+') adb=pickle.load(f) ab=AddressBook(adb) except EOFError: f=open('ABOOK.dat','wb+') empdict={} pickle.dump(empdict,f) f.close() f=open('ABOOK.dat','rb+') adb=pickle.load(f) ab=AddressBook(adb)
Thanks, and that'll be good for future reference for others.
I meant you let normal saving routine to do the saving
import pickle try: f=open('ABOOK.dat','rb+') adb=pickle.load(f) ab=AddressBook(adb) except EOFError: adb={} ab=AddressBook(adb)
Have you by the way followed advice in by signature linked post and looked code in code snippets section? i have posted quite a few myself like.
I haven't but I will now, thanks | http://www.daniweb.com/software-development/python/threads/379750/fixing-an-label-ugly-display-on-tkinter | CC-MAIN-2014-15 | refinedweb | 644 | 52.87 |
I wrote a simple .py program for
def main(): print ("Hello World")
That's it. I saved it as hello.py
But when i try to run it
I get this error:
>>> hello.py Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> hello.py NameError: name 'hello' is not defined
Please Help me. I need the power of python. This happens with every program. I can define functions in the command-line and run them fine, but I can't do advanced stuff.
Any help would be appreciated.
P.S: Even if the code is error free does not work. | https://www.daniweb.com/programming/software-development/threads/185273/can-t-run-py-programs | CC-MAIN-2017-26 | refinedweb | 104 | 88.02 |
The specific prescriptive nature of the PA means, of course, that not every customer will be able to deploy it (for example, customers without multiple datacenters). And some of our customers have different business requirements or other needs which necessitate a different architecture. If you fall into those categories, and you want to deploy Exchange on-premises, there are still advantages to adhering as closely as possible to the PA, and deviating only where your requirements widely differ. Alternatively, you can consider Office 365, where you can take advantage of the PA without having to deploy or manage servers.
The PA removes complexity and redundancy where necessary to drive the architecture to a predictable recovery model: when a failure occurs, another copy of the affected database is activated.
The PA is divided into four areas of focus:
- Namespace design
- Datacenter design
- Server design
- DAG design
Namespace Design
In the Namespace Planning and Load Balancing Principles articles, I outlined the various configuration choices that are available with Exchange by using regional namespaces. Ultimately your decision depends on your network topology and the associated cost with using an unbound model; for example, if you have datacenters located in North America and Europe, and Safety Net 192GB of memory
- a battery-backed write cache controller
- 12 or more large form factor drive bays within the server chassis
Additional drive bays can be deployed per-server depending on the number of mailboxes, mailbox size, and the server’s scalability.
Each server houses a single RAID1 disk pair for the operating system, Exchange binaries, protocol/client logs, and transport database. The rest of the storage is configured as JBOD, using large capacity 7.2K RPM serially attached SCSI (SAS) disks (while SATA disks are also available, the SAS equivalent provides better IO and a lower annualized failure rate)..
Database Availability Group Design
Within each site resilient datacenter pair you will have one or more DAGs..
Note: While your environment may not use IPv6, IPv6 remains enabled per IPv6 support in Exchange.
Witness Server Placement
Ultimately,.
Data Resiliency number) when availability is compromised..
To protect against accidental (or malicious) item deletion, Single Item Recovery Exchange 2016 offers a wide variety of architectural choices for on-premises deployments, the architecture discussed below is our most scrutinized one ever. While there are other supported deployment architectures, they are not recommended.".
Regarding my previous comment, this is actually mentioned in the article and was in the 2013 version too. I should learn to read the entire article before commenting on it. :)
No worries Jetze! It is good to see more flexibility being added into Exchange. I have been pretty satisfied with Exchange 2010 and really look forward to Exchange 2016. I will be happy when Exchange gets away from ESE and goes to SQL hopefully in my lifetime…
Thanks really good article.
You’re funny, Brad_Voris. I can save you the wait, we have no plans to move to SQL.
Ross
Yet… :)
Hey Ross. I feel like there’s always confusion with regards to the Exchange PA articles and wondering if you can update them. I think you should replace the word "Disk" with "Volume" where appropriate..
@MichaelCramer, to us a ‘disk’ or a ‘volume’ in a PA design is a single individual physical hard drive. If you had 10 drives in a server being used for mailbox database/log storage then those would be 10 ‘volumes’ or 10 ‘disks’, each individually Bitlockered
and not striped or mirrored together in any way shape or form.
@Brian Day)
@MichaelCramer – JBOD simply means just a bunch of disks. Many attribute that to also mean that the disks are combined together to form a larger unit, but that isn’t required. We use each disk independently. In other words a volume = disk.
Ross
The idea about using JBOD is that fault tolerance is established in the DAG, with multiple database copies, not with fault tolerant disk arrays which make your storage cost go needlessly higher. If a single disk crashes you have at least two other copies
of each database and log stream stored on the lost disk. Simply replace it and reseed. Your end users don’t notice much of anything, and your hardware configuration and budget are less complex.
Thanks guys for your explanations. Just real quick, though.
Does that mean the Exchange "On Disk" view could look something like this in the PA?
Can’t delete old link.
So the PA could/would look something like this "on disk"?
@MichaelCramer – yes, though I would recommend using the calculator to create the layout.
Yeah that is unfortunate.
I still prefer working with Exchange over IBM (Lotus) Notes/Domino…
I know its been discussed many times(SQL over ESE)… a person can dream can’t they?
Thanks Ross.
I’ve seen rumors that Exchange 2016 can be used with 196GB RAM per Server. This article still lists 96GB – so the same as it was in 2013. Are 96GB still the recommended size?
@Peter Forster – Currently, our recommended maximum CPU and memory guidance for Exchange 2016 is identical to Exchange 2013 – 24 cores and 96GB of memory. As we deploy and get run-time on new infrastructure in Office 365, we may adjust those requirements.
Ross
Question about remote access and networking:.
@pjhutch, with the exception of Edge Transport, it hasn’t been supported to put Exchange Servers into the DMZ since Exchange 2003. See here:
Many organisations choose to deploy Exchange behind a reverse proxy solution, such as TMG or WAP. | https://blogs.technet.microsoft.com/exchange/2015/10/12/the-exchange-2016-preferred-architecture/ | CC-MAIN-2018-17 | refinedweb | 920 | 54.22 |
This is a package to manage sessions stored in encrypted files (with AES), using UUIDv4 for client identification for Express.js. This package it's a newer version writted from zero based of session-crossover package (now deprecated). This package is developed with typescript and contains all required
*.d.ts definitions inside it.
npm install --save suspicious-session
express();instance, and use the middleware as follows:
import express from 'express'; import { suspiciousSession } from 'suspicious-session'; // Create the express instance const app = express(); // Use the middleware app.use(suspiciousSession({ path: './data', // Where the sessions will be stored name: 'i-see-you', // [ default = 'session-id' ] Name of the cookie to create maxAge: 15, // [ default = 30 ] Time of session duration (in minutes) algorithm: 'aes-256-ccm', // [ default = 'aes-128-ccm' ] AES algorithm do you want to use /** An optional object in case if you want to change the way * of the library creates the cookies. */ cookieOptions: { secure: false } }));
The core of the package resides in
req.session, which contains the necessary methods to manage the current sessions. All operations about sessions are available in that object. These are some examples of usage:
app.get('/create', async (req, res) => { // Create a new session await req.session.create(); // Add inside an object await req.session.current().save({ id: 543, nick: 'nadja', typeUser: 4 }); // Ends the request res.end(); });
app.get('/rewind', async (req, res) => { // Get if this connection has an active sesion const exist = !!req.session.current(); // Rewind the expiration time if (exist) { req.session.rewind(); } // Ends the request res.end(); });
app.get('/destroy', async (req, res) => { // Get if this connection has an active sesion const exist = !!req.session.current(); // Destroy the current session if (exist) { await req.session.destroy(); } // Ends the request res.end(); });
app.get('/read', async (req, res) => { // Get the current active session const current = req.session.current(); if (current) { // Read the session content const data = await current.load(); res.json(data); } else { // Return null res.json(null); } });
The configuration of the library it's simple, but quite flexible. As you see at the top, the parameters are just the necesary for simple implementations, but considerates some cases when you could need an specific behavior. The options are according to this interface:
export interface Options { /** * The path of the folder in where `session-crossover` will adds the new sessions to be created. * If the folder doesn't exists, the library will be create the folder while implements the * middleware in the `express` instance. */ path: string; /** * The name of the cookie which the session's encrypted UUID will be stored in the client. By default * the name it's `"session-id"`. */ name?: string; /** * The lifetime duration (in minutes) of every session created. By default it's setted to * 30 mins of duration. */ maxAge?: number; /** * The AES-Algorithm to be used for encrypt the data and the cookie value. By default, the algorithm * used is `"aes-128-ccm"`. */ algorithm?: AESAlgorithm; /** * An optional object with a custom configuration of the cookie generated, in case if you need to * set an specific parameter. */ cookieOptions?: CookieOptions; }
this.algorithm:
This parameter tells to the library which AES encryption algorithm do you want to use for encrypt the sessions. By default, use
"aes-128-ccm", but if you want to use another algorithm, these are the available:
"aes-128-ccm"
"aes-128-gcm"
"aes-192-ccm"
"aes-192-gcm"
"aes-256-ccm"
"aes-256-gcm"
"chacha20-poly1305"
this.cookieOptions:
In certain cases, it's probably that you want to create the cookies with a different settings than the default used by the library. The default values are:
const cookieOptions = { httpOnly: true, sameSite: 'strict', secure: true, path: '/', };
If you want to override some values, simply you can add only the parameter do you want to change (keeping the default values intact). For example, in case when you only need to change the
"secure" parameter to
false, then:
app.use(suspiciousSession({ path: './data', maxAge: 15, cookieOptions: { /** * This override the default value * "secure: true" to "false". */ secure: false } }));
...or in other cases when you need to add a parameter without a default value, simply you can add that value as follows:
app.use(suspiciousSession({ path: './data', maxAge: 15, cookieOptions: { /** * The parameter "signed" doesn't has a * default value assigned. */ signed: true } }));
The available values to set are (see the express.js for details): | https://openbase.com/js/suspicious-session | CC-MAIN-2022-27 | refinedweb | 721 | 56.35 |
I had to write a program that read the encrypted message... ":mmZ\dxZmx]Zpgy"
This was accomplished. The completed program is below, however it is set up now so that a user will input an encrypted message and then the program will decrypt the message. I dont want this. pretty much I want to run the program and have it automatically consider ":mmZ\dxZmx]Zpgy"
as the message that is already input. How do I do that?
Code:#include<iostream> using namespace std; int main() { char in[100]; char out[100]; int key,count; cout<<"Enter encrypted code: "; cin.getline(in,100); for(key=1;key<=100;key++) { for(count=0;in[count]!='\0';count++) { if(in[count]-key<32) out[count]=in[count]-key-32+127; else out[count]=in[count]-key; } cout<<"Key "<<key<<" Decoded Message: "; cout<<out<<endl; } system("pause"); return 0; } | https://cboard.cprogramming.com/cplusplus-programming/137958-programming-input-question.html | CC-MAIN-2017-30 | refinedweb | 145 | 59.9 |
Databrowse¶
Deprecated since version 1.4: This module has been deprecated.
Databrowse is a Django application that lets you browse your data.
As the Django admin dynamically creates an admin interface by introspecting your models, Databrowse dynamically creates a rich, browsable Web site by introspecting your models..Changed in Django 1.4: Please see the release notes
Since Django 1.4, it is possible to register several models in the same call to register().
It doesn’t matter where you put this, as long as it gets executed at some point. A good place for it is in your URLconf file (urls.py).
Change your URLconf to import the databrowse module:
from django.contrib import databrowse
...and add the following line to your URLconf:
(r'^databrowse/(.*)', databrowse.site.root),
The prefix doesn’t matter – you can use databrowse/ or db/ or whatever you’d like.
Run the Django server and visit /databrowse/ in your browser.
Requiring user login¶
You can restrict access to logged-in users with only a few extra lines of code. Simply add the following import to your URLconf:
from django.contrib.auth.decorators import login_required
Then modify the URLconf so that the databrowse.site.root() view is decorated with django.contrib.auth.decorators.login_required():
(r'^databrowse/(.*)', login_required. | https://docs.djangoproject.com/en/1.4/ref/contrib/databrowse/ | CC-MAIN-2015-40 | refinedweb | 211 | 60.92 |
I.
Hi
Thanks for your reply. I really didn't catch what you said.
a moderator?
how can i get in touch?
I've already posted an order, on the freelance section but so far was not successful.
Just wait for a moderator to reply to your thread or you can send the problem to service desk yourself.
I think this is a feature request to enhance the MT5 optimization tool to have "Conditional Parameter Optimization", rather than simply doing code changes.
I doubt code changes will produce the effect that you want, and I also doubt in the short to medium term, you will have the solution ready to your issue.
There will always be a forum moderator around. He is more experienced in issues than any of the forum members.
input bool Psar_Active=... #define STEP_DEFAULT=0.02; input double step=STEP_DEFAULT; #define MAX_DEFAULT=0.20; input double max=MAX_DEFAULT; : if(!Psar_Active && ( step != STEP_DEFAULT || max != MAX_DEFAULT )) return INIT_PARAMETERS_INCORRECT;
I have a problem like this in MT4.
I have some boolen parameter in a expert. One of them If I do not set any option for optimization and the default value of this parameter is false, a result will be displayed, while if it is True, the result will not show any result and optimization work correctly.
when "Send Pending order?" parameter is false I have this result...
I have the exact same need as the OP:
If certain boolean input values are false, it negates the need for certain other inputs to be tested at all.
Currently wasting time, computing resources, and yes even cloud network budget ($) testing useless/irrelevant input parameters.
... Being that it has been some 3 years since this was posted, perhaps there have been some updates regarding this much needed feature (or a suitable workaround)?
Thanks much! =)
I used Williams code to "develop" below. I thought I could share with everyone who is visiting this topic to find solution.
Just modify as required obviously, it's just an example of how this could be implemented.
input group "Trailing Stop"; const input int iprofitToTrigger = 30; const input double iprofitIncreaseToTriggerAgain = 0.05; const input double iprofitToTake = 0.2; const input bool iincreaseProfitToTake = true; const input double iincreaseUntil = 0.7; const input double iincreaseBy = 0.05; // Check input parameters bool CheckOptimisationParameters() { // in the OnInit check for true/false and return ( INIT_PARAMETERS_INCORRECT ); if false // this ends EA early and stops optimiser from running this set of parameters. Copy below to OnInit(): // if( CheckOptimisationParameters() == false ) return ( INIT_PARAMETERS_INCORRECT ); // input group "Trailing Stop"; if ( iprofitToTake + iincreaseBy >= 1 ) return (false); if ( iprofitToTake + iincreaseBy >= iincreaseUntil ) return (false); if ( iincreaseUntil >= 1 ) return (false); return ( true ); } int OnInit() { if( CheckOptimisationParameters() == false ) return ( INIT_PARAMETERS_INCORRECT ); return(INIT_SUCCEEDED); }
You agree to website policy and terms of use
Hello Everyone
I have a problem in optimization passes. I've searched a lot but non of questions nor the answers in the forum was the
one solving this problem.
The problem is that i have some bool parameters in my inputs which when they are set as false there is
no need to optimize some other parameters.
for example if the "Psar_Active" parameter is set to false my EA won't use Parabolic SAR indicator in it's decision making process,
so optimizing the "step" and "max" (two other input parameters) of the indicator is useless (only when the Activation parameter is false).
unfortunately i couldn't find a way for this. so many of the optimization results are useless
and this make the whole optimization much less effective.
Lets assume that "Step" Parameter has 5 steps and "Max" Parameter has 4 steps the total steps considering the
"Psar_Active" parameter (which has two steps) is currently resulting to 40 passes (5*4*2), which 19 of them are useless
(the ones with "Psar_Active" ==false). if this problem could be solved the total passes should be reduced to 21 (5*4*1+1).
I would be great-full if we can figure this out. thank in advance for your help. | https://www.mql5.com/en/forum/298982 | CC-MAIN-2022-21 | refinedweb | 668 | 54.32 |
How to write a Logstash input plugin
Logstash is an event processing pipeline, which features a rich ecosystem of plugins, allowing users to push data in, manipulate it, and then send it to various backends.
While there is a multitude of plugins currently available for Logstash, perhaps, the one that fits your exact needs has yet to be created. That’s where you come in…To that end, this tutorial is to help walk you through the process of building your own.
We’re going to use the source code from the
stdin plugin to go through the process of creating an input plugin for Logstash. This tutorial expects a certain amount of Ruby knowledge, but, hopefully, you will find it fairly easy to follow.
Anatomy of an input plugin
For this example, we’ll be using the Logstash stdin input plugin. While this is a basic plugin, the design principles and requirements apply to any input plugin. This document will refer to the source code by line (with links to the relevant block) and will show inline examples.
The following is a line-by-line breakdown of the example plugin. Clicking the links will take you to highlighted parts of the code.
Encoding
It seems like a small thing, but please don’t omit adding
encoding: utf-8
to the top of your input plugin. Logstash depends on things being in UTF-8, so we put this here to tell the Ruby interpreter that we’re going to be using the UTF-8 encoding.
Require
A Logstash input plugin requires some parent classes that can be referenced through the indicated statements. The following
require statements are mandatory:
require "logstash/inputs/base"
require "logstash/namespace"
You may also need the
socket require statement if your plugin is going to obtain the local hostname by way of a
Socket.gethostname call (more here).
Of course, the plugin you build may depend on other code, or even gems. Just put them here along with these Logstash dependencies.
Plugin Body
Here, we’ll cover many subsections one by one.
Defining the Plugin
- The class name.
- Your class will need to be a sub-class of
LogStash::Inputs
- Your class needs to extend the
LogStash::Inputs::Baseclass
- You should name your class in a way that closely mirrors the plugin name, e.g.
LogStash::Inputs::Stdin
config_name
- This is the name your input plugin will call inside the input block.
For example, if I set
config_name "my_plugin", my corresponding Logstash block would look like this:
input { my_plugin {...} }
milestone
- Milestones are defined here. Basically,
- Milestone 1: it’s new and under development and may change
- Milestone 2: more stable, more backwards-compatible, more-widely used
- Milestone 3: even more backward compatible & this is likely enforced by automated tests.
- Milestone 0: Infrequently used, means that the plugin is under-supported.
- If it’s a new plugin, please start out by setting
milestone 1
Setting defaults
- Default codec
- Codecs are for processing input (and output) streams. You can serialize (or deserialize) the stream into a format Logstash expects. You may even want to write a codec if you have a specialized format to go with your input (not in the scope of this document).
- Setting a default codec is advised, especially if you expect only one kind of input format
- Codecs currently in code are here.
- Plugin initialization
- The register method is like the
initializemethod. It was originally created to enforce having
supercalled, preventing headaches for newbies. (Note: It may go away in favor of
initializeand with some enforced testing to ensure
superis called.)
publicThis means the method can be called anywhere, not just within the class. This is the default behavior for methods in Ruby, but it is called explicitly here anyway.
- Assign instance variables here. In the example,
@hostis defined here.
- Calling
fix_streaming_codecswill automatically override the assigned codec if you use the
plainor
jsoncodecs. This may or may not be needed for your plugin.
The
run method: Where the action is
- The flow here is fairly straightforward:
- Read data in
- Decode data by way of a codec
- Loop through resulting events
- For each event:
decorate(Add
@timestamp,
@versionfields, if not present)
- Any other processing happens here…
- Append that event to the queue
- Catch errors, if any (the
rescuecall)
Cleanup
How do I make my own plugin from this?
Since you now have a good overview of how a plugin is built, and what the flow looks like, you should be able to envision your path to a plugin that does what you want.
Most of what you will want to do will be in the
run method, or, at least, will be accessed from the
run method. You can write other methods, include other required gems or code, and basically get your plugin to do anything you want so long as you:
- Read the data in
- Decode that data with a codec (even if you use the
noop, don’t-do-anything codec).
- Loop through the resulting events and:
decoratethem with
@timestampand
@version, in addition to your other fields
- Do your magic here!
- Append your event to the queue. This is how your event makes it into the rest of the Logstash pipeline.
- Catch any errors that may come up
- Teardown when it’s closing time.
Testing
Write unit and integration tests to ensure that your plugin behaves as expected. Tests for existing plugins are in the Logstash code in the path spec/ in the input, codec, filter, and output directories. These files provide excellent examples from which to derive your own tests.
If you would like to submit your plugin to the greater Logstash community, please be sure to include tests! A few examples of input plugin tests (ranging from simple to more complex) are:
Summary
This is a simple example of how you could write your own Logstash input plugin. Your final product can be as simple or as complex as your needs require. Once you find how easy it can be to write your own input plugin, you are empowered to make Logstash work for you in new and exciting ways! And once you get your new plugins working, we’d love to hear about them? Just drop us a line on Twitter so we can share in your awesomeness.
Happy Logstashing! | https://www.elastic.co/blog/write-logstash-input-plugin | CC-MAIN-2017-09 | refinedweb | 1,050 | 69.62 |
How to create html table from code behind?
In this article I will explain how to create the html table from c# code behind and how to show database record in it.I am using .net framework 4 and sqlserver 2008 r2. you can work with any version you have.here I am creating a simple html table which will show the record.You can set the style of the table so that it looks well.
First of all thanks for taking your interest in my article.Here I am describing how to create html table from code behind and store data in it.
STEP 1:
first of all take the Asp.net empty web application from project template and give it any name.
STEP 2:
Now add a new webform to the website by right clicking the project name>add new item>choose webform>
give the name to the webform as Default.aspx
and click add.
STEP3:
as your .net set up is complete now open sqlserver management studio.
Create a database having name html and add a table having name thtml to it.
thtml contains three columns id,name,rollno. fill some data in the table.
for creating database open new query window and paste the below query:
create database html
select this query and execute this will creat the database having name html.
Now we have to add the table to it.so right click your database name select new query and create the table with below query:
create table thtml(id int,name varchar(50),rollno varchar(50))
select the full query and click execute.this will create the table having name thtml of three columns id,name,rollno.We will show the record of this table into html table.
now fill some data in it with
insert into thtml(id,name,rollno) values(1,'firstname','123')
insert into thtml(id,name,rollno) values(2,'hello','786')
insert into thtml(id,name,rollno) values(3,'ram','89')
insert into thtml(id,name,rollno) values(4,'seema','97')
insert into thtml(id,name,rollno) values(5,'kabir','234')
Now your database work is complete
STEP 4:
AS our database work in complete,back to .net open default.aspx.cs.
here we will write our code.
First of all add two name spaces at the top of page:
using System.Data;
using System.Data.SqlClient;
..
now we are creating a function which will create your html table like below.I hope you are able to write your connection string.If not don't worry.Go to view select server explorer.right click dataconnection and click Add Connection and then add the connection give the name of your server in server name(as it shows while you open sqlserver management studio) and select the authentication mode which you selected while creating the database in sqlserver then choose your database name and click ok.
now right click this connection then go to properies and copy the full connection string form there.
...
";
public void createtable()
{
SqlConnection con = new SqlConnection(@"Paste Your ConnectionString here");
con.Open();
SqlCommand cmd = new SqlCommand("select name,rollno from thtml", con);
SqlDataAdapter da = new SqlDataAdapter(cmd);
con.Close();
DataTable dt = new DataTable();
da.Fill(dt);
//now we filled our data table
//below is the code to create html table
string t = string.Empty;
t = "
Response.Write(t);
}
now we call this function on the page load event like below:
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
createtable();
}
}
..
Now your work is complete just run the application.you will se the html table on the web browser which shows your record.I am attaching the sample with it.
Hope you like this article.
Thanks and Regards
Shalini Rathore | http://www.dotnetspider.com/resources/45624-How-to-create-html-table-from-code-behind.aspx | CC-MAIN-2019-04 | refinedweb | 622 | 67.65 |
Name:
[171] Florian Mayer
Member: 96 months
Authored: 7 videos
Description: I'm a student from Vienna, the federal capital of Austria. Please visit `My Blog <>`_ ...
Decorators: Real World Example [ID:878] (2/3)
in series: Advanced Python
video tutorial by Florian Mayer, added 09/08
> second video about decorators is going to show you a real-world example on how they can be used. A decorator is marking functions as deprecated. Sorry for the abrupt end, though I went on just trying to find out how to enable line-numbers. And yes, the line-number is right.
import warnings def deprecated(func): def new_func(*args, **kwargs): warnings.warn("This function is old", DeprecationWarning, 2) return func(*args, **kwargs) return new_func @deprecated def i_am_old(foo, bar):: 1082 , Thanks!
again a very nicq video :P good explained =D
Good work
Thank you.
A very illustrative use case
Hi Florian,
The video ended abruptly before I think you done with your presentation.
The source line calling your function printed out. Was this your intention?
It's best if you can prepare your presentation so the text remains in a fixed position on the screen, so the viewer will not lose his place in your presentation. It may be possible to eliminate previous text that is not used in the presentation. It is very difficult for the viewer when the text scrolls, especially when it scrolls right of the screen. You have the luxury of moving the screen back if you are trying to find your place, a viewer doesn't have this luxury. It's best to plan ahead and set up your environment before you are in the middle of a demonstration. Such as explaining to the viewer that this example will require functions from the warning module and then show the import before you get to the point in the presentation when you are focusing on the main thrust of your presentation. It is distracting to have to have an after though taken care of in the middle of the presentation. It is much smoother for the viewer if this planned for and taken care of upfront so the viewer can concentrate on what you are really trying to show them.
Thanks,
Good Luck,
Bruce
; ^ >
The introduction to deprecators done in the first video was simple and brief, and I really liked the part where you compare the old way of doing decorators. Nice work! :)
I also really liked the deprecation decorator, it'll definitely be useful!
Wonderful fonts, easy to the audience's eyes.
The ShowMeDo Robot says - this video is now published, thanks for adding to ShowMeDo. | http://showmedo.com/videotutorials/video%3Fname%3D3370010 | CC-MAIN-2015-06 | refinedweb | 440 | 62.07 |
Last month, we discussed application servers, and in particular, the GlassFish app server. This month we will look at how to get started with GlassFish version 2 on Debian GNU/Linux. For other distributions, the procedure will be almost the same.
Installation of GlassFish
GlassFish v2 requires JDK 1.5 or a later version. So if you are using Fedora 9 or Ubuntu 8.4, you already have OpenJDK 6. You can check the version of Java with the
java -version command at a terminal prompt. This command will print the available version of Java on your system. For example, on my system the available version of Java is 1.4.2 as can be seen from the following terminal output:
mca05@debian:~$ java -version java version "1.4.2" gij (GNU libgcj) version 4.1.2 20061115 (prerelease) (Debian 4.1.1-20) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copyright conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
So I have to install JDK 1.5 or later before installing and configuring GlassFish. Verify that your
/etc/apt/sources.list file has the following entries and you are connected to the Internet:
deb http://security.debian.org/ etch/updates main contrib non-free
deb http://ftp.debian.org/debian/ etch main contrib non-free
deb-src http://ftp.debian.org/debian/ etch main contrib non-free
Then simply run the following command:
apt-get install sun-java5-jdk
This installs Sun's JDK 1.5 on your system. Now that the basic requirements are in place, download the GlassFish installer jar file and issue the following command:
java -Xmx256m -jar glassfish-installer-v2ur2-b04-linux.jar
This command will show a licence agreement window (GlassFish is dual-licensed under the CDDL and the GPL). After accepting the licence, the installer will create a directory named glassfish in your current working directory. Once the installation is complete, we need to set the server up before being able to use it.
Configuration and set-up
To set up GlassFish navigate to the glassfish directory from the terminal. Here you will see a file named setup.xml. This file is the input file for ant builder scripts that will configure GlassFish. setup.xml contains the following configuration property elements:
<property name="domain.name" value="domain1"/>
<property name="instance.name" value="server"/>
<property name="admin.user" value="admin"/>
<property name="admin.password" value="adminadmin"/>
<property name="admin.port" value="4848"/>
<property name="instance.port" value="8080"/>
<property name="orb.port" value="3700"/>
<property name="imq.port" value="7676"/>
<property name="https.port" value="8181"/>
<property name="glassfish.license" value="LICENSE.txt"/>
<property name="glassfish.zipfile" value="../glassfish-image-pe.zip"/>
<property name="glassfish.release.name" value="glassfish"/>
<property name="glassfish_pe.release.name" value="glassfish-pe"/>
<property name="glassfish.class" value="glassfish.class"/>
<property name="persistence-location" value="${install.home}/../glassfish-persistence"/>
<property name="glassfish-persistence.zipfile" value="glassfish-persistence-image.zip"/>
<property name="install.home" value="${basedir}"/>
<property name="glassfish_persistence.class" value="glassfish_persistence.class"/>
<property name="jarpack-task.jar" value="${install.home}/lib/Pack200Task.jar"/>
<property name="adminpassfile" value="${install.home}/passfile"/>
<property name="asadmin.prefs.file" value="${install.home}/config/asadminenv.conf"/>
<property name="asadmin.default.profile" value="developer"/>
<property name="asadmin.default.secure" value="false"/>
Before setting up the server, you need to make some changes in the property values according to your preferences. First of all, change the default admin.password property, which is set to 'adminadmin' (the admin.password entry in the snippet above). You need to remember this password, as it will be used for login and deploying applications later. You can also change the ports on which the admin console and applications will listen. Change them accordingly if they conflict with services that listen on the same ports.
After making changes in the configuration file, it’s time to start the build process. I will recommend a set-up process such that in the future you can easily update GlassFish with its new releases.
We will move glassfish to /opt/glassfish-v2, and then create a soft-link (ln -s /opt/glassfish-v2 /opt/glassfish) such that in the future we can easily upgrade it. Make the ant builder script executable by issuing the following command:
chmod u+x /opt/glassfish/lib/ant/bin/ant
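Putting the relocation steps above together, the full sequence looks like this (a sketch — it assumes the glassfish directory is in your current working directory and that you can write to /opt):

```shell
# Move the unpacked installation to a versioned directory,
# then point a stable symlink at it for painless upgrades.
mv glassfish /opt/glassfish-v2
ln -s /opt/glassfish-v2 /opt/glassfish

# Make the bundled ant script executable.
chmod u+x /opt/glassfish/lib/ant/bin/ant
```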
Now, to start the build process, issue the following command:
/opt/glassfish/lib/ant/bin/ant -f setup.xml
You should soon get a ‘build successful’ message as shown in Figure 1.
To start the server and check whether it is installed and configured successfully, issue the following command:
/opt/glassfish/bin/asadmin start-domain
…and point your browser to http://localhost:4848 (the admin.port configured earlier). You should get a login page as shown in Figure 2. If you don't, then recheck the whole procedure for possible errors.
For login, use the password you set for the admin.password property earlier.
Developing a simple Web application
In this section I will develop a simple dynamic Web application based on the Model View Controller (MVC) design pattern, which will generate prime numbers for its clients. An MVC design pattern divides an application into the following components:
- Model: The content or the actual data (or logic)
- View: The presentation of data to the clients
- Controller: It controls the application—for example, calling the model for business logic and then forwarding the request to ‘view’ for its presentation.
The Web application will be based on servlets and JSP. In very simple terms, servlets are server side applications that can dynamically extend the functionality of a Web server, and JSP is a presentation technology based on servlets. If you want to know more details about servlets and JSP then I’d recommend doing a Google search for the answers.
Typical Web application development on an app server requires a directory structure like in Figure 3. Here, Approot is the root directory for our Web application. It contains the default page that will be served by the server to its clients.
In addition, it contains a special directory named WEB-INF. This is the directory where our Web application-specific files will be stored.
WEB-INF contains the following files and folders:
- The deployment descriptor file named web.xml
- A classes folder that contains the model and controller classes of our MVC Web application
- A lib folder that contains third-party jar files
- A jsp folder that is JRun specific; it will not be used in our sample application
For our prime number generator Web application, create a directory structure as shown below:
- The top-level directory Approot contains the default page index.html, the img directory (which contains the images used in index.html), result.jsp (the view of our MVC Web application), and the WEB-INF directory.
- WEB-INF contains classes and the web.xml file (a deployment descriptor file used by the server to map servlet names to their classes).
- There is no need for lib and jsp directories, as we are not using third-party jar files or JRun.
The default page for the prime number generating Web application, index.html, looks like what's shown below:
<html> <!-- Some sample html -->
<head>
<title>Prime number generator Web Service</title>
</head>
<body>
<h1 align="center">Prime number generator Web Service</h1>
<center>
<img src="images/index.bmp" width="130" height="150"/>
</center>
<form method="GET" action="PrimeSearcher.do">
<center>
<b>Enter the number till which you need prime numbers:</b>
<input type="text" name="number" size="5"/><br/>
</center>
<br/>
<center>
<input type="SUBMIT" value="GENERATE PRIMES"/>
</center>
</form>
</body>
</html>
This index.html is presented to clients who access the Web application. As you can see in the above snippet, the method is GET and the action servlet is PrimeSearcher.do. PrimeSearcher.do is just a name given to our controller servlet, and .do is just a convention. The GET method indicates that whenever a client clicks the GENERATE PRIMES button, the doGet method of the servlet is called.
The deployment descriptor file, web.xml, looks like what's given below:
<web-app>
<servlet>
<servlet-name>PrimeSearcher</servlet-name>
<servlet-class>Controller</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>PrimeSearcher</servlet-name>
<url-pattern>/PrimeSearcher.do</url-pattern>
</servlet-mapping>
</web-app>
The deployment descriptor file web.xml is specific to a Web application, and the app server uses it to map servlet names to their classes. Each Web application has its own web.xml file. For example, in our application, the URL pattern /PrimeSearcher.do is mapped to the servlet name PrimeSearcher, and PrimeSearcher is mapped to the Controller class.
The ‘view’ part of our Web application is a JSP page (result.jsp) that generates output for its client. It looks like the following:
<%@ page import="java.util.Iterator" %>
<html>
<body>
<h1>Prime Number generator View</h1>
<p>
<%
Iterator it = (Iterator)request.getAttribute("primes");
out.append("primes are:" + "<br>");
while(it.hasNext()) {
    out.println("" + it.next() + "<br>");
}
%>
</body>
</html>
This result.jsp is internally translated into a servlet by the container. The JSP gets the attribute named primes attached to the HttpServletRequest object by the Controller. This primes attribute is an iterator that contains the prime numbers generated by the model class.
The Controller code is shown below:
/*********Controller.java**************/
import java.io.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Controller extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        Long num = new Long(req.getParameter("number"));
        Iterator it = model.generatePrime(num.longValue()).iterator();
        req.setAttribute("primes", it);
        RequestDispatcher view = req.getRequestDispatcher("result.jsp");
        view.forward(req, res);
    }

    private Model model = new Model();
}
It is a simple servlet class that extends HttpServlet. It takes the help of the Model class to generate the prime numbers. This Model class contains the actual logic to generate primes. The code looks like this:
/***************Model.java*************/
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Model {
    List generatePrime(long l) {
        long number = 2;
        List primes = new ArrayList();
        primes.add(number);
        while (number < l) {
            number++;
            if (checkPrime(number)) {
                primes.add(number);
            }
        }
        return primes;
    }

    boolean checkPrime(long l) {
        long sqrt = (long) Math.sqrt(l);
        for (long i = 2; i <= sqrt; i++) {
            if (l % i == 0) {
                return false;
            }
        }
        return true;
    }
}
This Model class contains the actual logic to generate primes, and returns a list of generated prime numbers to the Controller.
To understand the entire functionality, assume that a client asks our Web application to generate primes. The request is transferred to the Controller class, which gets the number entered by the client from the HttpServletRequest object. This number is passed to the Model class, which actually generates the prime numbers and holds the real logic of our application. The Model class returns a list that contains the prime numbers. The Controller class gets the iterator of this list, attaches the iterator as an attribute to the HttpServletRequest object, and forwards the remaining work to the ‘view’ of our application, which is a JSP file. The ‘view’ gets this iterator object and generates a page containing the generated prime numbers.
Deploying the Web application on GlassFish
First of all, compile the Controller and Model classes with javaee.jar on the classpath. The GlassFish lib folder contains this javaee.jar, as shown in Figure 4.
Then go to the Web application branch. Check the radio button for “Local packaged file or directory…” and click the ‘browse’ button (Figure 5).
It will show a ‘directory chooser’. Choose the prime generator (folder) that contains our Web application. Then click on the OK button. That’s it! You have successfully deployed the application and will see a page like what’s shown in Figure 6.
Running the Web application
Finally, it’s time to run the application. Simply click the launch link (see Figure 6).
You will see the default page of our application as shown in Figure 7.
Enter the number up to which you need prime numbers to be generated. Click the “Generate Primes” button and you will get the primes as shown in Figure 8.
That’s it for now and happy development with GlassFish and Java EE! | http://www.opensourceforu.com/2009/02/glassfish-part-2-getting-started-with-the-app-server/ | CC-MAIN-2015-18 | refinedweb | 2,023 | 51.14 |
system()
Execute a system command
Synopsis:
#include <stdlib.h>

int system( const char *command );
This means that any command that can be entered at the OS command line can be executed, including other programs, under QNX Neutrino.
For example, assume that status is the value returned by system(). If WEXITSTATUS( status ) returns 255, either the specified command returned a termination status of 255, or the shell didn't exit (i.e. it died from a signal or couldn't be started at all) and the return value was 255 due to implementation details. For example, under QNX Neutrino and most Unix systems, the value is 255 if status is -1, which indicates that the shell couldn't be executed. WEXITSTATUS() is defined in <sys/wait.h>.
For information about macros that extract information from the value returned by system(), see Status macros in the description of wait() .
When an error has occurred, errno contains a value that indicates the type of error that has been detected.
Examples:
#include <stdlib.h>
#include <stdio.h>
#include <sys/wait.h>

int main( void )
{
    int rc;

    rc = system( "ls" );
    if( rc == -1 ) {
        printf( "shell could not be run\n" );
    } else {
        printf( "result of running command is %d\n",
                WEXITSTATUS( rc ) );
    }
    return EXIT_SUCCESS;
}
Complete source code
If you want to jump straight to the complete source code:
End-to-end setup guide
Part 1: Azure AD configuration
Scripted solutions are linked when available. Some steps are manual due to a lack of Azure tooling support for the operation, or because the information is highly specific to your own environment.
1. Create security groups in Azure AD that will be used for your sample application.
I would recommend creating one group per role. For example if I want to have application roles defined for standard users and admin users, I would make two new AAD security groups called MyAppUsersGroup and MyAppAdministratorsGroup.
Script solution: new-rbac-groups.ps1.
2. Assign at least one user to each application role security group created in step 1.
Manual step / no script.
3. Create and configure the Azure AD app registration.
- Create a new Azure AD app registration with the following options:
- Supported account types: Accounts in this organizational directory only (Single tenant).
- Redirect URI:
- Type: Single-page application (SPA)
- Address: local debug address, usually
- Define custom app roles in your application manifest (instructions). These roles should reflect the same roles you created groups for in step 1.
- Under the Expose an API section of the app registration:
- Set the Application ID URI (using default format: api://<appid>)
- Define one application scope (name doesn’t matter, but make it so both admins and users can consent).
Script solution: new-app-registration.ps1.
4. Create and configure the service principal (enterprise application).
- From the app registration’s Overview page, click on the link below menu item Managed application in local directory.
- Script solution: new-app-serviceprincipal.ps1.
- From the enterprise application’s Properties page, set the User assignment required property to Yes. This ensures that only pre-authorized users can sign in.
- Manual step / no script.
- From the enterprise application’s Users and groups page, assign the AAD groups from step 1 to the corresponding application roles defined in step 3.
- Manual step / no script.
Part 2: ASP.NET Web API setup
1. Create the dotnet core project.
Create a new ASP.NET Core 3.x Web API project. If you are creating the project from the Visual Studio template, you don’t need to specify any authentication mechanisms. Just select the Web API template type. You will need to add a nuget package reference to Microsoft.Identity.Web.
2. Startup.cs configuration.
Under the ConfigureServices() method of Startup.cs, you will need to configure both Authentication and Authorization services:
// Add authentication support (bearer token validation).
// Configuration values are pulled from the "AzureAd" section of app settings config.
// This uses the Microsoft identity platform (AAD v2.0).
services.AddMicrosoftIdentityWebApiAuthentication(Configuration, "AzureAd");

// Add authorization support.
services.AddAuthorization();
Under the Configure() method of Startup.cs, you will need to add those services to the request pipeline:
app.UseAuthentication();
app.UseAuthorization();
3. Controller configurations.
Add an Authorize attribute for each controller or route that you want to enable authorization for. In the roles string, specify the app manifest defined role that should have access to the route. Example:
[Route("api/[controller]")]
[ApiController]
public class TestController : ControllerBase
{
    [HttpGet()]
    [Route("standard")]
    [Authorize(Roles = "MyAppUsersRole")]
    public async Task<ActionResult<string>> StandardRoleGet()
    {
        // implementation code
    }
}
4. App settings configuration
Finally we want to add an AzureAd section to the appsettings.json (and any environment specific configs) that provides tenant authority information and app registration details. Example:
"AzureAd": { "Instance": "", "ClientId": "-- use client ID of app registration --", "TenantId": "-- use Azure tenant (directory) ID --", "Audience": "api://[-- use client ID of app registration --]" }
Part 3: React client setup
1. Create the react project and add dependencies.
I like to bootstrap things with create-react-app. Here is a command to generate a React-Typescript project:
npx create-react-app web-client --template typescript
At a minimum you will need the MSAL client. An http client like Axios is also helpful since we are making web calls.
npm install @azure/msal-browser
npm install axios
2. Configure MSAL objects and call MSAL APIs.
The following is a high-level overview. For full context, view the sample application source that provides wrapper classes around these core ideas.
- Create a wrapper class that holds your MSAL PublicClientApplicationObject and the AccountInfo of the currently logged in user.
- The public client app object constructor will require a config object where you can specify your Azure AD client app ID, the tenant URI, and the redirect URL (usually app home/index).
- Have a UI button that calls <msal-app>.loginRedirect().
- The user is immediately redirected to the Microsoft Identity Platform so they can sign in.
- In the login request payload, specify the exact scope created in your app registration.
- Example: “api://<app-client-id-guid>/Scope.Name”
- In your main index.(js|ts|tsx) file, ensure that you call <msal-app>.handleRedirectPromise().
- This is invoked on all page loads to handle either (1) a redirect back from the identity platform, or (2) pull cached user details.
3. Call a protected web API resource.
- Before each call to a protected web API resource, call <msal-app>.acquireTokenSilent() to fetch a token.
- If the current access token is still valid it will be used, otherwise MSAL will attempt to fetch a new one silently.
- The scope specified in the payload must match the scope created in your app registration.
- Pass in the access token from MSAL into the authorization header for API calls to the protected web API resource.
Q&A
1. Why does my call to the /Authorize endpoint fail with an error message saying that the client_assertion or client_secret is missing?
A: Authorization code flow only works if your app registration’s redirect URI is of type “SPA”. If it is incorrectly set to type “Web” (which is the default), then the /Authorize call will fail.
2. Why can’t I assign users or groups to my app registration roles?
A: Make sure that your app registration wasn’t registered as a public client, because public clients do not support RBAC. Also note that assigning Groups to App Roles is only available with Azure AD Premium. If you have standard Azure AD then you will need to assign users directly to the app roles.
3. I asked for multiple scopes in a token request but only received some of the scopes, why is that?
A: Each call to MSAL’s loginRedirect or acquireToken operations must only specify scopes for a single resource. If you specify scopes for multiple resources (for example one scope for Graph and one for your ASP.NET web API), only the scopes from the first resource are returned. More information here.
4. Why did the example instructions create one App registration scope if we had multiple app roles defined?
Scopes are linked with the app registration itself and not with specific user roles. All authorized users (regardless of role) are given the application scopes defined in the app registration. As a result, our API is concerned with making sure that users have the correct client/audience and that they are authorized via roles.
5. Why is there only one app registration instead of two (split for client app and web api app)?
This flow should work fine in either a single-app or multi-app registration setup. I choose single app here for simplicity because:
- I only have one web client now and I don’t plan on adding multiple clients for this same API later.
- The API itself isn’t acting as a client to another resource.
6. How can I examine the contents of a token returned from the identity platform? An online JWT decoder is a great tool for decoding the token to examine claims for debugging purposes.
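If you prefer to inspect a token in code, a hypothetical helper like the following decodes the payload segment locally (debugging only — it does not validate the token's signature):

```typescript
// Hypothetical debugging helper: decode a JWT's payload segment locally.
// It does NOT verify the signature, so never use it for authorization decisions.
function decodeJwtPayload(token: string): any {
  const payload = token.split(".")[1];
  // JWTs use base64url encoding; convert to standard base64 before decoding.
  const normalized = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(normalized, "base64").toString("utf8"));
}
```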
Screenshots
Very nice article. How do we handle CORS issue when the call originates from (react) to asp.net core webapi running under IIS 10. I seem to have the problem even after enabling CORS in IIS and allowing specific sites
Generally speaking I would verify the following:
1. You have added a valid CORS config to the services.AddCors() method in Startup.ConfigureServices()
2. You are actually using the CORS config by calling app.UseCors() in Startup.Configure().
3. The order in which you call the Use() methods in Startup.Configure() is correct. Adding things out of order may cause them not to work.
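As a sketch of that wiring in Startup.cs (the policy name and origin below are placeholder values, not from the original post):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // 1. Register a named CORS policy for the web client's origin.
    services.AddCors(options =>
        options.AddPolicy("AllowWebClient", policy =>
            policy.WithOrigins("http://localhost:3000")
                  .AllowAnyHeader()
                  .AllowAnyMethod()));

    // ... existing authentication/authorization registrations ...
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    // 2./3. Use the policy, keeping the middleware order correct:
    // CORS after routing, before authentication/authorization.
    app.UseCors("AllowWebClient");

    app.UseAuthentication();
    app.UseAuthorization();

    // ... endpoint mapping ...
}
```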
If none of those things are the issue, I would recommend opening up a new question on stackoverflow where you can post your code samples.
I have setup AD as per the steps above. However, at the moment only the client gets authenticated and I am getting error (code 500) when trying to call the “AdminRoleGet” end point. I have enabled the “IdentityModelEventSource.ShowPII” for more info and I am getting the following:
“System.InvalidOperationException: IDX20803: Unable to obtain configuration from: ‘– use Azure tenant….”
“HttpResponseMessage.Content: ‘{“error”:”invalid_tenant”,”error_description”:”AADSTS90002: Tenant ‘– use azure tenant (directory) id –‘ not found. This may happen if there are no active subscriptions for the tenant…”
Below is my appSettings.json:
“AzureAd”: {
“Instance”: “”,
“ClientId”: “b29e4a17-26d8-48d2-a904-80gt547bcab4”,
“TenantId”: “ed1f9778-fef2-4985-9c5f-d4detyyy349a”,
“Audience”: “api://b29e4a17-26d8-48d2-a904-80gt547bcab4”
}
and below is the values I set in the React evironment:
REACT_APP_MSAL_CLIENT_SCOPE = “”
REACT_APP_MSAL_TENANT_AUTHORITY_URI = “”
REACT_APP_MSAL_CLIENT_ID = “b29e4a17-26d8-48d2-a904-80gt547bcab4”
BTW, I had to change the value for REACT_APP_MSAL_CLIENT_SCOPE from “api://{clientId}” to the one above as it is what I can read in my Azure scope. Have already changed the “Audience” from “api://…” to my scope, but still same error!
Any help is much appreciated
The failure you are seeing is because it's trying to query tenant details for the unsubstituted placeholder values. The tenant configuration that you applied in appsettings.json would also need to be applied to appsettings.development.json. Visual Studio launching the Web API in debug mode uses the development config file, not the regular config file (for the deployed service).
thanks for your input. all is working fine. Just a quick question on the client side configuration. I can see a comment in the env.development as “IMPORTANT: do not store secret values in this file.”. Where should I exactly store the clientId, tenantId and other bits for the client to be secure?
Regarding the client config and secrets: your Azure AD tenant ID and client ID are not actually considered secrets. These are public details that an unauthenticated client requires before they can even log in– otherwise they wouldn’t know which AAD tenant/client to authenticate against. You can also see these public details when you redirect to the Microsoft identity platform. The tenant ID and the client ID are directly in the URL. These values are certainly safe to check into source control and store in the .env files.
A better example of an actual secret that you should NOT store in the .env files would be something like an API key, or an account password.
Hi Keith, thanks for your helpful post.
I’m getting the following error:
ServerError: invalid_resource: AADSTS500011: The resource principal named api:/ was not found in the tenant named 3cxx3d01-0000-0000-0000-d97d7xxx8af8. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant.
The tenant ID is correct. Any idea what could be the issue?
Hi there– I haven’t seen this particular error before, but based on the message it seems like the app registration portion of the process is misconfigured (was it accidentally configured as multi-tenant?), or the web-client’s configuration is incorrect. I would triple check those particular setup steps, and if you are still having trouble I would reach out to the MSAL Q&A forum that Microsoft has here:.
Thank you very much. It really helped. By the way, can one do the following?

select * from [a-query] where .......

So, select from a query instead of a table... Thanks anyway you guys, for answering the previous question.

Arjen

> DD wrote:
>> Hello,
>> But I cannot even connect to the access database (see below).
>> Could anyone explain it to me as simple as possible please. I'm using
>> Windows XP, ActivePython 2.3.2 build 230 and Microsoft access(XP?)
>
> OK, as several others have said, one solution (and the one I know)
> is via ODBC. I use Win2K, not XP; there will be slight differences.
>
> First, we create an ODBC (Open Data Base Connection) DSN (Data Source Name).
> I use (forgive the notation; I don't know a generally accepted one):
> <Start>
> <Settings>
> <Control Panel>
> <Administrative Tools>
> <Data Sources (ODBC)>
> Gives a tabbed window labeled "ODBC Data Source Administrator".
> Choose tab "User DSN" (or "System DSN") and press the "Add..." button.
> It asks you to:
> "Select a driver for which you want to set up a data source."
>
> If you cannot find the menus I discuss above, explore using:
> <Start>
> <Help>
> Go to tab "Search" and type "ODBC" without the quotes. Check out
> the various subjects that follow.
> Choose some version of "Microsoft Access Driver (*.mdb)" -- my
> machine offers the choice in a couple of human languages; I
> inevitably go for English (my native language), but your
> performance may vary.
> Having chosen the correct driver, I click "Finish".
> This takes me to a dialog "ODBC Microsoft Access Setup"
> that wants a Data Source Name (name by which I will later refer
> to the DB) and a Description. I always choose a simple DSN
> such as "Exploratory" and skip the Description, but I am on a home
> machine. Remember the DSN namespace is "flat" -- no directories;
> all the names you use must be distinct. So for my example we use
> "Exploratory" for the DSN and "None of your business" for the
> description.
>
> In the "Database" box, I click "Select...", which
> will make a file chooser appear. I then navigate to my access
> database's file (c:\imports\dbxx.mdb in this case) and click OK.
> Under the options section, I'm clicking "Read Only", but I'm paranoid.
> I now click OK here and again at the "ODBC Data Source Administrator"
> window, and this dialog session is at an end.
>
> From your attempts, I see you have loaded the win32 extensions, so
> the following should work for you as well (changing names as
> appropriate -- I have a table named category):
>
> import odbc
> conn = odbc.odbc("Exploratory")
> cursor = conn.cursor()
> cursor.execute("SELECT COUNT(*) FROM category")
> print "lines in category = %s" % cursor.fetchone()
> cursor.close()
> conn.close()
>
> If you get this far you are in business. You can use idle (or your
> other favorite python interactive environment) to explore the results
> you get. Know the difference between fetchone, fetchall and fetchmany,
> understand setautocommit, and have fun.
>
> I personally used mxODBC
> to do a lot of Database work at one job; Lemburg dots 'i's and
> crosses 't's to give you a great exploratory database environment.
> You get column names for all rows returned and so on. The odbc
> package that came with win32 does the bare bones. The difference
> is well worth the price in time you save; in deployed systems
> (especially those given away) you may have to give up the features
> Marc-Andre provides.
>
> -Scott David Daniels
> Scott.Daniels at Acm.Org
Type: Posts; User: 2kaud
Note though, that this isn't considered 'good practice'. Usually reinterpret_cast highlights some design issue and such code is often considered 'flakey' and 'fragile'. Use at own risk!
You can't use static_cast for conversions between unrelated pointer types. Use reinterpret_cast.
Have you considered using std::optional? There is also a nullopt value.
If not that, then perhaps return say a std::pair - one element value,...
Well FWAInfo.h must have an include which then includes FWAInfo.h somewhere or another already included file - causing the circular dependency. You'll need to find what in FWAInfo.h is causing the...
[Also asked and answered here ]
Yes I know - but I don't know of any. We have a tool which shows the #include hierarchy - but not what each include defines.
Yeah - use a pen and paper and work through the includes!
Seriously, you need to work through the includes/code, find each of the different defined types (classes/structs etc), note their name...
Well that is fairly easy to determine by finding out the include dependencies each .cpp/.h file uses. Once you have that, then you can ascertain if there are any circular dependencies and which .h...
I suggest you create a new solution/project in VS2022 and add the required .cpp/.h et al files, set this solution/project properties as required and then compile. You shouldn't then have any...
You could read it into a std::string as std::string can contain nulls. Something like:
std::ifstream ifs;
std::string buf(2048, 0);
ifs.open(m_FilePath, std::ifstream::in |...
buf is sized at 2048, but 20481 chars are tried to be read??
Well it seems to! eg WSA error code WSAEDESTADDRREQ (10039) is "Destination required"
if you use 10039 in my code above, it shows:
A required address was omitted from an operation on a socket....
This is what I used when I did Windows programming:
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <windows.h>
#include <string>
#include <iostream>
You might find this article of interest
For a constexpr function to return a constexpr value, any functions it calls must also be marked as constexpr, i.e. they can be evaluated at compile time. Unfortunately, sqrt() isn't one of them.
See...
The above is Pascal. Is there a c++ question somewhere in this?
If you want this translated to c++, it's not that different. var in a function parameter means 'pass by ref'. lo..hi for an array....
Redux
The redux_dev_tools library allows you to create a DevToolsStore during dev mode in place of a normal Redux Store.

This DevToolsStore will act exactly like a normal Store at first. However, it will also allow you to travel back and forth throughout the States of your app or recompute the State of your app by replaying all actions through your reducers. This works perfectly with Hot Reloading!

You can combine the DevToolsStore with your own UI to travel in time, or use one of the existing options for the platform you're working with:
import 'dart:html';

import 'package:redux/redux.dart';

// Create typed actions. You will dispatch these in order to
// update the state of your application.
enum Actions {
  increment,
  decrement,
}

// Create a Reducer. A reducer is a pure function that takes the
// current State (int) and the Action that was dispatched. It should
// combine the two into a new state without mutating the state passed
// in! After the state is updated, the store will emit the update to
// the `onChange` stream.
//
// Because reducers are pure functions, they should not perform any
// side-effects, such as making an HTTP request or logging messages
// to a console. For that, use Middleware.
int counterReducer(int state, action) {
  if (action == Actions.increment) {
    return state + 1;
  } else if (action == Actions.decrement) {
    return state - 1;
  }
  return state;
}

// A piece of middleware that will log all actions with a timestamp
// to your console!
//
// Note, this is just an example of how to write your own Middleware.
// See the redux_logging package on pub for a pre-built logging
// middleware.
loggingMiddleware(Store<int> store, action, NextDispatcher next) {
  print('${new DateTime.now()}: $action');
  next(action);
}

main() {
  // Create the store with our Reducer and Middleware
  final store = new Store<int>(
    counterReducer,
    initialState: 0,
    middleware: [loggingMiddleware],
  );

  // Render our State right away
  render(store.state);

  // Listen to store changes, and re-render when the state is updated
  store.onChange.listen(render);

  // Attach a click handler to a button. When clicked, the `INCREMENT` action
  // will be dispatched. It will then run through the reducer, updating the
  // state.
  //
  // After the state changes, the html will be re-rendered by our `onChange`
  // listener above.
  querySelector('#increment').onClick.listen((_) {
    store.dispatch(Actions.increment);
  });
}

render(int state) {
  querySelector('#value').innerHtml = '${state}';
}
Has anyone else tried using MSSOAP 2.0 against axis/jboss 3.0? I'm able to deploy my session beans and expose them as web services, but there seem to be some type conversion issues when using MSSOAP.
The problem seems to be that mssoap doesn't include type attributes in the parameter elements of the soap call. Looking at the log file generated by jboss, axis seems to attempt the type conversions (I see messages like 'Trying to convert java.lang.String to double') but then I get an exception from RPCProvider.processMessage().
I can get around it by either passing all parameters coming into jboss as strings and doing the type conversions myself, or modifying the wsdl to specify the parameter types as anyType. The problem with the second approach is that mssoap stupidly binds the namespace for the parameter type separately for each parameter, and soap is slow enough already. I'm concerned about performance with large complex types.
I've tried rebuilding axis from source so I could add some log messages to track down the problem, but when I drop my axis.jar & jaxrpc.jar into the server/config/lib directory, I can't access the services through soap anymore. Is the axis.jar provided with jboss modified?
Thanks,
Pete
I read in the forums somewhere that the axis.jar was a modified version. I don't believe that you can do what you are trying to do. I can't give any details but I know I read it on this site.
Ron
This week I had the pleasure to get to know more about the infamous
eval() function, as it was my first instinct to use it for arithmetic evaluation. We ended up using a third-party library instead, but I'd like to shed some light on the topic.
So
eval() is a global JavaScript function that evaluates a string as code. If the string represents an expression, the function will evaluate it as an expression:
eval('1+1') // returns 2
and if it is a statement or a sequence of statements, it will evaluate the last one:
eval('var x = 5; alert(x)') // triggers alert
Also, I need to mention that
eval() uses the scope in which it is called.
Risks
The main reason you should avoid using it is security. Evaluating JavaScript code from a string is hazardous. A string may contain malicious code that will be run on the user's machine, and the scope where
eval() was called will be exposed to possible attacks.
It is especially dangerous to evaluate user input, which can lead to infinite loops, deleted files, or stolen administrative cookies.
For example, evaluating this string in Node.js would delete all the files in the current directory:
const valueFromInput = `require("child_process").exec('rm -rf ./*')`; eval(`console.log("User input: ${valueFromInput}")`);
The second reason is performance. Calling
eval() will be slower than the alternatives, because it has to invoke the JavaScript interpreter, which converts the evaluated code to machine code. That means if you run the code more than once, the browser has to interpret the same code again, which is highly inefficient.
Moreover, changing the type of the variable through
eval() will force the browser to re-run the code. And using variables that are not in scope with
eval() will require the browser to do an expensive lookup to check whether the variable exists.
Alternatives
The simplest alternative is to use
window.Function(). It creates a global-scope function from a string. This way, you can write your own custom parser to evaluate the code from the string, and it is less likely that possible attacks will do harm compared with
eval().
function parse(str) { return Function(`'use strict'; return (${str})`)() } parse('4.12345 * 3.2344 - 9') // 4.336886679999999
In the example above, we are creating a function from the string using strict mode, which makes it easier to write "secure" JavaScript; for example, you can't use undeclared variables.
Other alternatives that do not use
eval():
- expression-eval - JavaScript expression parsing and evaluation.
- math.js - Extensive math library for JavaScript and Node.js
Sum Up
eval() function is rarely used in modern JavaScript because of security and performance concerns. Misusing it can lead to malicious code running on the user's machine, and to data loss. There are third-party alternatives that safely evaluate a string as JavaScript, and for simple cases you can use the global
Function() to write your own custom evaluation function.
Discussion (4)
Hi Linas,
While using Function, I am getting issue as "args is not defined".
As my string is in the object and the value is "hello, my name is ${arg[0]}."
I am also passing the arg to this runtime object. Don't know what is the issue and how argument will be fetched. Could you help me with this ?
With eval() it works fine
I'm running into an issue where I keep getting,
Unexpected token 'const'
I think it's because you are using
constin the string.
In the expression, you are returning an expression. You cannot return a
const. Just pass in a simple expression like
2+5or something like a ternary operator
Thanks man! Save me a ton, I'm building a calculator app with React and keep getting the "eval can be harmful" "no-eval" warning. Thanks!
When you’ve got more than one person working on a complex bit of software, you generally need a specification (spec) for new features. The bigger the team, the more you need a spec. The more complex a feature, the more you need a spec.
According to stereotypes, big firms usually lean too hard on specs, to the point where they might spend more time writing the specs than coding the feature:
“The button will be ten pixels from the left margin and will conform to the usability guidelines sheet 201-a. It will be labeled “Join Game” and will – after a confirmation popup as outlined in the interface framework – begin polling the designated server in request for an open slot. If no slot is found, then the fallback behavior […]”
Meanwhile, little indie houses have a slightly less formal approach:
Bruce: Can you add a button that will let players join the game?
Barbara: Sure.
Stuff gets done either way, but sometimes indies are a little slapdash and sometimes big firms are a little too bureaucratic. On Good Robot, our spec is usually a sentence or two in the shared Google doc that we use as a universal to-do list.
But this week I ran into something that I realized was too complicated for that. It was one of those features that sounded obvious and simple in the meeting, but then became mysterious when I sat down to write the damn thing. (This is the point of a spec: To reveal the unknowns BEFORE coding begins. This is important in big firms, since once you’ve begun coding you’ve ALREADY been allotted a fixed time budget, which means this is a bad time to begin figuring out what you need to do.)
So I proposed a spec. And then I thought I’d post it here, just to show what the non-exciting bits of game development look like. This is pretty informal as these things go, but it should give you an idea of what needs to be hammered out before you start coding.
Also note that I’ve replaced the names of the stuff in our game with DOOM references to make it easier to follow.
Drop Tables Discussion
“Add a drop table” has been lurking on my to-do list for several weeks now. This is actually a complex enough topic that we need to define a spec. It turns out this is a bigger feature than we might have anticipated in meetings. Every time I’ve tried to start on it, I ended up with more questions on what we needed and what I was supposed to be doing.
Abstract:
We will be adding a new file called drops.ini or drops.xml, which will allow artists to design things to be dropped in the game. (Things that the player can pick up.) We want the system to be robust enough that it feels unpredictable and varied on repeated playthroughs, yet controllable enough that the game doesn’t feel like random chaos. Specifically, we want to be able to ensure things like, “Player will be offered at least 1 half-decent weapon before reaching point N.”
- Right now there are two main things that need to result in drops. We should go ahead and assume that more “causes” might be added later.
- When you destroy a robot.
- When you destroy a machine. (Boxes are “machines”, in the parlance of the game’s internal workings.)
A drop can deliver a number of things. Given our discussions so far, it seems like it should be possible for a single drop to create multiple things:
- Weapon pickup. (Of a specific type. Ex: “AlienBlaster”) This might be too naive a design. I’ll talk more about this at the end.
- Money. (Of a certain range of values. Ex: Between $10 and $20.)
- Robots. (Of a given type and number. Ex: 5 instances of “Imp”)
- MAYBE projectiles? This would let a boss release a swarm of missiles as it dies. I don’t know how these would be aimed, but it’s here for discussion.
- A drop must have some sort of probability associated with it.
- This probability ought to work on a per-item basis. So, maybe the “Cyberdemon” boss has 100% chance of dropping some imps, a 50% chance of dropping some PinkyDemons, and a 1% chance of dropping the BFG 9000.
- For simplicity of parsing, I THINK this probability should be expressed as a percentage (50%) and not a ratio (1:2). The first is simply much easier to write, but the latter is better if we plan to have very very small chances (say 1 in 10,000’s), since percentages like “0.01%” aren’t as intuitive. (And I worry about floating-point shenanigans.)
- Drops will be named.
- So in our file, the artist will name a drop something descriptive. “Level1Trash”, “Level2MiniBoss”, “BellaAndJacobOTP”, or “foo”. Whatever.
- The name will be referred to by the robots and machines in their respective files: “drop=Level2MiniBoss”
- If we go crazy, I suppose we could add something insane like recursive drops. So “Level2MiniBoss” has a 50% chance to create “Level1Trash”, which itself might contain references to other drops. This is powerful, but it’s also an exercise in juggling dynamite. It’s only slightly more effort to code, but you can create a lot of needless complexity and bugs with this. Protip: I’m not going to build in a system to detect infinite recursion, so if you screw up you might crash the game. Choose wisely.
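For what it's worth, if we ever did want that safety net, a load-time check for cycles is cheap. Here is a rough Python sketch of the standard depth-first check (illustrative only, not actual game code; it assumes the parsed drops are available as a map from drop name to the names it references):

```python
def find_cycle(drop_tables):
    # drop_tables maps a drop name to the list of names it references.
    # References that aren't drop tables (robots, weapons, "cash") are
    # ignored. Returns the name of a drop involved in a cycle, or None.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {name: WHITE for name in drop_tables}

    def visit(name):
        color[name] = GRAY
        for ref in drop_tables[name]:
            if ref not in color:
                continue  # a robot/weapon/cash reference, not a drop
            if color[ref] == GRAY:
                return ref  # back edge: we found a cycle
            if color[ref] == WHITE:
                found = visit(ref)
                if found is not None:
                    return found
        color[name] = BLACK
        return None

    for name in drop_tables:
        if color[name] == WHITE:
            found = visit(name)
            if found is not None:
                return found
    return None
```

Run once after parsing the drops file; if it returns a name, log an error and refuse to load.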
Proposal
My proposal, which needs to be discussed before I begin work on it:
I’m 99% sure that this needs to be in XML. The problem is hierarchical, which means using .ini files would be clumsy. Let’s just start with this assumption.
The definition starts with the name of the drop. It is then followed by one or more entries of things that might be dropped. Example:
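(The XML example that followed here appears to have been eaten by WordPress's angle-bracket filtering, the same problem a commenter mentions below. A hypothetical reconstruction based on the field descriptions that follow, with illustrative entry names, might be:)

```xml
<Level2MiniBoss>
  <Imp min="0" max="6" chance="50" />
  <BFG9000 chance="1" />
  <cash min="10" max="20" chance="100" />
</Level2MiniBoss>
```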
For each entry:
There is the name of the drop: Either the name of a robot, the name of a weapon, or the word “cash”.
There is a minimum and maximum number to drop. If you omit these, it will default to a min and max of 1. (This is good for weapons, since it doesn’t make any sense to drop more than one of the same weapon.)
Then there is the value “chance”, which will probably be a percent. Note that it will roll this chance and THEN roll again for min and max. So in the example above, it might roll the dice for Imps. The roll comes up positive, so it rolls again to see HOW MANY. Then it might roll zero, meaning no imps actually appear. Keep this in mind.
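In rough Python (illustrative pseudo-code, not the game's actual C++), that two-stage roll looks like this:

```python
import random

def roll_entry(chance_percent, count_min, count_max, rng=random):
    # Stage 1: does this entry trigger at all?
    if rng.uniform(0, 100) >= chance_percent:
        return 0
    # Stage 2: how many? A count_min of 0 can still yield nothing,
    # even though the chance roll succeeded.
    return rng.randint(count_min, count_max)
```

So an Imp entry with a 50% chance and min=0 can pass its roll and still spawn zero imps, which is exactly the caveat above.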
You can use this system to create bell curves. For example, if you don’t want cash drops to be completely random, then you could do this:
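(Another example presumably lost to the angle-bracket filter; based on the description, the two entries would read something like:)

```xml
<CashDrop>
  <cash min="1" max="6" chance="100" />
  <cash min="1" max="6" chance="100" />
</CashDrop>
```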
Both cash drops will always happen, and will always drop between 1 and 6 coins. This is the exact thing you get when rolling 2d6 in a tabletop setting: Most values will be in the mid range, and values of 2 or 12 will be rare.
Robots already have a value for how much money they will drop. I don’t see any reason to REMOVE that system, but we might simply transition to using this new system for that. Or maybe we’ll just limit this new system for specials and bosses.
One final note is that this doesn't allow us to drop weapons from lists of possibilities, which is what vending machines use. Maybe we also need to add a system of “weapon pools” of similar-powered weapons. So each entity can be a weapon, a weapon pool, money, or a robot. (And maybe projectiles.)
This needs a lot of discussion. It’s probably the most complex item on my list right now, so the sooner we hammer this out, the better.
Conclusion
So this is what I sent to the rest of the team (minus the animated Gifs) this morning. This afternoon we’ll have our meeting and we’ll see if what I’ve proposed fits with the needs of the project.
78 thoughts on “Good Robot #38: Spec’ing a Feature”
Finally something that I have more than enough experience to comment on. Your first example of corporate overlordship is a good one because it defines something else you don’t specifically mention: Spec Standards. At my company, we have Specs (we call them Use Cases) which follow their own standards to such a point where it’s almost ANOTHER programming language. I actually sometimes (as a developer) have a hard time reading the specs and need our BSA (person who writes the specs) to read it for me because there’s some obscure rarely-used logic to the spec notation.
This leads us to writing SUMMARIES in plain English of our SPECS…wee paperwork….
I write those ridiculous use cases for a living and I’m sorry that they are so annoying to read. You’re quite right that this stuff is basically code, but the language it’s written in is legalese. The other guy’s Contracts team is looking through the specs, trying to find vague, undocumented or half-baked features, so they can get you to build more for the same amount of money. Your testing team is trying to write test cases, ideally automated ones which can expect the same output 100% of the time, so they need everything in exacting detail.
Once we spent almost a week in meetings to determine what printing options would be supported. A one-line comment like “The Print button opens the system print options dialog, depending on the user’s current printer” is enough for a developer to understand what you mean, but when the customer submits a support ticket requesting that you add an image rotation feature because you said you supported printing in portrait or landscape, you will need those 3 pages of nonsense.
We get those all the time. Customer opens a Production Defect saying that the system isn't doing a thing that sounds completely reasonable. The first response is to pass it off to the BSA to comb the requirements.
Technically, the system isn’t supposed to do anything NOT specified in the Use Case, so if it’s not in the Use Case, then it’s a New Business Request (read: pay us more money to do it). If it IS in the Use Case, then it’s a defect (read: we don’t get paid to fix it).
And, of course, the BSAs get dinged for too many “missed requirement” NBRs and the Testers get dinged for too many “missed defects”, so then they fight back and forth over if it’s an NBR or a DEF…Mmmm…politics…
For point 4 and infinite recursion, detecting it shouldn’t be too hard. Since this is (presumably) data that’s loaded once at start-up, you can traverse the loot drop graph after loading all the data with a topological sort algorithm, ensuring that there are no cycles. Way simpler than detecting loops in the graph when generating loot.
Detecting at run-time "merely" requires passing around a set of all nodes you have visited so far and simply stopping the current traversal whenever you reach a node you've visited before.
It does require to be thought of when you start writing the code and is greatly simplified by already having a decent set implementation hanging around, but can usually be faked by abusing a hash table.
Even if there are loops, infinite recursion has zero probability if the total outgoing probabilities for each node are less than 1. So no loops would be more restrictive than absolutely necessary.
On the other hand, you probably do want to drop some things with probability 100%.
I want a phoenix with 100% chance to drop itself after defeat.
But only if you beat it with The Good Incinerator.
It can still have a nonzero probability for any finite number of recursions. Which is effectively the same thing because it runs out of memory in finite time.
In the spirit of the original frame driven processing, I’d put the drops on a stack and on each frame process 1 (or N) of them. An infinite drop recursion would prevent the stack from shrinking, but the game would continue to run. To guard against exponential stack growth, I’d check the stack size when adding drops, logging an error at some small size and skip adding the drop at some much larger size.
Ooh, fun! I’ve written systems like this before (and sometimes gone overboard and tried to make everything modular and interchangeable, complete with object inheritance and custom scripting code to execute).
I’d say let a drop include the option of dropping an item chosen from a set. That lets you organize objects on hierarchies of rarity, giving (say) a miniboss a 1:100 chance of dropping a legendary weapon, which is then chosen from a set of legendaries based on the rarity and power of that weapon. Plus you can simplify the system and only define weapons in a single place (the containing set).
I’d do this with a drop-wide “all or one” setting – if its a one-item set, add up the probabilities, take a random polling, and see which item you polled. If it’s a multi-item set, work as before.
Also let drops drop other drops. Makes things simpler. (Though still limitedly recursive)
You could re-use the same logic, to have monsters dropped from pools. Anything really. Maybe the level-1 boss drops either X cash, or something from the fire-pool, where the fire-pool is either a fire-based weapon, a handful of fire-imps, or a fire-shield.
Obligatory’);DROP TABLE Robots;
Boring?! This is the best part of software development! Solving the puzzles!
I get bored about 40% of the way through coding a similar but different feature for the 173rd time. But figuring out a tricky problem or speccing a new feature is always a nice fun little puzzle to solve.
One final note is that this doesn't allow us to drop weapons from lists of possibilities, which is what vending machines use. Maybe we also need to add a system of “weapon pools” of similar-powered weapons.
Why not add logical hierarchies, i.e. <OR Chance=70 < AlienBlaster Chance=70 /> < BetterAlienBlaster Chance=30 /> />?
This would also allow you to define certain sets of things that should drop together via an AND.
What you’ve got there is invalid XML, but here’s a valid version of the same idea:
<WeaponPool RandomTotal="100">
<AlienBlaster Min="1" Max="69" />
<SuperBlaster Min="70" Max="100" />
</WeaponPool>
Roll a random number from 1 to RandomTotal, and select the weapon dropped depending on which Min/Max it falls within.
Allow a result for “no drop” and you’ve got an easy way to simply express a 1 in 10,000 chance:
total = 10000
awesome item = min=1, max = 1
no drop, min = 2, max = 10000
You could go further and just omit the ‘no drop’ line altogether.
Some thoughts….
Rather than specify chance as a number for each drop, you might want to use an enum (e.g. “very rare”, “rare”, “uncommon”, “average”, “common”, “likely”, “very likely”, “certain”). The advantage of this is that it’s a LOT easier to tune a playtesting issue like “it seems like the drops are way too generous” if you didn’t have to twiddle 100 different numeric values one at a time. Might also consider this with the cash values.
Separately, every drop could in theory happen on every event. Is that the way you want it to work? For example, a boss can drop a weapon AND money. A possible extension would be some kind of “wheel” system (at a level above the individual drops), where exactly one of N possible drops will be triggered.
Example (angle brackets don’t seem to work easily in wordpress, nor does indenting…)
[PhatLewt]
[Imp Min="0" Max="6" Chance="50"/]
[Cash Min="0" Max="20" Chance="100"/]
[wheel]
[AlienBlaster weight="1" chance="100"/]
[BFG9000 weight="1" chance="100"/]
[Cash weight="8" Min="0" Max="500" Chance="100"/]
[/wheel]
[/PhatLewt]
This would yield a 50% chance of imps, a 100% chance of a small amount of cash, and ONE of the (AlienBlaster, BFG9000, Big pile o' cash) set. The “weight” param would only matter in a wheel, and determine how likely each option is (with these weights, 10% chance of blaster, 10% chance of BFG, 80% chance big cash pile).
This might be useful in a situation where “I want to drop exactly 1 weapon, but I want it random which one gets dropped”
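For reference, the "wheel" described here is a standard weighted random pick. A minimal Python sketch (names made up, not from the post):

```python
import random

def spin_wheel(entries, rng=random):
    # entries: list of (name, weight) pairs; returns exactly one name,
    # chosen with probability proportional to its weight.
    total = sum(weight for _, weight in entries)
    roll = rng.uniform(0, total)
    running = 0.0
    for name, weight in entries:
        running += weight
        if roll < running:
            return name
    return entries[-1][0]  # guard against float rounding at the top end
```

With weights (1, 1, 8) this lands on the big cash pile about 80% of the time, matching the example above.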
I really think nested drop tables are the way to go, and I would generalize between the drop table and a wheel. Need not even be recursive, just allow a top-level drop table to contain drop tables as regular entries, but no deeper. Also, I would make it so:
<DropTable Name="Lvl1Boss" Max=2 Roll=10 >
<Stack Archetype="Bomb_01" Min=1 Max=3 Weight=1 />
<DropTable Weight=5 Roll=Auto >
<Stack Archetype="Laser_lvl10" />
<Stack Archetype="Minigun_lvl10" />
<Stack Archetype="Shield_lvl10" Weight=2 />
<Stack Archetype="Nothing" />
</DropTable>
<Stack Archetype="Money" Min=100 Max=500 Weight=10 />
</DropTable>
This is how it would work at runtime:
An enemy assigned this drop table dies. The drop table starts at the top element, rolls a number between 1 and the value of the "roll" attribute, and if the result is lower than or equal to the weight of the item, the drop table yields this item (or multiples, according to the defined min and max of the item, which both default to 1).
The drop table then proceeds to the next item, rolls again, compares the weight to the roll. It stops once the maximum number of drops has been met, or the end of the table is reached. The attribute “max” for drop tables defaults to 1.
This gives us a few nice opportunities to play with. Drop tables are evaluated top to bottom. In my example, if the first roll yields a 1, 1-3 bombs drop. This consumes one of the “Max=2” possible items this table can produce. The next item is a nested drop table, which is then evaluated by the same rules that all drop tables follows, minus the option to contain another nested table.
If both some bombs have been dropped and the nested drop table has produced something, then the outer drop table has reached its maximum and no money is dropped. A nested drop table may itself yield empty, depending on the roll and weight values of the table and its items; if weight of the inner table itself is not met, or none of its contained items is met, then no “Max=2” drop from the outer table is consumed. If the special “Nothing” element is met, however, this does consume a drop.
If only a bomb or a weapon, or neither, has dropped, the player is guaranteed some money as a consolation prize. The jackpot is when no bomb drops, an item drops, and money drops.
The inner drop table has the special attribute value "Roll=Auto". This means it will automatically determine its Roll-value by the sum of all weights contained within its own entries, and it only rolls a single time, not once per entry. Since its entries' weights sum to 5 (3 items with the weight defaulted to 1, 1 with an explicit weight of 2), the game rolls 1-5: 1 yields a laser, 2 yields a minigun, 3 and 4 yield a shield; a 5 yields nothing. The Archetype="Nothing" entry is no accident, it is a special value with a weight and everything: when its value is rolled it yields nothing, but unlike a roll that does not meet any defined item, meeting an Archetype="Nothing" item does consume a drop. Key to all of this is the top-to-bottom evaluation.
You can of course also basic things like having a special rare drop and a common normal drop, like this 1% chance to drop a level 10 laser:
<DropTable Name="RareMeansRare" Roll=Auto>
<Stack Archetype="Laser_lvl10" />
<Stack Archetype="Laser_lvl5" Weight=99 />
</DropTable>
Insert this into another drop table and you have an entry that says “in this place, drop a level 5 laser, only very rarely a level 10 one”.
Shamus’ idea of using two similar entries to emulate the effects of a 2d6 are possible in this system, too:
<DropTable Name="2d6_cash" Max=2 Roll=1>
<Stack Archetype="Money" Max=6 />
<Stack Archetype="Money" Max=6 />
</DropTable>
It rolls 1-1, which always meets the default weight=1, and will output at most 2 items, each of which are defined as 1-6 units of cash. :)
Also, some of the options, especial the “Nothing” element, may seem redundant at first glance, as you could produce similar effects by different means. The following drop tables would evaluate to the same results, but they can express different designer intention, or just modes of thought (everyone of us has a different way to wrap their head around probabilities). Each of the following has a 1/3 chance to drop 1-100 units of cash.
<DropTable Name="A" Max=1 Roll=3>
<Stack Archetype="Money" Max=100 Weight=1/>
</DropTable>
<DropTable Name="B" Max=1 Roll=Auto>
<Stack Archetype="Money" Max=100 />
<Stack Archetype="Nothing" Weight=2 />
</DropTable>
<DropTable Name="C" Max=1 Roll=Auto>
<Stack Archetype="Nothing"/>
<Stack Archetype="Nothing"/>
<Stack Archetype="Money" Max=100 />
</DropTable>
Also, keep in mind, as I said above, yielding a “Nothing” result consumes a drop, but failing to meet a weight does not do so, therefore allowing to produce exceptions where a table mostly meant to drop a given number of items, may sometimes yield less. We can thus emulate a probability for a table’s own Max=N value:
<DropTable Name="1-3d6_cash" Max=3 Roll=100>
<Stack Archetype="Nothing" Weight=50 />
<Stack Archetype="Nothing" Weight=50 />
<Stack Archetype="Money" Max=6 Weight=100 />
<Stack Archetype="Money" Max=6 Weight=100 />
<Stack Archetype="Money" Max=6 Weight=100 />
</DropTable>
If the first “Nothing” weight is met, one of the Max=3 drops is consumed. If the second one is met, only a single drop remains, which is consumed by the first money drop already. So you can have 1d6, 2d6, or 3d6, and by adjusting the weight of the nothing element, you can directly control the probability for each number of dice, independently of the sizes of the dice.
Addendum: To further illustrate the flexibility of this system, here is the first example, but changed so that when bombs drop, the chance of 3 is very low only:
<DropTable Name="Lvl1Boss" Max=2 Roll=10 >
<DropTable Weight=1 Max=3 Roll=100>
<Stack Archetype="Bomb_01" Weight=100 />
<Stack Archetype="Bomb_01" Weight=50 />
<Stack Archetype="Bomb_01" Weight=1 />
</DropTable>
<DropTable Weight=5 Roll="Auto" >
<Stack Archetype="Laser_lvl10" />
<Stack Archetype="Minigun_lvl10" />
<Stack Archetype="Shield_lvl10" Weight=2 />
<Stack Archetype="Nothing" />
</DropTable>
<Stack Archetype="Money" Min=100 Max=500 Weight=10 />
</DropTable>
The inner drop table for the bombs always yields 1 bomb, with 50% a second bomb, with 1% another bomb (technically it can drop the third but not the second, but you get the idea). Like in the original example, it is the weight of the drop table for the bombs that determines whether bombs drop at all. And if 3 bombs drop, it still consumes only 1 drop of the outer table, as the inner table is only regarded as a binary "yields"/"no yield" situation (again, with the special "Nothing" element counting as a "yield"). If you want to have multiple bombs count as individual items, you just put them into the outer table as individual, separate items. :)
Re: floating point — just because it’s WRITTEN as a decimal percentage doesn’t mean you have to STORE it as floating point. Personally, I’d probably parse it into an int value equating to parts per thousand or per ten thousand, depending on how granular you want to get.
Yeah, but then I’d have to climb down into the XML reader and write my own “text float to int” parsing code, and that’s a lot of effort for a single special case.
As a rule of thumb, for probabilities in code I just work out the smallest probability that I _think_ I want (say, 1 in 1,000) and add a digit (1 in 10,000) because you will always later wish you could cut it in half, and then code in integer numbers of these. You never have to convert it to a float, because you just code your “die roller” to generate an int from 1 to DIE_SIZE and compare it to your integer value. Because yeah; I hate floating-point maths on computers. We’ve got a print billing system here at work that keeps trying to tell me that Bob from Accounting owes 1.100000007 dollars, which is just dumb.
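That rule of thumb takes about three lines to implement. A Python sketch (hypothetical helper, not from the post):

```python
import random

DIE_SIZE = 10000  # probabilities stored as integer parts-per-ten-thousand

def chance_hits(parts, rng=random):
    # 50% is 5000 parts; "1 in 10,000" is just a value of 1. No floats.
    return rng.randint(1, DIE_SIZE) <= parts
```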
If I had a dollar for every time it was a good idea to store monetary values as floats, I’d have 0.00000000003 dollars.
COBOL actually has a special data type for storing monetary amounts. Which is part of why people still use it for accounting systems.
You can totally use floats for monetary values, because rounding errors can be fixed.
On the other hand, overflow exceptions cannot be fixed. For example when someone is trying to calculate a bigger monetary value in YEN.
And this is why people keep making the same mistake.
Floats have a lot more ways of going wrong than you think.
For example, the value “0.10” cannot be precisely stored in a float – or a double, or indeed any size floating point you care to choose!
So paying in ten cents a few thousand times gives a different answer depending on whether you are working in floating point or fixed point arithmetic!
That’s before you start considering what happens when you add a small number to a big number…
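The ten-cents point is easy to demonstrate; a quick Python illustration (not from the thread):

```python
# Summing ten cents a thousand times: binary floating point drifts,
# while integer cents stay exact.
total = 0.0
for _ in range(1000):
    total += 0.10

cents = 0
for _ in range(1000):
    cents += 10  # fixed point: count whole cents as integers

print(total)  # very close to, but not exactly, 100.0
print(cents)  # 10000 cents, i.e. exactly $100.00
```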
Or you hit the limit where the remaining precision in your floats are larger than the smallest monetary unit you want to handle. If you’re handling cents, that happens at around 2^46 dollars.
If you instead decided to use the Maltese Lira (I think they use the Euro, now), you would hit the limit somewhere around 2^43 Maltese Lira (their smallest coin was the milli, thousand millis to the Lira).
Incidentally, since you mentioned source, here’s some:
#include <stdio.h>
int main(int argc, char *argv[])
{
    float a = 9000000000.;
    float b = 2.;
    printf("%f\n", a + b);
    return 0;
}
Fixing rounding errors implies detecting rounding errors, which is hard when you can’t store the number 1/10 precisely. Try using single precision floating point on a 32-bit computer to calculate 9 billion + 2. (hint: 8999999488.000000) The solution isn’t to bury your errors. The solution is to use larger, more precise types.
Complaining that single precision floating point on 32-bit systems is a straw man? You bet it is. But then, so is a 32-bit integer. I can do 64-bit math on a 486, and the largest 63-bit number (don’t forget the sign bit) is 18,446,744,073,709,551,615. That’s 18 quintillion, which is more than the GDP of Earth in ZWD (about 8 quintillion).
I can only talk about .NET here, but it is definitely doable using its DOUBLE and DECIMAL types.
But I doubt i am allowed to post the source code here, so the rest is left as an exercise for the reader.
You’re never going to be able to represent things to arbitrary precision with perfect accuracy. There are a finite number of possible representable values and an uncountably infinite number of points between zero and one.
I think you went a bit off topic there; I was talking about monetary values here, and there are definitely a finite number of cents between 0 and 1 dollar :)
Uh, firstly finance often does track fractions of a cent. It adds up. Secondly, if you’re using a generic floating-point data type instead of a specialized fixed-precision one, it will not be able to accurately represent all of your possible values. Additionally, when dealing with large numbers, precision errors will creep to the left of the decimal point. If you’d have overflow errors in dealing with Yen as an integer, a floating-point representation of dollars is going to have massive precision errors because one yen is about equal to one cent. The rate fluctuates, but that’s a good heuristic if you want a general impression of how much a price denominated in yen is (IIRC it’s about 80 yen to one dollar at the moment).
Your double type is going to be more precise than a float, but it’s still mapping an infinite space to a finite number of values, and the fact that you’re only concerned about a finite number of points in that infinite space does not mean that the data format will map the exact set of finite points you want. Additionally, a double takes up as much space in memory as a 64-bit integer.
DOUBLE, no. DECIMAL, yes. Don’t try to use floating point to represent money. That’s the short version of what I said above.
That’s what longs are for.
I’d abandon the notion 1.0 = 100% entirely. Just define an arbitrary value to roll, and which values yield which results. If the roll value is defined in the XML, designer can then pick a value that makes sense for their case, as opposed to cramming some values where they don’t fit. E.g. instead of having to assign 33%, 33% and 34% for 3 items meant to drop with equal probability in a system from 1-100, just let them define the roll as 1-3, and assign a weight of 1 to each item. :)
Why is this post two gifs with no text?
;D
I was just going to make the opposite remark. Both here and in previous Good Robot entries, Shamus has implied us being distracted by the moving gifs. While I appreciate them as looks at the game as a work in progress, they don’t distract for a second, and I hardly notice them unless I decide to look at the pretty pictures.
Shamus, you underestimate the amount of advertisement we have to ignore on a daily basis; I’m by now completely programmed to read a text and not notice the flashy screaming colorful “attracting” images or videos surrounding it.
Also, mouse scroll wheels. :)
(1) you seem to have lost track of “Specifically, we want to be able to ensure things like, “Player will be offered at least 1 half-decent weapon before reaching point N.”” Isn’t *that* the hard part? To some extent it’s going to imply inversion of control: rather than explicit spawns, you’ll be dropping “30% chance of a tier 1 weapon”, and then you’ll have a system to guarantee that at least 3 in 10 possible tier 1 weapon drops *actually* happens, as well as that at least 1 in 5 native tier 1 weapon drops is promoted to tier 2.
(2) it doesn’t at all follow that XML is necessary here, although I guess if your choices are that and .ini files then XML is the better of the two.
The easy approach is to have a drop set of ‘half decent weapons’ whose probability of dropping goes up over time and approaches 100% at point A.
I’d probably put in an incremental percentage value that gets multiplied by the number of previous drops that haven’t paid out. And zero out the previous drops number when you have a successful drop. I think this plays into having to classify the individual dropped items into categories and specify drops by categories rather than specific items.
Make the gambler’s fallacy come true! That’ll screw with ’em.
The other approach is to treat drops not as a percentage, but each source as a deck of cards – one way or the other, the ace will come up eventually….
I’d just make a certain enemy that shows up in one specific room that’s guaranteed to drop a random decent weapon. If you stick it in a room with a bunch of the base random-loot versions of the enemy, players won’t notice that that’s why they always get a good drop in the first 2 levels. It’s a brute-force solution, but it solves the problem.
This could be solved with logic (IF player has no good weapon THEN drop random good weapon). But then your loot table becomes its own programming language…
Technical specs can be annoying, but man, do they ever make your life easier when you get into something and realize it’s more complex than you initially thought. I was a fly-by-the-seat-of-my-pants sort of girl when I was in college and when I first started working, but I’ve been thoroughly converted. Seat-of-my-pants works fine when you’re making some minor change, but when you’re doing BIG stuff, complicated problems or dealing with different interacting systems or having multiple developers work on the same system at the same time… yeah, it falls apart pretty fast.
Plus, when you’re like me and you’re making stuff for internal customers in your company, it’s reeeeeal nice six months down the road to be able to pull out the document that THEY approved and go, “Well, the reason I didn’t implement Feature X is that you jolly well didn’t ASK for Feature X!” And because they’re internal customers, I’m allowed to say that. :)
(this week’s infuriating customer conversation: “But when I click the delete button, it DELETES THE DATA, leaving no record!” “Yes… yes that’s what you asked for… you said there was too much unnecessary data so you wanted to be able to clean it up…” “But it DELETES it!” Luckily they were appeased by adding dire warnings to the “are you sure you want to delete” dialog. I wanted to add a laughing skull on the side for extra warning-ness, but I managed to restrain myself.)
Ugh, that kind of bullshit is the worst. I recently saw someone had left a negative review of the Firefox NoScript addon that basically boiled down to “it stops scripts from working”. No shit. The fact that that will break some websites had apparently not occurred to them.
Fun fact: I read this spec here before I saw it in person among our other development files. Making games is weird sometimes.
In other news, this looks very usable and I like the implications it has for rarer item drops – …I say as I realize the one filling out the game’s worth of tables will be me, probably in just a few hours.
You know how I keep talking about planning things out in advance before coding them? This is that in action, and people are already suggesting changes to avoid technical issues or features you might want to add well before having a nice, entirely finished system you will need to knock holes in.
Your first weapon reference was “alien blaster” instead of “minigun” or “plasma rifle”.
For shame, Shamus. For shame! :p
You’ll definitely want weapon pools, IMO.
It’s such a nice thing to be able to say “Okay, this enemy type has a chance of dropping a common laser weapon, and THIS type has a chance of dropping an uncommon projectile weapon” without having to go through and manually set the groups every time you want to do this. If you don’t implement it now, you’ll almost certainly do it later when you get tired of specifiying drops like this:
[PhatLoot]
[exclusive]
commonBlastLaser=0.05
commonBeamLaser=0.05
commonBurstLaser=0.05
commonSpreadLaser=0.05
[/exclusive]
[/PhatLoot]
On literally every object that can drop a weapon.
That doesn’t even work. Four independent 5% chances is not the same thing as a 20% chance of one.
Who says it’s supposed to be a 20% chance? I’m not sure what the net chance there actually is, which is part of the problem, and the exclusive block there presumably means there may be at most one drop so they aren’t independent. Which may mean they don’t have equal probability, depending on implementation (they could be rolled independently and have one of the successes selected randomly with probability based on relative chance, or they could be rolled in order until one succeeds, for instance).
The odds there are actually about 18.5%. Some “intuitive” probability errors are actually not that far off.
I can’t tell if you’re abusing XML for simplicity of explanation, or you are genuinely using it that way? Names should NOT be element types; you should have an attribute name="fat loot" instead.
Sorry, but this irritates my inner programmer like writing “Pi is exactly 3” in front of a mathematician.
That was my immediate response too. To make an XML vocabulary scalable it makes sense to use the element names (structure) for class information, and attributes for instance values.
However, the virtue of XML is its simplicity, so if this usage doesn’t introduce any ambiguity and will never become more complex, then why not? What’s abuse in one context can be simplicity in another.
The .14 is silent, everybody knows that.
as is the .00159….
I assume the correct format would be something like:
{item name="PhatLewt" /}
instead?
Which means you’re going to have all of these “item” entries where the word “item” is completely superfluous. Like, items are the only thing that can go there. This contributes to the ongoing verbosity of XML. It doesn’t do anything to help artists maintain the files.
I’d expect “weapondrop name=’phat loot'”. That would be useful for stuff like a special ability that boosts weapon drop rates. Probably the most maintainable way of adding that.
Of course, that isn’t in the spec.
{weapon name="alienblaster"/}
{robot name=”imp”/}
{cash/}
Or even {currency name=”cash”/} in case you want to add other currencies in the future.
This is part of the value of XSDs – they can be annoying to write and excessively picky, but the version you include in the post would require a new schema every time you added a weapon or monster to the game, while using a more general form wouldn’t. Also it lets all of the enemy and weapon tags share the same allowed sub attributes rather than needing to duplicate it every time. And whatever library you’re using for reading XML is going to be really good at getting a list of nodes and make it a bit harder to iterate over all nodes.
Another way to think about it – do you have a class (or struct or whatever) for every robot and weapon type? I imagine you have a robot struct with a name property. Why should your xml be different?
Also incidentally it’d make it possible to have weapons and enemies with the same name, which someone might do accidentally (unless you prevent that somehow). For that matter, how do you know if something refers to a weapon, a robot, or cash? What if someone names a robot cash because it reminds them of the singer?
Another way to look at it: in C#, I prefer that the class which interprets the XML for the program is also able to generate it. So my version would look something like this:
{Loots}
{Loot name="ImportantLoot"}
{Robots}
{Robot}
{Name}"Louis"{/Name}
{Number}3{/Number}
{Probability}50{/Probability}
{GroupLoot}1{/GroupLoot}
{/Robot}
{Robot}
{Name}"John the mighty Archidemon"{/Name}
{Number}1{/Number}
{Probability}20{/Probability}
{GroupLoot}2{/GroupLoot}
{/Robot}
{Robot}
{Name}"Patrick the medium Archidemon"{/Name}
{Number}1{/Number}
{Probability}10{/Probability}
{GroupLoot}2{/GroupLoot}
{/Robot}
{/Robots}
{Weapons}
{Weapon}
{type}"Laser"{/type}
{power}
{min}2{/min}
{max}5{/max}
{/power}
{spread}3.5{/spread}
{Probability}30{/Probability}
{GroupLoot}3{/GroupLoot}
{/Weapon}
{/Weapons}
{Money}
{min}300{/min}
{max}300{/max}
{Probability}30{/Probability}
{GroupLoot}3{/GroupLoot}
{/Money}
{Loots}
{Loot name="MediumLoot" number="2"}
{Probability}30{/Probability}
{GroupLoot}3{/GroupLoot}
{/Loot}
{/Loots}
{/Loot}
{/Loots}
I added a new property, GroupLoot, to indicate whether loots are mutually exclusive. In this example, you can have Louis and John the mighty Archidemon gang up on you, but not John the mighty Archidemon and Patrick the medium Archidemon.
The Loot class is quite easy to parse:
an array of class weaponLoot (properties enum type, power power, double Spread, int Probability, int GroupLoot).
an array of robotsDropped
a class of MoneyDropped
an array of LootDropped (not the same class as Loot because you need the probability, number and group loot property)
I recommend making an interface/abstract class IDrop, which implements the Probability, Number, and GroupLoot properties.
Good god, that’s hard to parse without indentation. Seriously, I can barely make it out at all.
I know I’m really late to this, but I’m hoping you still read comments on old-ish posts as they come in. XML is awful to write by hand, so don’t require your artists to do it. Make a nice little editor tool that knows the schema and what kinds of things can go in what kinds of fields. Have it validate its input so that when your artists make typos there’s a better chance they get caught, and then make it spit out XML in all its verbosity.
In fact, you don’t have to write the little editor program yourself. There are already programs out there that will take a schema and auto-generate generators. I can’t recommend any because I don’t really do XML-y work, but Google ought to be able to.
I think that you absolutely need a way to specify “Exactly one of the following:”
The golden example will be “We need this boss to drop one level 5-10 weapon.”
Then you create a table called Weapon_Tier2, that has a row for each weapon in that level range, and a row (with a small weight) for Weapon_Tier3.
(That's also why you need to have drop tables recurse, so that you can create suitably trivial chances for something to drop well outside of its weight class.)
It also expands well for when the sequel or expansion adds things other than weapons to the drop table; you just add Equipment_Tier2, which weights between Weapon_Tier2, Engine_Tier2, and Shield_Tier2.
Actually, point four can be used to do that. Be a bit clunky, though.
How would you use that to produce always exactly one item from the list, rather than merely on average one?
1) These posts are my absolute favourite thing on the site. I’ve been reading for…wow, for over 6 years now. I don’t comment much, but I just had to point out what a fan I am of your posts about programming.
2) I’m not a programmer, but just as you removed “min” and “max” when it’s exactly one, wouldn’t it also be cleaner to remove “chance” if it’s definitely going to drop?
Cant….take….eyes….away….from….gifs….
I’d be tempted to have something like a list of items and you can guarantee one (or more) items from that list would drop. For example:
{WeaponReward}
{List Drops=”1″ Chance=”100″}
{LaserBlaster}
{MissleLauncher}
{LightningShield}
{/List}
{Cash Min=”0″ Max=”100″ Chance=”100″/}
{/WeaponReward}
That would give you a 100% chance of getting only one of those three weapons, plus 0-100 cash.
Tying swan song attacks like projectiles into the drop system sounds a little hacky. Something that works really well until something terrible you didn’t think of comes up a couple months later. Or if you decide to change the drop system later, it may break swan songs (e.g. if you decide for balance reasons enemies can only drop one thing, suddenly it can’t fire missiles on death anymore without refactoring).
But it’s game dev, sometimes these things happen (and I know it was just a thought and not a done deal in the proposal).
If you like XML, use JSON. It’s the same thing for 99% of all applications, but just less verbose.
JSON is so much harder to parse in a language like C++ that it may as well be impossible. Or so I’ve been told by people trying to do it.
In C#, java and probably C++, you should not parse XML, html or JSON by yourself. There are good libraries for that which allow you to write a code which will be more readable, with fewer bug and probably faster than the hack that an average programmer will do in a few hours.
Wow, I would have figured you’d have a better understanding of chance, Shamus. It doesn’t matter whether you roll chance or number first for your imps, because it’s commutative. (If you roll number first, you can still roll 0.)
Also, 2d6 isn’t a bell curve, it’s triangular :P
And man, reading this now is uncanny. I’m screwing around with modding Oblivion, and have just been working heavily with the levelled lists. (And really missing the “drop everything” flag from FO3, b> | https://www.shamusyoung.com/twentysidedtale/?p=29572&replytocom=996596 | CC-MAIN-2020-10 | refinedweb | 7,746 | 68.3 |
Create a test program containing a memory error so we can see how to debug it with Electric Fence. (Strictly speaking, the bug below is a heap buffer overrun rather than a leak; Electric Fence catches out-of-bounds accesses, not leaks.)
vim efence.c
=======
#include <stdio.h>
#include <stdlib.h>
int main()
{
char *ptr = (char *) malloc(1024);
ptr[0] = 0;
/* Now write beyond the block */
ptr[1024] = 0;
exit(0);
}
Now, we will need to install electric-fence package on ubuntu as,
$ sudo apt-get install electric-fence
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following NEW packages will be installed:
electric-fence
0 upgraded, 1 newly installed, 0 to remove and 261 not upgraded.
Need to get 21.6 kB of archives.
After this operation, 88.1 kB of additional disk space will be used.
Get:1 trusty/universe electric-fence i386 2.2.4 [21.6 kB] Fetched 21.6 kB in 1s (21.5 kB/s)
Selecting previously unselected package electric-fence.
(Reading database … 209879 files and directories currently installed.)
Preparing to unpack …/electric-fence_2.2.4_i386.deb …
Unpacking electric-fence (2.2.4) …
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Setting up electric-fence (2.2.4) …
Processing triggers for libc-bin (2.19-0ubuntu6.6) …
~/Desktop/devlab/c/linux$
Now, let’s first compile the program without linking to Electric Fence:
~/Desktop/devlab/c/linux$ gcc -o efence efence.c
~/Desktop/devlab/c/linux$ ./efence
So, when we execute the binary without linking Electric Fence, no error is reported; the out-of-bounds write goes unnoticed.
Now, let’s link Electric Fence:
~/Desktop/devlab/c/linux$ gcc -o efence efence.c -lefence
~/Desktop/devlab/c/linux$ ./efence
Electric Fence 2.2 Copyright (C) 1987-1999 Bruce Perens <bruce@perens.com>
Segmentation fault (core dumped)
Here, as we see, the binary now segfaults when executed, which tells us Electric Fence detected the out-of-bounds write. To identify the exact line of code, we will use gdb as below:
~/Desktop/devlab/c/linux$ gcc -g -o efence efence.c -lefence
~/Desktop/devlab/c/linux$ gdb efence
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2)
Reading symbols from efence…done.
(gdb) r
Starting program: ~/Desktop/devlab/c/linux/efence
[Thread debugging using libthread_db enabled] Using host libthread_db library “/lib/i386-linux-gnu/libthread_db.so.1”.
Electric Fence 2.2 Copyright (C) 1987-1999 Bruce Perens <bruce@perens.com>
Program received signal SIGSEGV, Segmentation fault.
0x08048596 in main () at efence.c:8
8 ptr[1024] = 0;
(gdb) | https://www.lynxbee.com/linux-debugging-using-electric-fence-for-debugging-memory-leaks/ | CC-MAIN-2018-22 | refinedweb | 392 | 61.73 |
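Two related Electric Fence knobs are worth knowing (both are documented in the efence man page; the commands below assume the efence binary built above). By default the guard page sits after each allocation, which is why the overrun at ptr[1024] faults. Underruns and use-after-free need their own settings:

```shell
# Put the guard page *before* each allocation instead of after it,
# so an underrun such as ptr[-1] = 0 faults immediately:
EF_PROTECT_BELOW=1 ./efence

# Keep freed pages protected so that reads/writes after free() fault:
EF_PROTECT_FREE=1 ./efence
```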
Visual Studio Hacks 437
Jim Holmes writes "Microsoft's Visual Studio is an elephant of an IDE. It's got tremendous power and adaptability, but it's difficult to dig through all the less-than-helpful documentation. It's also very tough to figure out which of the many available add-on tools are worthwhile to add. Visual Studio Hacks by James Avery is a terrific reference for helping get the most out of Visual Studio." Read on for the rest of Holmes' review.
Disclaimer: James is a friend who's helped me with starting a developers group, and I'm also working on an open source project with him. The possibility exists that I may work on a paying project with him at some time in the future; however, I haven't had any financial dealings with him so far. (Other than I still owe him a beer for coming to speak at one of our group's meetings.) For what it's worth, I spent my own money to buy this book from Amazon. End Disclaimer.
Avery's book is great both for new users of Visual Studio as well as the more experienced developer. Also, readers won't have to worry about buying a book which will be outdated when Microsoft releases its next version of Visual Studio in November. Tips and tricks are included for Visual Studio versions 2002, 2003, and 2005.
VS Hacks spreads 100 "hacks" across 13 sensibly delineated chapters. Each hack is clearly marked with its number in a blue box at the upper, outer corner of each page. Hacks are also marked with a thermometer icon representing the hack's relative complexity. One of my few complaints about the book is that the moderate and expert icons look too similar - but frankly I ignore these icons anyway, so the criticism's most likely wasted.
The introductory chapters on projects and solutions, navigation, and editor usage aren't introductory in skill level. Avery covers these topics in great depth, diving down to some useful, but less-than-obvious settings in VS's environment. Examples of this would include Hack #2: Master Assembly and Project References, where Avery shows how to add additional assemblies to the Add Reference dialog's list of .NET assemblies. This is a timesaver if you've got custom libraries you make frequent use of; adding the assemblies to the default list saves having to use the Browse button to search for the files every time you need to add them.
Some of the most uninteresting drudgework in development involves writing code for basic software elements such as business entities or data access layers. It's repetitive, it's template-like material, and it's boring. Documenting such work is every bit as tedious.
Hack #50 covers using CodeSmith to generate code via templates. Other hacks detail tying UML into the development process. Hack #81 covers using Visio for Enterprise Architects to generate code from UML diagrams. Hack #82 covers the opposite of that process: generating class diagrams via Visio's reverse engineering support. (UML's capable of much, much more than the simple drudgework of business entities or data access layers, and these hacks shouldn't be confused with anything more than a cursory introduction of how to tie UML via Visio into Visual Studio.)
One of the most useful sections is Chapter 5: "Debugging." This chapter focuses on getting the most out of Visual Studio's debugger capabilities. These hacks are critical helpers to good developers effectively use Visual Studio's debugger.
Avery covers the basics of setting up breakpoints, diving down to various options such as setting how often to break on specific break points, or setting conditional break points. He then moves on to troubleshooting breakpoints in Hack #37, and there's also great coverage on using Visual Studio to debug scripting code inside a browser session, working with SQL server, and attaching to a running process or one that's just about to crash.
I found the best content of this book in hacks focusing on making the most of tools both inside and out of Visual Studio. Hack #79 is a great section detailing how to stress test web applications using Visual Studio Enterprise Architect's Application Center Test. This hack makes it easy for readers to understand how to get detailed stress testing on a web application. Along this same line, Hack #80 shows how to make use of the Dotfuscator tool to obfuscate .NET assemblies to protect them from modest efforts at reverse engineering. (Like Java, .NET assemblies can be disassembled, revealing all your hard work and intellectual property.)
Other gems in this same arena include tools for running and debugging unit tests inside Visual Studio (#93), testing regular expressions (#100), and using tools which automatically generate documentation based on naming conventions in the source code (#69).
The mechanics of this book are great. The Table of Contents breaks down each chapter by its individual hacks, and the index is very detailed and clear. I also like how hacks are listed on the top of each page, making it quick to find something if you know the hack's number or name and don't want to fuss with the table of contents.
My sole complaint about the book (aside from the annoying thermometer icons I've already mentioned) is that it's not always clear which add on tools work with which version of Visual Studio.
The author maintains a website specifically for this book, complete with code and tool downloads. RSS feeds are also available to monitor any updates the author makes.
You can purchase Visual Studio Hacks from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Visual Studio hacks? (Score:3, Funny)
Interesting review (Score:4, Informative)
But informative none the less.
What about 6.0? (Score:2)
The "Hack" Culture (Score:3, Interesting)
Is this because people's needs are growing faster than industry's ability to provide them?
Re:The "Hack" Culture (Score:2)
It's probably that often, the industry doesn't know what people's needs are.
Re:The "Hack" Culture (Score:2)
Re:The "Hack" Culture (Score:5, Insightful)
In this case, most of the "hacks" look like normal VS features that many people are unaware of. These sorts of things are better described as "tips", but that hardly draws any attention now, does it?
If it was called How To Boot Your Walkman With Emacs, now that would be a hack.
Ultimate Killer App (Score:5, Informative)
Until Linux gets an IDE at least 75% as good as MSDev, top-notch large scale applications for Linux will remain few and far between.
Re:Ultimate Killer App (Score:2)
I think VS 2003 was lacking in a few areas, namely refactoring tools. Fortunately VS 2005 has refactoring tools integrated into it nicely.
Re:Ultimate Killer App (Score:4, Interesting)
I don't think a GUI platform can call itself complete until it's got an IDE that's worthy for programming.
The only one I've seen so far for Linux that's up to par (and just barely) is KDevelop, which is entirely useless to you if you don't use Qt, like myself.
And yes, I know about a lot of the alternatives, they just all suck so bad they aren't worthy of mentioning by name. Eclipse is better than most, but is java, and slow....
Re:Ultimate Killer App (Score:2, Insightful)
I disagree... I'm not a fan of your monolithic IDEs at all. My GUI is an IDE:
These tools combine into an "IDE" that is my desktop. I have the best-in-class for ev
Re:Ultimate Killer App (Score:5, Interesting)
Go take a look at what Visual Studio can do, you will see many features(incredibly powerful Intellisense being only one of them) that will save you time and sweat.
This message provided courtesy of a programmer who used to be a big fan of XEmacs et al. until he discovered what Visual Studio (and other good IDEs) can do.
Re:Ultimate Killer App (Score:3, Insightful)
Re:Ultimate Killer App (Score:2)
Re:Ultimate Killer App (Score:4, Funny)
Re:Ultimate Killer App (Score:5, Interesting)
Re:Ultimate Killer App (Score:4, Insightful)
Besides, what if you know most of the signature, but can't remember which order the args come in? Don't really need the comments for that.
Finally, it just speeds up typing. Instead of typing out a whole name -- and very easily making a typo -- you usually just have to type a few characters.
Re:Ultimate Killer App (Score:3, Insightful)
Perhaps you should actually try intellisense before you talk out your ass. Intellisense does show you the comments associated with the method you're calling, as well as the return type, and the comments associated with each of the individual parameters. As you move through the parameters, intellisense updates the tooltop to show you the comments associated with each input parameter separately. People can blast
Re:Ultimate Killer App (Score:5, Insightful)
> its definition in the source so I can see the comments on the inputs.
Intellisense in VS *DOES* shows you the comments from the source. Why do you think that the source comments have to be entered in XML?? Why do think there's a whole thread of people saying how good it is even though it's an MS product? If you're not seeing the comments, then you haven't been following the templates or RTFM.
I hate to say it but it's fucking amazing - especially in VS.NET 2005.
The thread title is correct - it's Microsoft's Killer App.
> Besides, if you divide up the work correctly, you won't be using more
> than a few percent of the functions in a program,
Not all companies are big enough to "divide up the work". I have to do the entire project on my own. Which after 6 months is a LOT of functions. I guarantee it would take nearly everyone except genius autistic programmers more time to write large projects if if VS.NET didn't have any intellisense.
I can't see how it can be annoying. If you type quickly without pausing then it doesn't even appear. If you really do know what you're supposed to be typing then you probably won't pause because. The only reason I can think of to pause halfway though typing a function call is if you're having to go and find the function definition because you can't remember what the next argument is
Re:Ultimate Killer App (Score:3, Insightful)
Re:Ultimate Killer App (Score:2)
Is "intellisense" the thing that gives you a dropdown of method names and similar? If so, there are multiple existing ways to do similar things in both (x)emacs and vim (although I'm not familiar with the procedure for doing it in the latter, so a vim master will have to chime in). There's dynamic abbrevs which work with almost everything and, while not context sensitive, are generally good enough unless you don't know and haven't typed the method. But it works in any mode, so if you're typing a letter
Re:Ultimate Killer App (Score:3, Insightful)
Not being context sensitive is simply unacceptable for example. When you start using namespaces, many classes,
Regarding This is the power of a programmable editor. If there's a feature you want, you can add it. If there's a feature you want to change, you can modify it. If there's a misfeature, you can get rid of it.
Hint #2: You can write plugins for VStudio to
Re:Ultimate Killer App (Score:2)
Re:Ultimate Killer App (Score:2)
On the other hand, bad (as most of the free IDE's out there are in my experience) are exactly the opposite... they waste time, they lose code when they crash, they generally are a hassle... bu
Re:Ultimate Killer App (Score:2)
To say that VS.NET is the "ultimate killer app" sounds so outrageous that I'm tempted to think that a little astro-turfing might be going on here. VS.NET has a lot of features but it also has a lot of bugs. It's very slow when working with a large VB.NET solution. It also has periodic interop issues with VSS.
When VS.NET is acting up like that, I am grateful that I can just open up a text editor and edit the source directly like that.
Re:Ultimate Killer App (Score:2)
Autotools are a crime when it comes to building projects.. I can't tell you how many hours I've spent tweaking those scripts until I heard about Scons, and it's not even an ideal solution; A good IDE allows you to click one button and have your software project built. Wanna customize the build? Don't worry about memorizing those archaic tags,
Re:Ultimate Killer App (Score:2)
Re:Ultimate Killer App (Score:2)
Re:Ultimate Killer App (Score:2)
For me, a good intellisenseish feature in IDE A will cause me to use it over IDE B (without such a feature) in a heartbeat *no matter what* other feature IDE B has. I don't care if it runs like a dream, has the best debugger in the world, or whatever. If A has intellisense and B doesn't, about the only feature B could add that would make me switch is a telepathic "I'll write your code for you" feature.
Re:Ultimate Killer App (Score:2)
Perhaps it's the projects I've been placed on at work, but I always find myself fighting Intellisense when I'm doing windows coding. It'll pop up with the wrong things when I don't want to be bothered with it, and it'll NOT pop up when I don't know how to use something and just want to see an interface list.
Other times I just use gvim, for my solaris, mac, vms, linux and several web development projects. I fin
Re:Ultimate Killer App (Score:3, Interesting)
I've done a lot of Java work recently in Eclipse for instance, and think that the Intellisense in Eclipse is fantastic.
Re:Ultimate Killer App (Score:2, Insightful)
Re:Ultimate Killer App (Score:3, Funny)
Re:Ultimate Killer App (Score:3, Insightful)
Re:Ultimate Killer App (Score:2, Interesting)
I have to disagree there. My experience is mainly in Java development, but I have done some VC++ and VC#. In my experience, VS.NET 2003 simply doesn't compare to the likes of JBuilder and Eclipse in terms of features (and neither of them are perfect by any means). I'm not saying that it's a *bad* IDE, but every time I use it the lack of refactoring and code inspection tools drives me batshit insane in pretty short order.
To be perfectly honest, if you
Re:Ultimate Killer App (Score:2)
I wish I could easily use JBuilder to write apps for a PocketPC platform that can interface with PDA BCR hardware DLLs, but C# in VS2K3 makes it really easy.
I have a copy of JBuilder 2005 still in the shrink wrap that procurement finally sent me, and I get teary eyed wishing I could use it instead.
But for this sort of thing, Borland just can't come close.
Re:Ultimate Killer App (Score:2, Funny)
I have a fairly decent system, but VS drags it down to incredible slowness.
When I exit I have to re-boot to get the speed back.
Does any one else have this issue?
Seriously I'd like to know.
Re:Ultimate Killer App (Score:2, Informative)
>MSDev, top-notch large scale applications for
>Linux will remain few and far between.
I don't understand why you were modded as insightful. I'm not trying to be sarcastic, but how precisely is MS Visual Studio a "killer app"? What makes it so impressive in your opinion that no Linux development environment compares with it, or
even as you say, "75%" as good?
For example, my development environment of choice is xemacs which I consider hands down the ultim
Re:Ultimate Killer App (Score:2)
Your other points I agree with wholeheartedly. (I'm currently getting into Eclipse though, as I'm digging the refactoring stuff, and I rarely find myself developing in non-windowed environments or using the god-like emacs features that no one else has or probably ever will.)
VS is differen
Re:Ultimate Killer App (Score:3, Insightful)
You haven't tried kdevelop or anjuta then, I assume? Not being familiar with VS (too poor, lol) I am not sure how they stack up feature wise, but I bet that both meet your "75%" criteria.
top-notch large scale applications for Linux will remain few and far between.
You mean like Open Office, Mozilla or Blender3d? We have the apps; it's the mindshare we're lacking (if we're lacking anything, which I doubt now that we have corporate sponsorship from novell a
Re:Ultimate Killer App (Score:2)
I just tried it, and it doesn't even seem to do auto code indenting. If i type if(a==b) { , any half decent IDE should indent the following line. KDevelop didn't. (I didn't look for an option to turn this on, I'll be fair.)
Re:Ultimate Killer App (Score:2)
Read: I don't know what I'm talking about, but I will comment anyways...
Re:Ultimate Killer App (Score:2)
ROFLMAO.
Good one.
But you do bring up a good point that we need more - and better - IDE's for Linux development.
Re:Ultimate Killer App (Score:2, Insightful)
etc. It gives you something to read while you're looking for your code.
God help you if you accidentally delete one of those comments.
Delphi (for example) keeps your source code as you wrote it. Like every freakin other IDE in the universe.
Oh, and how many versions of msvcr71.dll are there? (Seriously, I'm asking...MSDN's dll help database claims the file doesn't exist.)
Re:Ultimate Killer App (Score:5, Insightful)
Google search for visual studio bug [google.com] - 840,000 results
Google search for eclipse bug [google.com] - 1,480,000 results
Google search for emacs bug [google.com] - 1,170,000 results
Google search for slashdot bug [google.com] - 1,460,000 results
Google search for bright purple elephant bug [google.com] - 131,000 results
Congratulations! You have given us a meaningless statistic.
Re:Ultimate Killer App (Score:3, Insightful)
By the way, does Visual Studio ship with the source to the runtime libraries so's you can step the debugger through them? No? (Delphi does.)
Actually, VS also ships with sources for the Runtime Library - go check your install again.
Re:Ultimate Killer App (Score:2)
I don't think market penetration == killer app. It is certainly popular though.
>> I use it even when developing for Linux.
I usually do things the other way around. compile first with GCC, then under VS 6.0. IMHO the editors in linux are better, the tools faster and it's easier to write to ANSI standard.
VS does some things well, especially with bigger projects, but for most stuff it's a distant second choice.
not that again (Score:2)
Other people prefer Eclipse, and arguably, Eclipse has long surpassed Visual Studio both in terms of UI and in terms of functionality.
Yet other people find Emacs a far better tool.
And, frankly, compared to the environments people had 20 years ago for Smalltalk and Lisp, all those tools still seem extremely cumbersome.
Visual Studio may be Microsoft's "ultimate killer app", but that's because people like you don't know how to work with anything else,
Browse info for g++? (Score:4, Insightful)
Re:Browse info for g++? (Score:4, Interesting)
Torrent Link (Score:3, Funny)
Recommend: Best Kept Secrets in .NET (Score:4, Informative)
Best Kept Secrets in .NET
by Deborah Kurata
This has plenty of good tricks for visual studio:
Chapter 1 - Hidden Treasures in Visual Studio
Chapter 2 - Doing Windows Forms
Chapter 3 - Code Tricks
Chapter 4 - Much ADO
Chapter 5 - Defensive Development
The biggest annoyance with DevStudio (Score:5, Insightful)
Even with this annoyance it's still better than help in XCode on the Mac. XCode 2.x is a big improvement but it's still hopeless compared to MSDE.
A second annoyance to DevStudio is the sheer mess of dockable windows. VC98 had it just about under control but since DevStudio 2002 it has become a disaster zone of tabs, splitters, pushpins, floaters and toolbars. Just trying to get all the relevant information onto the screen is hard enough. The pushpin model just works badly - either you pin a window to a frame or it annoys you by floating in and out at just the wrong point in time such as when you're mousing around.
A final irritation is that DevStudio is extremely primitive compared to a lot of Java suites. Eclipse is hopeless for visual design but it kicks DevStudio around the shop for sheer coding. Being able to hit Shift+Ctrl+R and rename all references to a class or variable everywhere in Eclipse is mindbogglingly useful. While I expect the next incarnation of DevStudio will allow you to rename a class, it's notable by its absence in the current releases. It's not like Eclipse just introduced this feature since JBuilder has had it for years.
Re:The biggest annoyance with DevStudio (Score:2)
And I also find the dockable windows make it easy to get lost, or hard to figure out which one disappeared.
Renaming references quickly is, admittedly, not very good, especially if you have code snippets that use them - it would be nice if it could at the v
Re:The biggest annoyance with DevStudio (Score:4, Interesting)
Ever try the Visual Editor for Eclipse? [eclipse.org]
It's quite nice.
Re:The biggest annoyance with DevStudio (Score:3, Informative)
Actually it does. Set the layout manager to null and you can put components at any location you want and set the size to anything you want. Of course, if you've got the ability to change font sizes (e.g. to support people with visual difficulties, which is a legal requirement in some places) then absolute layouts suck horribly...
Re:The biggest annoyance with DevStudio (Score:5, Interesting)
Also, you can now get content via the Internet or through the Local Help system, which means that you shouldn't ever run into "please insert a cd" messages again.
As far as our Tool Window situation goes, we have a way to go on improving this. I think we're doing better in terms of window management in Whidbey (VS 2005) than we did in previous releases. One cool new feature, the IDE Navigator (Ctrl+Tab) lets you navigate between every open document and tool window in the IDE in a fairly reasonable manner.
Also, we've added in either 5 or 6 refactorings to VS: you can perform operations like Extract Method or Rename Symbol now through that interface.
Finally, if you have feature requests or bug reports please post them on our Product Feedback Center. I happened to run across this post, but there's no guarantee that a Slashdot post will ever be seen by the team that owns a specific part of VS. The MSDN Product Feedback Center [microsoft.com] will let you submit issues or suggestions to us directly and will guarantee they're routed to the right people.
Cheers -- Aaron
Oh yeah, and I feel like I owe a VS "hack" now. When you're in the editor you can enable incremental search by pressing Ctrl+i and then typing a string to search for. The next instance can be jumped to by pressing Ctrl+i again.
Re:The biggest annoyance with DevStudio (Score:3, Interesting)
While we're there, we should probably thank him for designing and creating one of the most popular and successful programming languages on the planet.
If we're being fair
;-)
Re:The Delphi parser (Score:3, Interesting)
However, since Eclipse is written in java, the jvm is already running, so it can just call up the appropriate javac classes and run the compiler in-process, removing the latency of starting up a new jvm. That's most likely why Eclipse is much more snappy at compiles (off the top of my head; I haven't written any Java in a while).
Alternatively they could be using IBM's Jikes compiler, which is written in C++.
Is it all about other tools? (Score:4, Interesting)
A book on Visual Studio hacks has a lot going for it, but the examples in the review mostly sound like discussion of add-in tools, rather than VS itself.
Many people don't take advantage of even simple things like customising autoexp.dat for debugging, and don't know about undocumented UI tweaks like displaying a marker line at column N. Many of these little touches are what makes VS better than the alternatives for a lot of jobs, and why Microsoft never makes more of them I don't understand. Are things like the two examples above covered in the book?
Tomato... (Score:3, Informative)
Hacking add-ins in perl (Score:3, Interesting)
But has anyone noticed what a complete mess the Visual Studio add-in API is ? A hybrid mixture of DLL export functions and nearly-COM like objects... very 1993... I think it must count as the biggest hack in Visual Studio.
zerg (Score:3, Informative)
Related book. (Score:3, Interesting)
Most Requested VS Hack (Score:2)
That would be such a productivity improvement!
Re:Most Requested VS Hack (Score:2, Insightful)
ASP.net and sourcesafe. (Score:3, Informative)
Just my $1 and 2 cents.
The one VS.Net 2003 plug-in I cannot live without (Score:3, Interesting)
It adds real-time syntax highlighting, additional keyword coloring, superb code navigation features, and code refactoring features, among many other very useful items.
If you are using Visual Studio and doing any C# coding at all, you need to check out Resharper, from JetBrains... [jetbrains.com]. I frankly don't know how anyone lives without it (or at least something similar).
Version 2.0 (due out late this year) will also support 2005, Visual Basic, and ASP.Net coding
Re:Your number one IDE (Score:2)
(Okay, I know that it's not actually fully compliant. For instance, until 7.1, there was no support for partial template specialization. However, there is, to my knowledge, one fully compliant compiler+library implementation, and VC++ is as good as most now.)
Re:Your number one IDE (Score:3, Interesting)
But even when it comes to VS2003, last time I checked C# and
Portability (Score:5, Interesting)
Erm... We write highly portable libraries in C and C++ at work. They have to compile on Windows, Linux, Solaris, HP-UX, AIX, MacOS X, and often under several different toolsets on each OS.
Our whole dev team is free to develop using whatever software we find helpful. We have GUI guys and CLI guys, Windows guys and Linux guys, emacs guys and vi guys, etc.
The one thing almost everyone has in common is that they use Visual C++ as their primary IDE. That's not because we're ill-informed or haven't tried the alternatives, it's because most of us think it's the best IDE available to help us do our jobs.
Re:Windows programming is purposely vague.. (Score:5, Informative)
Re:Windows programming is purposely vague.. (Score:2)
Re:Windows programming is purposely vague.. (Score:2)
In the past, I didn't find either to be terribly useful, but (I hate to say it) Microsoft got it right this time. With MSDN for VS.Net, I actually find what I need in the documentation, rather than having to use google.
Re:Windows programming is purposely vague.. (Score:4, Insightful)
Re:Windows programming is purposely vague.. (Score:2)
Re:Windows programming is purposely vague.. (Score:2, Informative)
Re:Only on Slashdot (Score:2)
And as to the parent troll about Microsoft documentation, well that might have been true ages ago I find that most Windows API stuff is easy enough to find on MSDN, the documentation for
Re:Book should read (Score:2, Funny)
Re:One more... (Score:2)
Re:Yeah, but.... (Score:2)
Re:Yeah, but.... (Score:2, Informative)
# Begin Project
# PROP AllowPerConfigDependencies 0
# PROP Scc_ProjName
CPP=cl.exe
MTL=midl.exe
RSC=rc.exe
Change "cl.exe" to your favorite C++ compiler.
Re:Yeah, but.... (Score:3, Insightful)
I did something similar for Visual Studio 6 when I was working on a project for a couple of embedded platforms.
VS
It just depends on how much time you want to take.
~X~
Re:What does Microsoft use? (Score:3, Informative)
Re:Ah, the pity... (Score:3, Insightful)
*sigh* The trolls do rush to these threads. Meanwhile we who actually try things before denigrating them have found an extremely capable IDE, enjoying the capability to code in/edit Ada, APL, ASml, Caml, Cobol, Delphi, Forth, Eiffel, Fortran, Haskell, Lisp, Lua, Mercury, Mixal, ML, Mondrian, Nemerle, Oberon, Pascal, Perl, PHP, Prolog, Python, RPG, Ruby, Scheme, | http://books.slashdot.org/story/05/08/04/2010259/visual-studio-hacks | CC-MAIN-2013-48 | refinedweb | 5,061 | 62.07 |
08 September 2010 19:05 [Source: ICIS news]
WASHINGTON (ICIS)--The US Department of Energy (DOE) on Wednesday lowered its forecasts for oil and natural gas prices for the balance of this year and 2011, citing reduced expectations for US economic growth.
The department said it expects the average price for
For 2011, the department expects global crude prices will average $82/bbl, a forecast that was $2 lower than its month-earlier prediction.
In its monthly short-term energy outlook (STEO), the department’s Energy Information Administration (EIA) said that it projected Henry Hub natural gas spot prices to average $4.54/MMBtu for the full-year 2010, down by $0.15 from last month’s forecast.
In 2011, the Henry Hub average spot price was expected to be $4.76/MMBtu, the administration said, lower by $0.22 from its August outlook of $4.98/MMBtu.
For both oil and natural gas pricing, the department said that “These projections reflect updated expectations for economic activity, with forecasted US real gross domestic product (GDP) growth of 2.8% in 2010 and 2.3% in 2011”.
In last month's outlook, EIA had predicted US GDP expansion for this year at 3.1% and at 2.7% in 2011.
The EIA’s lowered pricing forecasts for US crude and natural gas came in the wake of Commerce Department reports showing that second quarter US GDP growth was only 1.6% instead of the earlier output estimate of 2.4%.
In addition, the administration said it is lowering its outlook for recovery-driven global demand for energy.
“The 2011 world oil-consumption-weighted real GDP growth rate is also lowered, to 3.3% from the 3.6% level in last month’s outlook,” the EIA said.
However, the department indicated that it did not expect moderating oil and natural gas prices to last long.
“Despite a slight reduction in forecast global demand growth and the drop in world oil prices in recent weeks, the projected gradual reduction in global oil inventories over the forecast period should lend support to firming oil prices,” the administration said.
Similarly, the EIA indicated potential pressure for US natural gas prices, noting that domestic gas consumption is expected to increase 4% this year compared with 2009 while US output of natural gas is forecast to be 2.1% ahead of the 2009 level.
However, while domestic
LNG imports are predicted to average 1.25bn cubic feet (bcf) per day this year, revised downward from last month’s outlook for 1.35 bcf. In 2011, LNG imports are expected to ease to 1.32 bcf compared with the month-earlier forecast of 1.42 bcf.
This class implements a real-time state machine executor. More...
#include <rtt/StateMachineProcessor.hpp>
This class implements a real-time state machine executor.
The default policy of the StateMachineProcessor is to call StateMachine::execute() of all loaded state machines in each step(). The parent state machines are executed before the children.
If you want to change this policy, subclass the StateMachineProcessor and override the virtual functions, such as step().
Definition at line 68 of file StateMachineProcessor.hpp..
Delete a deactivated State Machine and all its children. State Machine and all its children.
The method that will be executed once when this class is run in a non-periodic thread.
The default implementation calls step() once.
Reimplemented in RTT::Timer.
Set the task this interface is run in.
A zero means no task is running it.
Reimplemented in RTT::ExecutionEngine.
Unload a deactivated State Machine and all its children. | http://www.orocos.org/stable/documentation/rtt/v1.12.x/api/html/classRTT_1_1StateMachineProcessor.html | CC-MAIN-2013-48 | refinedweb | 149 | 53.47 |
import X between submodules in a package
Discussion in 'Python' started by Donn Ingle, Dec 19, 2007.
Related threads:
- Python submodules and name imports, by Frank Aune, Aug 23, 2007 (1 reply, 797 views; last post by Pádraig, Aug 23, 2007)
- Noddy with submodules?, by Torsten Mohr, Sep 7, 2009 (1 reply, 284 views; last post by Gabriel Genellina, Sep 8, 2009)
- Automatic import of submodules, by Massi, Nov 25, 2011 (4 replies, 237 views; last post by Shambhu Rajak, Nov 28, 2011)
- Move modules to submodules question, Jan 11, 2013 (1 reply, 102 views; last post by Peter Otten, Jan 11, 2013)
The settings service and Plasma are both complex programs, so combining them increases the chances that a bug in one can crash the other. So we put it in a different process, forcing one layer of indirection already.
Meanwhile Frederik Gladhorn and I were refactoring the storage layer for Connection settings so that it is independent of NetworkManager. One of the good things about NetworkManager's settings is that they are so comprehensive the classes I developed to configure them cover all of wicd's settings too. Frederik namespaced the general classes while I moved the DBUS code that is specific to NetworkManager 0.7 out of the libs/ directory. Since it is generated automatically from some .kcfg files by a modified kconfig_compiler and then extra stuff is patched into those files, this was quite a lot of work.
Our students from the University of Bergen, Anders, Peder and Sveinung, were busy working on the mobile broadband improvements for their degree group project. This includes a set of DBUS bindings for the ModemManager auxiliary interface of NetworkManager, which were used to successfully send an SMS and will support useful functions like retrieving cellular signal strength, a set of Qt widgets around libmbca, taking the pain out of configuring cellular data connections, and a test harness. We hope they will continue with KDE development after they graduate.
The status of Network Management as of Sunday 7 June then is that it doesn't even compile. I'm working on remedying that as soon as possible. If you do want to use Network Management from SVN, take a safe revision like r978079 until you hear otherwise.
We'd like to thank the Trolls for being great hosts and the KDE eV for sponsoring this sprint!
We've continued planning this morning. The big goals for this meeting was to 'get Network Management finished' and 'make it usable on non-NetworkManager systems' but our discussions last night showed us that the current complexity of the applet prevents both goals - it takes me several days of getting up to speed with the code before I dare to try to code it and it's deterring Frederik from making significant changes. So we identified all the pieces and started juggling them last night over pizza until they landed in a way that makes sense.
The big picture is that most of the complexity will move from the Plasma applet into the KDED module. This module will abstract different network management systems by being replaceable. The module provides a simple list of the things to show in the applet's popup. Configuration UI and stored settings are to be shared - we think that the current (NM-derived) settings schema is comprehensive enough.
There's a temptation to write an über-system that models everything and allows any number of applet implementations but we're resisting that as it would never be finished. I'm a little bit disappointed that we won't be adding a lot of polish and nice to have features but a sprint is the ideal time to swing a large hammer at hard architectural issues that otherwise would stunt Network Management's growth.. TODOs include cleaning up UI glitches, fixing some exotic VPN types and auth types and deciding how to abstract different backends like wicd and ConnMan.
If you want to help out or just rubberneck, we're in #solid.
Looking forward, I've been tidying up my computers, installing openSUSE Factory (the alpha0 edition before anyone knows when 11.2 will really be done, and before everyone starts breaking things in earnest), deleting dozens of Build Service checkouts that were finished or forgotten about, and purging my unorganised piles into nice clean GTD lists. That's giving me some peace of mind to think about what to do for KDE on openSUSE 11.2. We'll be having an IRC meeting next Wednesday (1700UTC) to coordinate the team's efforts, but I'm starting to think about things I could do myself. Things like a return for KPersonalizer, a KControl-like treeview for System Settings, or helping tame the Plasma Activities/Zooming UI system into something usable. That and of course completing Network Management (oh, did I let slip the name we chose?). If anyone is already working in those areas, please let me know.
EDIT: Oh and I should point out that openFATE is of course open for business and waiting to receive your ideas. Find out how to use openFATE here..
Before I move it and start telling people about it, I want to decide on a final name. This is important as it's not just what appears in the UI, but also determines the names of files like config files for connections, KNotify settings, translation catalogs, none of which you want to mess about with after a release. So I'm looking for suggestions for and opinions about a good name.
A list is an ordered set of values, where each value is identified by an index. The values that make up a list are called its elements. Lists are similar to strings, which are ordered sets of characters, except that the elements of a list can have any type.
Lists that contain consecutive integers are common, so Python provides a
simple way to create them:
>>> range(1,5)
[1, 2, 3, 4]
The range function takes two arguments and returns a list that
contains all the integers from the first to the second, including the
first but not including the second.
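The behavior of range can be checked directly. A small sketch follows; note that it uses modern Python 3 syntax, where range returns a lazy sequence that we wrap in list() to see its elements, whereas in the Python 2 used by this book's examples range returns a list directly:

```python
# Two-argument form: from the first value up to, but not including, the second.
first_form = list(range(1, 5))   # [1, 2, 3, 4]

# One-argument form: starts at zero.
one_arg = list(range(4))         # [0, 1, 2, 3]

# Three-argument form: the third argument is the step between successive values.
stepped = list(range(0, 10, 2))  # [0, 2, 4, 6, 8]

print(first_form, one_arg, stepped)
```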
Finally, there is a special list that contains no elements. It is
called the empty list, and it is denoted [].
With all these ways to create lists, it would be disappointing if we
couldn't assign list values to variables or pass lists as arguments to functions. We can.
The syntax for accessing the elements of a list is the same as for accessing the characters of a string: the bracket operator ([]). The expression inside the brackets
specifies the index. Remember that the indices start at 0:
print numbers[0]
numbers[1] = 5
The bracket operator can appear anywhere in an expression. When it
appears on the left side of an assignment, it changes one of the
elements in the list, so the one-eth element of numbers, which
used to be 123, is now 5.
Any integer expression can be used as an index:
>>> numbers[3-2]
5
>>> numbers[1.0]
TypeError: sequence index must be integer
If you try to read or write an element that does not exist, you
get a runtime error:
>>> numbers[2] = 5
IndexError: list assignment index out of range
If an index has a negative value, it counts backward from the
end of the list:
>>> numbers[-1]
5
>>> numbers[-2]
17
Here numbers[-1] is the last element of the list and numbers[-2] is the one before it; numbers[-3], however, doesn't exist, and trying to use it causes a runtime error.
The function len returns the length of a list. It is a good idea to use this value as the upper bound of a loop instead of a constant, so that if the size of the list changes, the loop still works:
horsemen = ["war", "famine", "pestilence", "death"]
i = 0
while i < len(horsemen):
print horsemen[i]
i = i + 1
Although a list can contain another list, the nested list still counts as a single element, so the length of this list is four:
['spam!', 1, ['Brie', 'Roquefort', 'Pol le Veq'], [1, 2, 3]]
As an exercise, write a loop that traverses the previous
list and prints the length of each element. What happens if
you send an integer to len?
in is a boolean operator that tests membership in a sequence.
We used it in Section 7.10 with strings, but it also works with
lists and other sequences:
>>> horsemen = ['war', 'famine', 'pestilence', 'death']
>>> 'pestilence' in horsemen
True
>>> 'debauchery' in horsemen
False
We can use not in combination with in to test whether an element is not a member of a list:
>>> 'debauchery' not in horsemen
True
The for loop we saw in Section 7.3 also works with
lists.
The generalized syntax of a for loop is:
for VARIABLE in LIST:
BODY
This statement is equivalent to:
i = 0
while i < len(LIST):
VARIABLE = LIST[i]
BODY
i = i + 1
The for loop is more concise because we can
eliminate the loop variable, i.
Here is the previous loop written with a for loop:
for horseman in horsemen:
    print horseman
Any list expression can be used in a for loop:
for number in range(20):
    if number % 2 == 0:
        print number
for fruit in ["banana", "apple", "quince"]:
print "I like to eat " + fruit + "s!"
The first
example prints all the even numbers between zero and nineteen.
The second example expresses enthusiasm for various fruits..
The slice operations we saw in Section 7.4
also work on lists:
>>> list = ['a', 'b', 'c', 'd', 'e', 'f']
>>> list[1:3]
['b', 'c']
>>> list[:4]
['a', 'b', 'c', 'd']
>>> list[3:]
['d', 'e', 'f']
If you omit the first index, the slice starts at the beginning.
If you omit the second, the slice goes to the end. So if you
omit both, the slice is really a copy of the whole list.
>>> list[:]
['a', 'b', 'c', 'd', 'e', 'f']
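The negative indices shown earlier for single elements also work inside slices. This short sketch, written in Python 3 syntax rather than the book's Python 2, combines the two ideas:

```python
letters = ['a', 'b', 'c', 'd', 'e', 'f']

last_two = letters[-2:]   # from the second-to-last element to the end
trimmed = letters[1:-1]   # everything except the first and last elements

print(last_two, trimmed)
```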
Unlike strings, lists are mutable, which means we can change
their elements. Using the bracket operator on the left side
of an assignment, we can update one of the elements:
>>> fruit = ["banana", "apple", "quince"]
>>> fruit[0] = "pear"
>>> fruit[-1] = "orange"
>>> print fruit
['pear', 'apple', 'orange']
With the slice operator we can update several elements at once:
>>> list = ['a', 'b', 'c', 'd', 'e', 'f']
>>> list[1:3] = ['x', 'y']
>>> print list
['a', 'x', 'y', 'd', 'e', 'f']
We can also remove elements from a list by assigning the empty list to
them:
>>> list = ['a', 'b', 'c', 'd', 'e', 'f']
>>> list[1:3] = []
>>> print list
['a', 'd', 'e', 'f']
And we can add elements to a list by squeezing them into an empty
slice at the desired location:
>>> list = ['a', 'd', 'f']
>>> list[1:1] = ['b', 'c']
>>> print list
['a', 'b', 'c', 'd', 'f']
>>> list[4:4] = ['e']
>>> print list
['a', 'b', 'c', 'd', 'e', 'f']
Using slices to delete list elements can be awkward and error-prone. Python provides an alternative that is more readable. The del statement removes an element from a list:
>>> a = ['one', 'two', 'three']
>>> del a[1]
>>> a
['one', 'three']
As usual, del handles negative indices and causes a runtime error if the index is out of range. You can also use a slice as an index for del:
>>> list = ['a', 'b', 'c', 'd', 'e', 'f']
>>> del list[1:5]
>>> print list
['a', 'f']
As usual, slices select all the elements up to, but not
including, the second index.
If we execute these assignment statements,
a = "banana"
b = "banana"
we know that a and b will refer to a
string with the letters "banana". But we can't
tell whether they refer to the same string object. To find out, we can use the id function, which returns a unique identifier for an object:
>>> id(a)
135044008
>>> id(b)
135044008
In fact, we get the same identifier twice, which means that
Python only created one string,
and both a and b refer to it.
Interestingly, lists behave differently.
When we create two lists, we get two objects:
>>> a = [1, 2, 3]
>>> b = [1, 2, 3]
>>> id(a)
135045528
>>> id(b)
135041704
So the state diagram looks like this:
a and b have the same value but do not
refer to the same object.
Since variables refer to objects, if we assign one
variable to another, both variables refer to the same object:
>>> a = [1, 2, 3]
>>> b = a
In this case, the state diagram looks like this:
Because the same list has two different names, a and b, we
say that it is aliased. Changes made with one alias affect
the other:
>>> b[0] = 5
>>> print a
[5, 2, 3]
Although this behavior can be useful, it is sometimes unexpected or undesirable.
As an exercise, draw a state diagram for a and b
before and after this change.
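When the sharing shown above is not what you want, a common idiom is to copy the list first. A full slice builds a brand-new list, so changes to the copy leave the original alone. A sketch in Python 3 syntax:

```python
a = [1, 2, 3]
b = a     # alias: b refers to the same list object as a
c = a[:]  # clone: the slice builds a new list with the same elements

b[0] = 5   # visible through a, because a and b are the same object
c[0] = 99  # invisible through a, because c is a separate object

print(a, b, c)
```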
Passing a list as an argument actually passes a reference to the
list, not a copy of the list.
For example, the function head takes a list as an argument
and returns the first element:
def head(list):
return list[0]
Here's how it is used:
>>> numbers = [1, 2, 3]
>>> head(numbers)
1
The parameter list and the variable numbers are
aliases for the same object. The state diagram looks like
this:
Since the list object is shared by two frames, we drew
it between them.
If a function modifies a list parameter, the caller sees the change.
For example, deleteHead removes the first element from a list:
def deleteHead(list):
del list[0]
Here's how deleteHead is used:
>>> numbers = [1, 2, 3]
>>> deleteHead(numbers)
>>> print numbers
[2, 3]
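The distinction here matters: a function can change the caller's list by mutating it, as deleteHead does, but reassigning the parameter name only rebinds the local variable, and the caller never sees that. A sketch of both in Python 3 syntax (the function names are made up for the demonstration):

```python
def empty_by_mutation(lst):
    del lst[:]  # mutates the shared list object in place

def empty_by_rebinding(lst):
    lst = []    # rebinds the local name only; the caller's list is untouched

numbers = [1, 2, 3]
empty_by_rebinding(numbers)
after_rebinding = list(numbers)  # still [1, 2, 3]

empty_by_mutation(numbers)
after_mutation = list(numbers)   # now []

print(after_rebinding, after_mutation)
```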
If a function returns a list, it returns a reference to the list. For
example, tail returns a list that contains all but the first
element of the given list:
def tail(list):
return list[1:]
Here's how tail is used:
>>> numbers = [1, 2, 3]
>>> rest = tail(numbers)
>>> print rest
[2, 3]
Because the return value was created with the slice operator, it
is a new list. Creating rest, and any subsequent changes
to rest, have no effect on numbers.
A nested list is a list that appears as an element in another
list. In this list, the three-eth element is a nested list:
>>> list = ["hello", 2.0, 5, [10, 20]]
If we print list[3], we get [10, 20]. To extract an
element from the nested list, we can proceed in two steps:
>>> elt = list[3]
>>> elt[0]
10
Or we can combine them:
>>> list[3][1]
20
Bracket operators evaluate from left to right, so this expression
gets the three-eth element of list and extracts the one-eth element from it.
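Nested lists of equal length are a common way to represent a matrix: each inner list is one row, and two bracket operators select a row and then a column. A sketch in Python 3 syntax:

```python
matrix = [[1, 2, 3],
          [4, 5, 6]]

element = matrix[1][0]  # row 1, column 0

# Summing every element means traversing rows, then items within each row.
total = 0
for row in matrix:
    for value in row:
        total += value

print(element, total)
```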
The join function is the inverse of split. It
takes a list of strings and
concatenates the elements with a space between each pair:
>>> list = ['The', 'rain', 'in', 'Spain...']
>>> string.join(list)
'The rain in Spain...'
Like split, join takes an optional delimiter
that is inserted between elements:
>>> string.join(list, '_')
'The_rain_in_Spain...'
As an exercise, describe the relationship between string.join(string.split(song)) and song. Are they the same
for all strings? When would they be different?
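One way to explore the exercise is to try a string whose spacing is irregular. split discards extra whitespace, so the round trip reproduces the original only when the words are already separated by single spaces. This sketch uses Python 3 string methods rather than the older string module used in the book:

```python
tidy = 'The rain in Spain...'
messy = '  The   rain\tin Spain...  '

round_trip_tidy = ' '.join(tidy.split())    # identical to tidy
round_trip_messy = ' '.join(messy.split())  # whitespace collapsed to single spaces

print(repr(round_trip_tidy), repr(round_trip_messy))
```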
Warning: the HTML version of this document is generated from
Latex and may contain translation errors. In
particular, some mathematical expressions are not translated correctly. | http://greenteapress.com/thinkpython/thinkCSpy/html/chap08.html | CC-MAIN-2017-47 | refinedweb | 1,323 | 66.57 |
You may or may not already be familiar with Java annotations, as they are very common; if you work with an IDE, it marks the required code with annotations whenever necessary. But this is an ideal article for you if you are looking to create a custom annotation: to the point and straight to the work.
This tutorial is a very short one and to the point. In it you will see an example of how you can create your own annotation in Java.
Before we rush to the code, let us take a look at what annotations are.
Annotations, a form of metadata, provide data about a program that is not part of the program itself. Annotations have no direct effect on the operation of the code they annotate.
That is the official definition from the Java SE docs. In simpler terms, annotations are tags we add to a program which tell us, the compiler, or the JVM about the intention of that program.
You may already know annotations like @Override or @SuppressWarnings, but this article doesn't dwell on the predefined annotations; it covers the syntax with which you can define your own annotations and then use them in the programs you write.
The declaration of an annotation is similar to that of an interface, except that the keyword interface is preceded by the '@' symbol, and the annotation parameters are declared the way you declare methods in an interface. As an example, consider an annotation which can be used to mark the authorship of a class. You can in fact mark anything from the class itself to a data field or method with this annotation.
public @interface Author {
    String author();
    String date();
    float revisionNumber() default 1.0f;
    String purpose();
}
In the above code we have created a custom annotation called Author, which can be used to mark any piece of code with its authorship, as in the following example.
@Author(
    author = "ExamsMyantra",
    date = "2015-10-03",
    purpose = "This class serves as a demo of the use of a custom annotation"
)
public class Demo {
    /* class code goes here */

    @Author(
        author = "ExamsMyantra",
        date = "2015-10-03",
        purpose = "This method serves as a demo of the use of a custom annotation"
    )
    private void demoMethod() {
        /* demo method code goes here */
    }
}
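A sketch of how such an annotation can be read back, which the article doesn't cover: by default a custom annotation is not retained at runtime, so reflection cannot see it; adding @Retention(RetentionPolicy.RUNTIME) fixes that. The class body and attribute values below are illustrative only:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Without RUNTIME retention, getAnnotation() would return null,
// because the default retention discards the annotation before runtime.
@Retention(RetentionPolicy.RUNTIME)
@interface Author {
    String author();
    String date();
    float revisionNumber() default 1.0f;
    String purpose();
}

@Author(author = "ExamsMyantra", date = "2015-10-03",
        purpose = "Demo class for reading annotations reflectively")
public class Demo {
    public static void main(String[] args) {
        // Read the annotation back via reflection
        Author a = Demo.class.getAnnotation(Author.class);
        System.out.println(a.author() + ", revision " + a.revisionNumber());
        // prints: ExamsMyantra, revision 1.0
    }
}
```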
Simple, right? Well, this is how a custom annotation can be created. Give it a try, and don't forget to point it out if we got something wrong.
For the last week or so I've been having a lot of fun at work adding some Google map functionality to our application. It's an internal application for my clients, so I can't show you any of the code, but I've put together a little demo to demonstrate some of the techniques. The core message here is that it's easy. Taking some geographic information from your model and putting markers on a Google map with a bit of interactive functionality is really only a matter of a few lines of code. You can download a zip file of the solution here:
Here's the application. It's pretty simple; it just displays a map with a collection of locations. For fun I've made it show some of the places I've been lucky enough to live in during my fun-packed life :) Each location has a latitude, longitude, name and an image. You can click on the marker for a location and its name is displayed in a speech bubble (known as an 'info window' by Google) and its image is displayed to the right of the map.
I build the map object graph in memory with a simple 'repository' (this is just a demo, your repository would normally talk to a database).
using Mike.GoogleMaps.Models;

namespace Mike.GoogleMaps.Repositories
{
    public class MapRepository
    {
        public Map GetById(int id)
        {
            return new Map
            {
                Name = "Places Mike has lived",
                Zoom = 1,
                LatLng = new LatLng { Latitude = 0.0, Longitude = 0.0 },
                Locations =
                {
                    new Location
                    {
                        Name = "St. Julians, Sevenoaks, UK",
                        LatLng = new LatLng { Latitude = 51.25136, Longitude = 0.21992 },
                        Image = "st_julians.jpg"
                    },
                    new Location
                    {
                        Name = "Kadasali, Gujerat, India",
                        LatLng = new LatLng { Latitude = 21.235142, Longitude = 71.4462 },
                        Image = "india.jpg"
                    },
                    // ...
                }
            };
        }
    }
}
Next we have a controller action that returns the object graph serialized as JSON:
public ActionResult Map()
{
    var mapRepository = new MapRepository();
    var map = mapRepository.GetById(1);
    return Json(map);
}
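Note for readers on later framework versions: from ASP.NET MVC 2 onwards, JSON responses to GET requests are denied by default (several commenters below ran into exactly this), so the action has to opt in explicitly. A sketch of the adjusted action:

```csharp
public ActionResult Map()
{
    var mapRepository = new MapRepository();
    var map = mapRepository.GetById(1);
    // MVC 2+ blocks JSON for GET requests unless explicitly allowed,
    // to mitigate JSON hijacking; this demo's map data is not sensitive.
    return Json(map, JsonRequestBehavior.AllowGet);
}
```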
On the home controller's index view we have some simple HTML that has div placeholders for the content. One for the map name, another for the map itself and two more for the dynamic location name and image. Please forgive the inline CSS :(
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html xmlns="">
<head>
    <meta http-
    <title>Mike's Google Maps Demo</title>
    <link href="Content/Site.css" rel="stylesheet" type="text/css" />
    <script src="../../Content/jquery-1.2.6.min.js" type="text/javascript"></script>
    <script src=""<key>" type="text/javascript"></script>
    <script src="../../Scripts/LocationsMap.js" type="text/javascript"></script>
</head>
<body>
    <div class="page">
        ...
        <h2 id="mapName"></h2>
        <div id="map" style="width : 700px; height : 400px; margin : 0px; padding : 0px; float : left; margin-right : 20px;"></div>
        <p id="info"></p>
        <img id="image" src="" />
        <div style="clear:both;"></div>
        ...
    </div>
</body>
</html>
Note that this is the HTML as rendered and is a combination of the master view and the home controller's index view. Also note the script references for the Google maps API, jQuery and the LocationMap.js script which controls the page.
jQuery makes writing Javascript a dream. I am a Javascript novice, but I found it blissfully easy to write this code. Here's the javascript which does all the work:
google.load("maps", "2");

// make a json request to get the map data from the Map action
$(function() {
    if (google.maps.BrowserIsCompatible()) {
        $.getJSON("/Home/Map", initialise);
    }
});

function initialise(mapData) {
    $("#mapName").text(mapData.Name);

    // create the map
    var map = new google.maps.Map2($("#map")[0]);
    map.addControl(new google.maps.SmallMapControl());
    map.addControl(new google.maps.MapTypeControl());
    map.setMapType(G_SATELLITE_MAP);
    var latlng = new google.maps.LatLng(mapData.LatLng.Latitude, mapData.LatLng.Longitude);
    var zoom = mapData.Zoom;
    map.setCenter(latlng, zoom);

    // set the marker for each location
    $.each(mapData.Locations, function(i, location) {
        setupLocationMarker(map, location);
    });
}

function setupLocationMarker(map, location) {
    // create a marker
    var latlng = new google.maps.LatLng(location.LatLng.Latitude, location.LatLng.Longitude);
    var marker = new google.maps.Marker(latlng);
    map.addOverlay(marker);

    // add a marker click event
    google.maps.Event.addListener(marker, "click", function(latlng) {
        // show the name and image on the page
        $("#info").text(location.Name);
        $("#image")[0].src = "../../Content/" + location.Image;
        // open the info window with the location name
        map.openInfoWindow(latlng, $("<p></p>").text(location.Name)[0]);
    });
}
When the page loads we make an ajax request 'getJSON' to the HomeController's Map action listed above. When the call completes, it fires the callback function 'initialise'. This creates the map and binds it to the map div. We set the centre of the map to the map object's LatLng and the zoom level to the map's Zoom value.
Next we iterate (using jQuery's $.each()) through the locations and call setupLocationMarker for each one. This creates a new Marker object for each location and adds it to the map. It also adds a click event handler to each marker to set the name and image url, and popup the info window.
Simple and easy. I've been really impressed by the power of jQuery. It's very good news that Microsoft have adopted it. With the Firebug plugin for Firefox doing javascript development is a real pleasure. As for the Google maps API, it is nicely conceived and has excellent documentation.
So what's stopping you? Get mapping!
16 comments:
You rock Mike. Google Maps with MVC is next on my to-do list (we are refactoring one of our apps to MVC)
Your blog is very nice...
visit my blog asp.net example
Hi Mike, very pleased with this post. The solution works fine from the default 'localhost', but when I try and run it as a virtual directory e.g. the map doesn't display. I have tried every IIS trick I know, but no luck... any suggestions? (NB: the MVC routing and everything is working fine). Thanks in advance! Mike
Hi Mike,
It should be easy enough to debug with the right tools. Make sure you've got Firebug installed (on Firefox) and step through the Javascript to see what's failing. Just a thought, but could it be the relative paths in the HTML for the external scripts?
Good luck!
Thanks Mike, thats done the trick :) The webresource.axd references were being calculated relative to the root ("/webresource*") changed this to accept a "rootPath" and now works like a charm. I have another problem now, nothing to do with your code I don't think, but maybe you can help? I'm rendering the maps into an MVC PartialView which I am loading into a specified DIV as part of an Ajax callback... all the surrounding stuff renders in the right places, but the map image itself is offset. It looks like the center point of the map image is positioned in the top left hand corner. I have checked out the DOM, CSS and scripts and the positioning of the image does seem to be left:-vr, top:-ve... but looks like that is coming from google. Any ideas on how to fix? Should the container div be setup in a particular way?? Thanks for your help with the other problem , much appreciated! Cheers Mike
Hi Mike,
The center of the map is controlled by the initial latitude and longitude values of the Map class. If you make both of these zero, the center will be just off the coast of West Africa as you'd expect.
This was true when I did this work, but I haven't run this code recently and it may be that the Google maps API has changed in some way.
Hi Mike
I know this is a very old example, but I wanted to have a play with Google maps and MVC so downloaded the zip, loaded it into VS2008 with both mvc1 and mvc2 installed and get the following errors:
Error 1 'System.Web.Mvc.HtmlHelper' does not contain a definition for 'ActionLink' and the best extension method overload 'System.Web.Mvc.Ajax.AjaxExtensions.ActionLink(System.Web.Mvc.AjaxHelper, string, string, System.Web.Mvc.Ajax.AjaxOptions)' has some invalid arguments
\Mike.GoogleMaps\Views\Shared\Site.Master 27 34 Mike.GoogleMaps
Error 2 Instance argument: cannot convert from 'System.Web.Mvc.HtmlHelper' to 'System.Web.Mvc.AjaxHelper'
\Mike.GoogleMaps\Views\Shared\Site.Master 27 34 Mike.GoogleMaps
My guess is that it has something to do with you using a pre release version of MVC - But I can't see what is going wrong here ( a noob with MVC, so be gentle )
Thanks Tim
I found the answer...
while reseaching a totally unrelated problem.
on
@maayank
said:
In order to fix it I had to replace the following line in the project's Web.Config:
with the following line:
It worked for me too.
hope this helps someone else.
Tim
blogger ate the HTML code...
replace the following line in the project's Web.Config:
add namespace="Microsoft.Web.Mvc"
with the following line:
add namespace="System.Web.Mvc.Html"
Tim
Thanks Tim!
Thank you! ! I am Japanese. Able that!
Hi,
I know that the post is old.. but I have the same problem to take markers from database and put on the map.
I have copied (I fought the same_ to my page but it is not working. It is seems to like it is not going to the Home/Map action as I put break point there. I use MVC2 in VS2010 and I have jquery-1.3.2.js. I can not understand how it know that it should call $(function () {
if (google.maps.BrowserIsCompatible()) {
//$.getJSON("http://" + window.location.host + "/Home/Map", initialise);
$.getJSON("/Home/Map", initialise);
}
});
?????
Thanks, Iwona
P.S. Yours example is working on my PC but when I tried to addopt this code it is not working for me :/
Hi! THANKS THANKS THANKS
This post has been very helpful for me! I have learnt a lot!
Just one thing: at first it didnt work for me, I had to change a line of code:
I changed
return Json(map)
for
return Json(map, JsonRequestBehavior.AllowGet);
Hi Mike... please tell me one thing: I want to place multiple lines in the openInfoWindow, i.e. I want to post a string like this
HotelName
Avg Price/night
Distance
can u please tell me how to do that
Thank you! This is the best example that I could find. I am new to both the MVC and google maps. To get it to work, I had to change
return Json(map);
to
return Json(map,JsonRequestBehavior.AllowGet);
in the HomeController Map() method before the map would show up | http://mikehadlow.blogspot.com/2008/10/using-google-maps-with-mvc-framework.html | CC-MAIN-2014-52 | refinedweb | 1,771 | 67.55 |
Don't know how to approach this problem, here's what I have so far, any help would be greatly appreciated.
PROBLEM : Modify the function days() written for exercise 1 to account for the actual number of days in each month. Assume each year contains 365 days( that is do not account for leap years).
#include <iostream>
#include <string>
using namespace std;

struct Date
{
    int month;
    int day;
    int year;
};

int days(Date*);

int main()
{
    const Date date_init = {0, 0, 0};
    Date* dt1 = new Date();
    *dt1 = date_init;
    cout << " Please input the days";
    cin >> dt1->day;
    cout << " Please input the months";
    cin >> dt1->month;
    cout << " Please input the year";
    cin >> dt1->year;
    days(dt1); // function that takes a structure variable as argument
    cout << days(dt1);
    delete dt1;
    return 0;
}

int days(Date* d1)
{
    int convdays;
    int basis;
    convdays = (d1->day) - 1 + 30 * ((d1->month) - 1) + 365 * ((d1->year) - 1900);
    return convdays;
}
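One common way to approach the exercise (a sketch, not the only solution) is to replace the `30 * (month - 1)` term with a lookup table of how many days precede each month in a non-leap year:

```cpp
struct Date {
    int month; // 1-12
    int day;   // 1-31
    int year;
};

// days elapsed before the first day of each month (non-leap year):
// Jan=0, Feb=31, Mar=31+28=59, and so on
static const int daysBefore[12] = {
    0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334
};

// days since January 1, 1900, ignoring leap years as the exercise asks
int days(const Date* d)
{
    return (d->day - 1)
         + daysBefore[d->month - 1]
         + 365 * (d->year - 1900);
}
```

For example, March 1, 1900 gives 59 (31 days of January plus 28 of February), where the original 30-days-per-month formula would give 60.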
On Wed, Jul 30, 2003 at 02:20:47AM +0100, Richard Lyons wrote:
> Realtek CardBus Ethernet Card Installation on Linux
>
> 2. Copy driver :
> ->Copy the file "realtek_cb.o" to "/lib/modules/2.2.14-5.0/pcmcia"
>
> 3. Edit config:
> ->Add 5 lines to the file "/etc/pcmcia/config"
> [...]
> after that it should be easy.
>
> This doesn't seem to be workable with the directory structure debian
> has installed. I have managed to copy the file to a directory (in
> spite of the thinkpad not being able to read floppies properly - but
> that is a separate problem).
>
> In para. 1. there is no directory like
> /usr/src/linux/pcmcia-cs-3.0.9/include/ - I cannot find any
> *pcmcia*/include/. What should I do here?

I'm no pcmcia guru, but try /usr/src/linux/include/pcmcia. Or
/usr/src/linux/drivers/pcmcia.

> If someone can help me here, I can try to compile this. If it
> succeeds, it will be the first time I ever compile anything
> successfully! The floppy they provide also includes a file called
> 'kern_compat.h', but there are no instructions what to do with it.
> It is mentioned once in rtl8139.c as follows:
>
> #ifdef INLINE_PCISCAN
> #include "k_compat.h"
> #else
> // #include "pci-scan.h"
> #include "kern_compat.h"
> #endif
>
> which looks commented out to me, but I don't pretend to know C. Can
> anyone tell me if I need to copy it too?

No point in copying it, it will be used in the compilation. The # means
that that line is dealt with by the C preprocessor. Effectively, before
compiling that file, it will include the contents of kern_compat.h in
it, to benefit from its declarations. This is basically to avoid having
everything in one file, and to enable files to share declarations.
Ironically the line above it _is_ commented out (that's the //).

Antony
I've always wondered how to design the objects which work together to represent a document, the model of the model-view-controller paradigm. How to design them to be easily scriptable? Undoable? Well, when I stumbled onto ``Undoing Actions in Collaborative Work: Framework and Experience'' by Prakash and Knister [cite PK94] I struck a goldmine. Their paper gives a great overview of undo algorithms and when and why to use them. The ``selective undo'' algorithm presented by them lets the user undo any previous action, not just the last. So in a groupware application, she or he can undo her or his last action, not just the last action applied to the document by someone else.
As an administrator of hundreds of thousands of multimedia files, I wrote many Python scripts to move them around, rename them and tweak them for our build process to turn them into CD-ROMs. Every now and then I'd get cocky, run a script without testing it, and rename all the files to the same filename, or do other stupid file tricks. ``If only I could undo that script!'' I would think. Well, now I can since I've written undoable copy, move, link and delete operations as a simple application of the selective undo algorithm.
At first I sat down to write a general purpose undo module. You, the programmer, could just drop it into your application, add some glue, and get undo for free. I wish! It turned out that how the undo algorithm works is tightly linked to how the building block objects are designed. Or rather the other way around: the building block objects need to be designed to fit into the undo framework. Since undo is one of those things I would normally want in an application, I was glad to learn how to write objects that behave well with the undo algorithm.
I have written a general purpose History module which implements the selective undo algorithm. It isn't and cannot be a black box module. You need to understand how it works in order to use it. That's why I'm presenting it as a literate program as described by Knuth [cite Knuth] to explain how it works. Literate programming lets me write my code and documentation in one file and in any order I want, so I can explain what I'm doing in the order I think best, not the order imposed by the programming language. The same literate program file creates this document as well as the machine readable source code.
If you, the programmer, want to undo an operation that was not the last one, you can't always get away with applying the inverse of that operation to the current state. Say my document is the sentence ``Python rocks!''. I select ``rocks'' and type in ``rules''. Call that op1, an InsertText operation which stores the selection which was modified: in this case chars [7:12], the inserted text ``rules'' and the replaced text ``rocks''. The InsertText object needs the replaced text so it can create the proper inverse operation, which in this case would be InsertText([7:12], "rocks"). Next I type ``really '' in front of ``rules''. This will be op2, another InsertText operation with selection of [7:7], inserted text of ``really '', and replaced text ``''. Now what's required to undo op1?
Just applying the inverse of op1 to the current document of ``Python really rules!'' would result in ``Python rocksy rules!'' because ``rules'' moved without op1 knowing about it. The approach in Prakash and Knister [cite PK94] is to have a Transpose(a,b) function which modifies a to make it as if a was applied after b, instead of before b. In the case of op1 and op2, transposing would involve recognizing that op2 shifted any character positions >= 7 by 7 characters. That would mean the range [7:12] of op1 would change to [14:19]. Now the inverse of op1 would correctly create ``Python really rocks!''.
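The range arithmetic just described can be sketched in a few lines. This is a toy helper of my own invention, not the paper's Transpose, and it handles only the case of one earlier insertion shifting a later operation's range:

```python
def transpose_insert(op_range, earlier_start, inserted_len, removed_len):
    """Shift a text operation's character range to account for an
    earlier insertion that replaced removed_len chars at earlier_start
    with inserted_len chars."""
    start, end = op_range
    delta = inserted_len - removed_len
    # only positions at or after the insertion point move
    if start >= earlier_start:
        start += delta
        end += delta
    return (start, end)

# op1 replaced chars [7:12]; op2 later inserted "really " (7 chars,
# replacing nothing) at position 7. To undo op1 after op2, op1's
# range must shift by 7:
print(transpose_insert((7, 12), 7, 7, 0))  # (14, 19)
```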
In practice the algorithm doesn't actually apply the Transpose() function to the history list directly. Instead it copies the list and transposes the operation to undo to the end. Since the history list includes everything done to a document, the end of the history list represents the current state of the document. Then the algorithm takes the inverse of the transposed operation and applies that to the document.
There are cases where a previous operation can't be undone. What if after op2, instead of undoing op1, I delete the entire sentence as op3? After that it wouldn't make much sense to revert ``rules'' to ``rocks'' since ``rules'' no longer exists! To detect such conflicts, a Conflict(a,b) function is defined. The reason the algorithm needs the Conflict(a,b) function is because not all conflicts will prevent an undo. Consider applying op4 as the inverse of op3. There is still a conflict between op1 and op3, but op4 cancels out op3, so it still should be able to undo op1. A method called removeDoUndoPair handles these situations.
In summary the algorithm to undo any previous operation first copies the history list from the point of the operation to be undone. Then it transposes that operation to the end of the list copy. This brings the operation into the current state of the document. Then it takes the inverse of that transposed operation and applies it to the document to undo the original operation.
The History module defines three objects: History, the actual history list; HistoryNode, a single element in the history list; and AbstractOperation, which you, the programmer, subclass to create your own document operations.
Literate programs define code in terms of ``chunks''. The notation below says that the ``History.py'' chunk is defined as the four following chunks. Chunks can have pieces of code in them as well. Chunks can also be appended to, as with the HistoryExceptions chunk, which I add on to whenever I need to define a new exception.
<History.py>=
<HistoryExceptions>
<HistoryNode>
<AbstractOperation>
<History>
Now the fun part, AbstractOperation. It's the base for operations which work on a document. These operations must define Conflict, Transpose, inverse, and perform methods. AbstractOperation defines Conflict(A,B) and Transpose(A,B) in a very simplistic way. Any serious subclass will certainly override these methods to be more efficient. Conflict checks to see what attributes are affected by each operation and signals a conflict if they overlap. Transpose just tests for a conflict and if there is none returns the identical objects swapped. The inverse() and perform() methods are also defined and must be implemented by subclasses to provide the inverse operation and to actually modify the document.
Initially I had hoped to be able to mix various primitive operations together and have them register what they affect in a table which could be used to deduce the Conflict and Transpose functions. But to implement efficient Conflict and Transpose routines they really have to have detailed knowledge about how all operations are implemented. These functions violate the encapsulation of operations because they need to know how the operations represent themselves and what they do to a document. Conceptually, operations are methods of some Document object, even though I implement them here as independent objects. To implement Conflict and Transpose you need to know everything about all operations. You can't just add in a new operation without thinking through how it will affect all the other operations. This may not be as limiting as you might think. You can easily implement higher-level actions out of the existing operations and be able to undo them since they only affected the document through undoable operations!
<HistoryExceptions>= (<-U) [D->]
UnimplementedBySubclass = "This method must be implemented by a subclass"
<AbstractOperation>= (<-U)
class AbstractOperation:
    def perform(self, context=None):
        return

    def readSet(self):
        return ()

    def writeSet(self):
        return ()

    def inverse(self):
        raise UnimplementedBySubclass

    def copy(self):
        raise UnimplementedBySubclass

    def Transpose(a, b):
        return (b, a)

    def Conflict(a, b):
        if a is None or b is None:
            return None
        bw = b.writeSet()
        br = b.readSet()
        # b conflicts with a if it reads or writes
        # anything a writes
        for w in a.writeSet():
            if w in bw or w in br:
                return 1
        # b conflicts with a if it writes anything a reads
        for w in a.readSet():
            if w in bw:
                return 1
        return None

    def isNullOp(self):
        return 1
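As a toy illustration of this contract, here is a concrete operation over a dictionary-based document. SetField and everything in it are my own sketch for this article's framework, not part of the paper's code, and it deliberately specializes Conflict and Transpose with the knowledge that writes to different keys never interact:

```python
class SetField:
    """Sets one field of a dict-based document; stores the old value
    so it can construct its own inverse."""
    def __init__(self, key, newValue, oldValue):
        self.key = key
        self.newValue = newValue
        self.oldValue = oldValue

    def perform(self, context):
        context[self.key] = self.newValue

    def readSet(self):
        return ()

    def writeSet(self):
        return (self.key,)

    def inverse(self):
        return SetField(self.key, self.oldValue, self.newValue)

    def copy(self):
        return SetField(self.key, self.newValue, self.oldValue)

    def Transpose(a, b):
        # writes to different keys commute, so no adjustment is needed
        return (b, a)

    def Conflict(a, b):
        return 1 if a.key == b.key else None

    def isNullOp(self):
        return self.newValue == self.oldValue

doc = {}
op1 = SetField("title", "Python rocks!", None)
op1.perform(doc)
op2 = SetField("author", "Mike", None)
op2.perform(doc)
# op1 and op2 touch different keys, so undoing op1 out of order is safe
op1.inverse().perform(doc)
print(doc)  # {'title': None, 'author': 'Mike'}
```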
Once you have the details of your primitive operations worked out, you can build composite operations from them and know that they will be undoable. I had originally structured the History class as a tree so that composite operations would have the primitive operations as children nodes. That way it would be easy to recognize and undo an entire composite operation at once by undoing all of its children. Having the History as a tree, however, turned the implementation into a mess. I had defined beginGroup() and endGroup() methods and left it to the programmer to use them properly. Later I realized that it would be impossible to have a coherent tree structure if in fact multiple people edited a document simultaneously and grouping operations were not atomic between the beginGroup() and endGroup(). I decided I didn't need that headache and turned History into a list with callbacks that could let programmers keep their own tree and also keep any other tables of HistoryNodes they want. For instance I could keep a table of file transfers that were for a particular product, or maybe define an experimental mode when the user is playing around and may want to return to a previous state by rejecting the entire experiment.
The History object keeps a list of the operations applied to a document, provides the actual undo routine, a method to call whenever an operation has been applied to a document, and callbacks for notification of do and undo actions.
The History attributes follow. Callback functions must take a History object and a HistoryNode object as their first two arguments.
<History>= (<-U)
class History:
    def __init__(
            self, context,
            historyList = None,
            historyNodeClass = HistoryNode,
            callbacks = None):
        self.context = context
        if historyList is not None:
            self.historyList = historyList
        else:
            self.historyList = []
        self.historyNodeClass = historyNodeClass
        if callbacks is not None:
            self.callbacks = callbacks
        else:
            self.callbacks = []

    def addHistoryOp(self, op):
        historyNode = self.historyNodeClass(op)
        self.historyList.append(historyNode)
        for call in self.callbacks:
            call(self, historyNode)

    def addCallback(self, callback):
        self.callbacks.append(callback)

    def removeCallback(self, callback):
        self.callbacks.remove(callback)

    <undo routines>
HistoryNode holds some special information for the undo algorithm, the operation that was performed, and any other attributes you want to add in via a callback, such as username or timestamp. It can also make a copy of the undo related attributes for use in the undo routines. The undo related attributes follow.
<HistoryNode>= (<-U)
class HistoryNode:
    def __init__(
            self, op,
            undoBlockMember = None,
            undoneBy = None,
            undid = None):
        self.op = op
        self.undoBlockMember = undoBlockMember
        self.undoneBy = undoneBy
        self.undid = undid

    def tempCopy(self):
        return self.__class__(
            self.op.copy(),
            undoBlockMember = self.undoBlockMember,
            undoneBy = self.undoneBy,
            undid = self.undid)
<HistoryExceptions>+= (<-U) [<-D->]
UndoConflict = "Undo conflict, incompatible operations"
<undo routines>= (U->)
<undo>
<removeDoUndoPair>
<tempListForUndo>
The tempListForUndo method constructs a copy of self.historyList starting just after the node to be undone, copying each HistoryNode by calling its tempCopy() method. I keep a flag in HistoryNode for nested blocks of do-undo's. ABCDD'C'B'A' is effectively a null operation, where the primes signify undo operations. As such it can be ignored completely, so I skip over such blocks when making the temporary list. I also return a dictionary with new HistoryNode copies as keys and old HistoryNodes as values so any exceptions can refer to the original nodes to get all the attributes stored in it which may not be in the copy.
<HistoryExceptions>+= (<-U) [<-D]
NoHistNode = "historyNode not found in historyList"
<tempListForUndo>= (<-U)
def tempListForUndo(self, historyNode):
    if historyNode not in self.historyList:
        raise NoHistNode, (historyNode, self)
    histList = self.historyList[
        self.historyList.index(historyNode)+1:]
    copy = []
    newToOldMap = {}
    oldToNewMap = {}
    skipUntil = None
    for node in histList:
        if (not skipUntil and node.undoBlockMember
                and node.undoneBy):
            skipUntil = node.undoneBy
        if skipUntil and node != skipUntil:
            continue
        elif node is skipUntil:
            skipUntil = None
            continue  # we still want to skip this node
        newnode = node.tempCopy()
        copy.append(newnode)
        newToOldMap[newnode] = node
        oldToNewMap[node] = newnode
    for node in copy:
        if node.undoneBy:
            node.undoneBy = oldToNewMap[node.undoneBy]
        if node.undid and oldToNewMap.has_key(node.undid):
            node.undid = oldToNewMap[node.undid]
    return copy, newToOldMap
The removeDoUndoPair method modifies nodes in the list passed in to cancel out a do-undo pair by transposing the do until it is next to the undo. When they are together they nullify, which it signals by setting the op attribute to None for both of them. It does know for sure though that there are no real conflicts between the do and undo, otherwise the do could not have been undone in the first place! So if any conflicts are found it knows that the offending operation has already been undone and calls removeDoUndoPair() recursively to eliminate it.
<removeDoUndoPair>= (<-U)
def removeDoUndoPair(self, tempList):
    doNode = tempList[0]
    for index in range(1, len(tempList)):
        node = tempList[index]
        if node is doNode.undoneBy:
            break
        if node.op is None:
            continue
        if doNode.op.Conflict(node.op):
            self.removeDoUndoPair(tempList[index:])
        else:
            node.op, doNode.op = doNode.op.Transpose(node.op)
    doNode.op = node.op = None
With all the support for the undo algorithm in place, it practically writes itself! It gets a copy of the history list to work with, shifts the operation to undo to the end of the list, then performs its inverse and keeps track of who undid who.
<undo>= (<-U)
def undo(self, historyNode):
    tempList, newToOldMap = self.tempListForUndo(historyNode)
    shiftOp = historyNode.op.copy()
    index = 0
    # move shiftOp to end of history list
    for node in tempList:
        if node.op is None or node.op.isNullOp():
            continue
        if shiftOp.Conflict(node.op):
            if node.undoneBy:
                self.removeDoUndoPair(tempList[index:])
            else:
                raise UndoConflict, (historyNode, newToOldMap[node])
        else:
            unused, shiftOp = shiftOp.Transpose(node.op)
        index = index + 1
    # perform the inverse,
    # context will call us back to add it to History
    self.context.performOp(shiftOp.inverse())
    endNode = self.historyList[-1]
    historyNode.undoneBy = endNode
    endNode.undid = historyNode
    if not tempList:
        # means all ops between historyNode and
        # end cancelled out
        endNode.undoBlockMember = 1
        historyNode.undoBlockMember = 1
You need to implement Conflict and Transpose properly.
So now the guidelines. First, come up with your state or document representation and what operations you want to change that state with. Keep in mind that to be undoable you must provide an inverse for each operation. Usually this means squirrelling away whatever that operation destroys so the inverse can recreate it. You will also need to define Conflict and Transpose functions that work over all operations. With these features designed in at the start you'll have everything the selective undo algorithm needs to work. Prakash and Knister [cite PK94] gives some excellent examples of implementations of this framework.
The prototypical modules I've presented do work. I've tested them but have not used them extensively, so they are not mature by any means. In a real application I would improve them by customizing or subclassing AbstractOperation. The readSet and writeSet approach may not be the correct way to determine conflicts for the document representation. At the very least I would change them to use real Set objects. The context and callback mechanisms for actually performing the operations and notification of additions to the history list may need improvements for performance reasons. And, with the History module working to satisfaction now, it could be moved into C or Java for performance reasons.
Note that once you have all your operations behaving as described above you've nearly got a domain specific language! You or your users can write functions using these operations and get undo for free. With a little extra bookeeping you could record which function invocation caused which operations in the history list and use that information to undo the entire thing at once. Your operations and document object represent a clean interface to your documents and should allow for clear separation between your UI and document object. In addition to making it easy to embed a language like Python in your application for macros, it should also be easy for programs, even remote programs, to manipulate documents from outside the application.
For groupware applications, the objects would be written the same way, but the context object would have to synchronize with a central server. The Conflict function would help identify problems when users simultaneously do things that aren't compatible. Prakash and Knister [cite PK94] go into detail about these issues.
``Extending programming by demonstration with hierarchical event histories'' by Kosbie and Myers [cite KosbieDavi94a] discusses techniques for analyzing users' histories in order to facilitate ``programming by demonstration''. They recommend keeping histories as a tree to help the analysis. This could be done fairly easily using a callback and keeping the tree structure external to the History object. You could then use their techniques to try to ``learn'' what the user is doing. Or you could just make that information browsable to help the user write macros based on what they have been doing.
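Such a callback-maintained tree might look like this in Python. The grouping API is an assumption for illustration; the History object itself stays flat and merely invokes on_add for every appended operation.

```python
class HistoryTree:
    """Keeps a hierarchical view of the history, external to History itself."""
    def __init__(self):
        self.root = {"label": "session", "children": []}
        self.stack = [self.root]

    def begin_group(self, label):
        """Open a new branch; subsequent operations nest under it."""
        node = {"label": label, "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def end_group(self):
        self.stack.pop()

    def on_add(self, op_name):
        """The callback the history would invoke for each appended operation."""
        self.stack[-1]["children"].append({"label": op_name, "children": []})

tree = HistoryTree()
tree.begin_group("rename files")
tree.on_add("move a -> b")
tree.on_add("move c -> d")
tree.end_group()
tree.on_add("delete e")
```

Browsing this tree shows the user which low-level operations belong to which high-level action, which is exactly what a macro-recording UI needs.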
Ever since I started programming graphical user interfaces I've been curious about how to implement undo and macro languages. Those topics have been the holy grail which I've quested after. When I learned about Python a few years ago, I found the embeddable language I was looking for. With Prakash and Knister's paper [cite PK94] I've finally discovered and implemented a serious undo framework. Their approach is like defining your own algebra for manipulating documents because it lets you switch around events in the history so you can undo anything, not just the last action. Now I'm ready to do a really killer app! Any ideas?
History.py and UndoableFileOperations.py will be available by the time this paper is published. Check the Python home page, or look for my quarters on the Python starship.
EG94 Erich Gamma et al. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
KM94 David S. Kosbie and Brad A. Myers. Extending programming by demonstration with hierarchical event histories. Technical Report CMU-HCII-94-102, Carnegie Mellon University, The Human Computer Interaction Institute, May 1994.
Knu92 D.E. Knuth. Literate Programming. Stanford University, 1992.
PK94 Atul Prakash and Michael J. Knister. Undoing actions in collaborative work: Framework and experience. Technical Report CSE-TR-196-94, University of Michigan, March 1994.
readSet and writeSet routines for conflicts. A simple test routine creates some files, deletes them all, then undoes the first delete.
<UndoableFileOps.py>=
import os, shutil, History

class UndoableBinaryFileOp(History.AbstractOperation):
    def __init__(self, frm, to, trashDir = None, trashed = None):
        self.frm = frm
        self.to = to
        self.op = None
        self.trashDir = trashDir
        self.trashed = trashed

    def perform(self):
        if os.path.isfile(self.to) and self.trashDir is not None:
            self.trashed = getUniqueName(self.to, self.trashDir)
            os.rename(self.to, self.trashed)
        else:
            self.trashed = None
        apply(self.op, (self.frm, self.to))

    def readSet(self):
        return (self.frm, )

    def writeSet(self):
        if self.trashed:
            ret = (self.frm, self.to, self.trashed)
        else:
            ret = (self.frm, self.to)
        return ret

    def inverse(self):
        return None

    def isNullOp(self):
        return 0

class UndoableMove(UndoableBinaryFileOp):
    def __init__(self, frm, to, trashDir = None, trashed = None):
        UndoableBinaryFileOp.__init__(self, frm, to, trashDir, trashed)
        self.op = os.rename

    def inverse(self):
        if self.trashed:
            ret = AtomicOpsTuple((UndoableMove(self.to, self.frm),
                                  UndoableMove(self.trashed, self.to, self.trashDir)))
        else:
            ret = UndoableMove(self.to, self.frm)
        return ret

    def readSet(self):
        return (self.frm, self.to)

    def copy(self):
        return UndoableMove(self.frm, self.to, self.trashDir, self.trashed)

class UndoableCopy(UndoableBinaryFileOp):
    def __init__(self, frm, to, trashDir = None, trashed = None):
        UndoableBinaryFileOp.__init__(self, frm, to, trashDir, trashed)
        self.op = shutil.copyfile  # the os module has no copy; shutil provides it

    def inverse(self):
        if self.trashed:
            return UndoableMove(self.trashed, self.to, self.trashDir)
        else:
            return UndoableDelete(self.to, self.trashDir)

    def copy(self):
        return UndoableCopy(self.frm, self.to, self.trashDir, self.trashed)

class UndoableSymlink(UndoableBinaryFileOp):
    def __init__(self, frm, to, trashDir = None, trashed = None):
        UndoableBinaryFileOp.__init__(self, frm, to, trashDir, trashed)
        self.op = os.symlink

    def inverse(self):
        if self.trashed:
            return UndoableMove(self.trashed, self.to, self.trashDir)
        else:
            return UndoableDelete(self.to, self.trashDir)

    def copy(self):
        return UndoableSymlink(self.frm, self.to, self.trashDir, self.trashed)

class UndoableHardlink(UndoableBinaryFileOp):
    def __init__(self, frm, to, trashDir = None, trashed = None):
        UndoableBinaryFileOp.__init__(self, frm, to, trashDir, trashed)
        self.op = os.link

    def inverse(self):
        if self.trashed:
            return UndoableMove(self.trashed, self.to, self.trashDir)
        else:
            return UndoableDelete(self.to, self.trashDir)

    def copy(self):
        return UndoableHardlink(self.frm, self.to, self.trashDir, self.trashed)

class UndoableDelete(History.AbstractOperation):
    def __init__(self, filename, trashDir = None, trashed = None):
        self.filename = filename
        self.trashDir = trashDir
        self.trashed = trashed

    def copy(self):
        return UndoableDelete(self.filename, self.trashDir, self.trashed)

    def perform(self):
        if os.path.isfile(self.filename) and self.trashDir is not None:
            self.trashed = getUniqueName(self.filename, self.trashDir)
        else:
            self.trashed = None
        if self.trashed:
            os.rename(self.filename, self.trashed)
        else:
            os.unlink(self.filename)

    def readSet(self):
        return (self.filename,)

    def writeSet(self):
        if self.trashed:
            ret = (self.filename, self.trashed)
        else:
            ret = (self.filename,)  # note the comma: this must stay a tuple
        return ret

    def inverse(self):
        return UndoableMove(self.trashed, self.filename, self.trashDir)

    def isNullOp(self):
        return 0

class AtomicOpsTuple(History.AbstractOperation):
    def __init__(self, operationsTuple):
        self.operationsTuple = operationsTuple

    def copy(self):
        return AtomicOpsTuple(self.operationsTuple)

    def perform(self):
        for op in self.operationsTuple:
            op.perform()

    def readSet(self):
        ret = ()
        for op in self.operationsTuple:
            ret = ret + op.readSet()
        return ret

    def writeSet(self):
        ret = ()
        for op in self.operationsTuple:
            ret = ret + op.writeSet()
        return ret

    def isNullOp(self):
        return 0

def getUniqueName(fn, trashDir):
    base = os.path.basename(fn)
    count = 1
    result = os.path.join(trashDir, base + '%.3d' % count)
    while os.path.isfile(result):
        count = count + 1
        result = os.path.join(trashDir, base + '%.3d' % count)
    return result

if __name__ == '__main__':
    limit = 12

    # a helper function to see what's going on
    def listem():
        print
        for dir in ["/tmp/tests", "/tmp/tests/trash"]:
            l = os.listdir(dir)
            for f in l:
                fn = dir + '/' + f
                if not os.path.isfile(fn):
                    print '\t' + fn
                else:
                    fi = open(fn)
                    s = fi.read()
                    fi.close()
                    print '\t' + fn, s

    # create Context and History objects
    class Context:
        def performOp(self, op):
            op.perform()
            self.history.addHistoryOp(op)

    c = Context()
    h = History.History(c)
    c.history = h

    # create some directories and files to play with
    try:
        os.mkdir("/tmp/tests", 0777)
    except:
        pass
    try:
        os.mkdir("/tmp/tests/trash", 0777)
    except:
        pass
    for x in range(limit):
        f = open('/tmp/tests/hey' + str(x), 'w')
        f.write(str(x))
        f.close()
    listem()

    # delete everything
    for x in range(limit):
        u = UndoableDelete('/tmp/tests/hey' + str(x), '/tmp/tests/trash')
        c.performOp(u)
    listem()

    # undo only the first delete
    h.undo(h.historyList[0])
    listem()
The Web is switching slowly from the mainframe computing paradigm (static HTML pages generated on the server) to a high-performance computing (HPC) paradigm, where the computing power is distributed along the network. This shift gained its momentum with the explosion of Ajax applications.
Ajax is an exciting technology. However, programming Ajax means hard work, mainly because of the asynchronous programming model and XML parsing between different environments or programming languages. Ajax helped the Web to be faster by reducing the CPU load on the servers for tasks that are mostly cosmetic and have almost no interference in the application business logic.
A logical next step is to use more of the CPU power of web clients, turning the dull web browser into a more powerful fat client. We already share the control logic between the server and the client; how hard would it be to also share the views and even the models?
The answer is: not that hard! To solve the model sharing, all that is needed is a simple mechanism to get the data from the server into the client in a transparent way (and this is what Ajax is all about). The view problem can be addressed within the web browser itself; today's browsers include an XSLT (XML stylesheet) processor. XSLT is used on the server side to generate dynamic content; we can relieve the server of that job if we do it on the client side. To control all of this, we need some programming language, and again, JavaScript has been a standard feature inside our browsers for quite a long time.
It is clear we can do it, but why we should do it is the next question to arise. Everything works fine with the current paradigm, so why make a change? To me, it is clear that this change should be made for the same reasons we started using Ajax in the first place: web applications were slow, suffered from network latency, etc. Ajax brought the data layer to the web browser; now we bring the layout layer to the web browser, too. By doing this, we get a clear separation of code and data, and simplify the code necessary for displaying a view in the browser by using XSLT transformations in a template instead of a big HTML file.
From here we have less code, easier deployment, and easier maintenance of the application. Surely we will have a win-win scenario. The developer wins by having to code less, saving time and money, and the user wins with faster response times and less network data to transfer.
The RAJAX Project
RAJAX is a project that set out to test this idea. At its heart it is a Java library, a tag library, and two JavaScript files. The tag library with the Java library will address the Model problem just described. It works by using runtime reflection to take any class that extends the RAJAX base class and transform it into an Ajaxified JavaScript object. For example, consider the following Java class:
package demo;

import java.sql.*;
import java.util.*;
import javax.naming.*;
import javax.sql.DataSource;
import net.java.rajax.RAJAX;

public class ShoppingItem extends RAJAX {

    private DataSource ds;

    /**
     * Creates a new instance of ShoppingItem
     */
    public ShoppingItem() {
        try {
            Context initContext = new InitialContext();
            ds = (DataSource) initContext.lookup(
                    "java:/comp/env/jdbc/rajaxDB");
        } catch (NamingException ex) {
            ex.printStackTrace();
        }
    }

    @net.java.rajax.RAJAXMethod
    public Map findById(String id) {
        try {
            Connection conn = ds.getConnection();
            PreparedStatement pstmt = conn.prepareStatement(
                    "SELECT name, description, price FROM " +
                    "items WHERE id = ?");
            pstmt.setString(1, id);
            ResultSet rs = pstmt.executeQuery();
            Map item = null; // initialize so the method compiles when no row matches
            if (rs.next()) {
                item = new HashMap();
                item.put("name", rs.getString("name"));
                item.put("description", rs.getString("description"));
                item.put("price", rs.getString("price"));
            }
            pstmt.close();
            rs.close();
            conn.close();
            return item;
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
        return null;
    }
}
RAJAX will generate the following corresponding JavaScript object:
function ShoppingItem() {
    var sU = null;
    var sP = null;
    var cN = 'ShoppingItem';
    var oN = '/rajax/demo.ShoppingItem.rajax';
    this.findById = function(p0) {
        if (typeof(this.findById_Callback) == 'undefined')
            this.findById_Callback = XMLHttp.defaultCallback;
        if (Rajax.isString(p0)) {
            var u = oN + '?m=findById';
            var p = 'p=' + Rajax.toJSON(p0, true);
            XMLHttp.exec(u, p, false, sU, sP, this.findById_Callback);
        } else
            this.testInt_Callback('Bad Request', 400);
    };
}
There is no need to write configuration files; all the developer needs to do is create a Java class that extends RAJAX, and every public method with a @net.java.rajax.RAJAXMethod annotation will be mapped to a method with the same signature in the generated JavaScript object. The original code is a simple POJO; this helps the developer focus on the problem and avoid other concerns. Also, since we are handling POJOs, testing is easier (using JUnit, for example), since there is no need to deploy the code before testing. Testing can start at the same time we are coding our RAJAX objects.
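The dispatcher's reflection pass can be transposed to Python for illustration, with a decorator playing the role of the @RAJAXMethod annotation. Everything below (names, the dispatch shape, the stand-in lookup) is a sketch of the idea, not RAJAX's actual implementation.

```python
import inspect
import json

def rajax_method(fn):
    """Marks a method for remote exposure (plays the role of @RAJAXMethod)."""
    fn._rajax_exposed = True
    return fn

class ShoppingItem:
    @rajax_method
    def find_by_id(self, id):
        # stand-in for the JDBC lookup in the Java version
        return {"name": id, "price": "1.00"}

    def internal_helper(self):   # not annotated, so never exposed
        pass

def exposed_methods(obj):
    """Reflection pass: collect only the methods marked for exposure."""
    return {name: m for name, m in inspect.getmembers(obj, callable)
            if getattr(m, "_rajax_exposed", False)}

def dispatch(obj, method, params_json):
    """What a server-side dispatcher would do for each *.rajax request."""
    methods = exposed_methods(obj)
    if method not in methods:
        return {"status": 400}
    return {"status": 200, "body": methods[method](*json.loads(params_json))}
```

Only annotated methods ever become callable from the client, which is what keeps the exposed surface of a POJO under the developer's control.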
Note that the generated JavaScript object makes use of two JavaScript objects: Rajax and XMLHttp. Starting from the end, XMLHttp is a simple abstraction of the XMLHttpRequest object that will make it easier to expand the library to other browsers. At the moment, this object has been tested with success on Microsoft Internet Explorer 6, Microsoft Internet Explorer 7, Mozilla Firefox 2, and Opera 9. With time and other platforms to test, browser support is expected to expand. The Rajax object contains some helper functions to cast variables and to encode and decode JSON messages to and from the back-end server.
These abstractions may seem like reinventions of the wheel next to other projects like Sarissa, which is about 28K of source code (62K with extras). RAJAX keeps it simple, providing only the basic features within 12K (21K with the MVC framework).
One of the goals of the project was to write an Ajax library that would require no configuration. While this is mostly true, there is one thing that needs to be configured: you need to instruct your JSP/servlet container to handle the RAJAX requests. This is done using a special Servlet in the webapp. This servlet is a dispatcher that maps a request to a Java object, and if (and only if) that object is an instance of the superclass net.java.rajax.RAJAX, it will be instantiated and the requested method executed. This is what the web.xml file should look like:
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
    <servlet>
        <servlet-name>RajaxDispatcher</servlet-name>
        <servlet-class>net.java.rajax.SimpleServletDispatcher</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>RajaxDispatcher</servlet-name>
        <url-pattern>*.rajax</url-pattern>
    </servlet-mapping>
    ...
</web-app>
Implementing a Model-View-Controller (MVC) Pattern
The title says this pattern is "almost" CPU-free. "Almost" because RAJAX will implement it entirely on the client browser, so no CPU is used on the server for that computation. There are six classes available to the developer (Figure 1).
Figure 1. Object model of the JavaScript MVC framework
Of these classes, the developer typically only cares about three:
Nimble: The main MVC class. It is called "nimble" because it tries to be as simple as possible.
NimbleModel: The model representation for the MVC.
NimbleView: The view representation for the MVC, with methods to render in the browser and other important functions such as event handling.
Putting Everything Together
As with any software program, the application needs an entry point. In the case of RAJAX, it is an HTML page. In the above example, a simple display/edit view of a ShoppingItem is to be implemented. The user enters the application and sees the shopping item in the display view (Figure 2).
Once the edit view is clicked, a form will enable its editing (Figure 3).
When switching back to the display view, the data will reflect the changes (Figure 4).
Figure 4. Display view with updated content
The application entry HTML is just a plain JSP page with the inclusion of the MVC tag library:
<%@page ... %>
<html>
    <head>
        <title>JSP Page</title>
        <rajax:object ... />
        <rajax:mvc ... />
    </head>
    <body onLoad="Rajax.enableThrobber(Rajax.THROBBER_IMG); startUp()">
        <h1>RAJAX Nimble MVC</h1>
        <div id="content"></div>
    </body>
</html>
The display view is a simple XML stylesheet file:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="...">
        <div>
            <h3><xsl:value-of select="name"/></h3>
            <p><xsl:value-of select="description"/></p>
            <p><em>Price: <xsl:value-of select="price"/></em></p>
            <p><a href="#" id="editLink">edit</a></p>
        </div>
    </xsl:template>
    <xsl:template match="..."/>
</xsl:stylesheet>
The edit view is another simple XML stylesheet file:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="...">
        <form method="post" action="#" class="editForm">
            <h3><input id="Item/name" type="text" value="{name}" /></h3>
            <p><textarea id="Item/description"><xsl:value-of select="description"/></textarea></p>
            <p><input id="Item/price" type="text" value="{price}" /></p>
            <p>
                <a href="#" id="displayLink">display</a>
                <input type="submit" value="ok" />
            </p>
        </form>
    </xsl:template>
    <xsl:template match="..."/>
</xsl:stylesheet>
Earlier in this article, we defined a model with the RAJAX objects and a view using the XSLT stylesheets. To complete the MVC framework, only the business logic is missing. The controller is a JavaScript file, as stated before; if one pays particular attention to the base application HTML, there was an extra JavaScript script, /js/controller.js. This script is where all the business logic lives: it handles all the event handling from the view, executes actions against the model, and finally updates the view.

Before the controller can handle the events, one needs to attach them to the view. To do this, RAJAX uses the Observer pattern, since it is also related to the way browsers handle events on the Document Object Model. The code for our example is:
var data, view, edit;

function startUp() {
    // load the XSLT files
    view = Nimble.loadView('views/display.xsl');
    edit = Nimble.loadView('views/edit.xsl');
    // RAJAX will get the data from the DB
    data = (new ShoppingItem()).findById('tomato soup');
    // attach the event handlers
    view.attachEvent('editLink', 'click', function () {
        edit.render('content', data);
    });
    edit.attachEvent('displayLink', 'click', function () {
        getData();
    });
    // render the initial view
    view.render('content', data);
}

function getData() {
    view.updateModel(data);
    view.render('content', data);
}
The attachEvent function takes three arguments. The first is the DOM element to which the MVC framework will attach the event. The second is the event type, one of blur, focus, contextmenu, load, resize, scroll, unload, click, dblclick, mousedown, mouseup, mouseenter, mouseleave, mousemove, mouseover, mouseout, change, reset, submit, keydown, keypress, keyup, or error. Of course, not all events can be attached to all DOM elements: for example, the load event can only be attached to Document elements, and submit to Form elements. The last argument is the notify function; since JavaScript has closure support, the API is simplified by removing the need to write a specific Observer object.
Final Considerations
In this example, no Ajax-specific code was (manually) created and no CPU is used on the server to render the user interface. The load is distributed between the server and the client, and the client will have all the load for user interface processing.
Resources
- Sample code for this article.
- RAJAX project site with further information about this topic.
- Freja: A PHP project with an XSLT MVC pattern without dynamic models.
- Sarissa: An ECMAScript library acting as a cross-browser wrapper for native XML APIs.
- The Observer pattern. | https://community.oracle.com/docs/DOC-983162 | CC-MAIN-2017-13 | refinedweb | 1,888 | 55.13 |
View1.ViewMode = StickyView.View.mode.View

You can write code on your web site to change the mode to edit mode:

View1.ViewMode = StickyView.View.mode.Edit

All the content is stored in text files named after the control, so you must be sure to allow write access to the account running IIS. You also need to include this in your web site's web.config:

<appSettings>
    <add key="StickyViewContentFolder" value="c:\StickyContent" />
</appSettings>

I have not tested this on a web farm, but basically you can add a UNC path in the web.config:

<add key="StickyViewContentFolder" value="\\SHARED_DRIVE\SHARED_FOLDER" />

If you are going to allow HTML to be entered into the control, you need to add this to the page directive of every page holding the control:

validateRequest="false"
Advanced MPL using Elastic Stack
Introduction
There were already a couple of posts about advanced logging. In one respect or another, they all point out that SAP Cloud Platform Integration has pitfalls when it comes to logging. Some can really ruin your business day:
- the 1 GB per day MPL attachment limit – when enforced by the so-called "circuit breaker", it makes log analysis impossible
- automatic MPL deletion after 30 days
- Monitoring dashboard message filtering and sorting could be better
Since there are reliable cloud logging solutions, there is at last no reason to endure that situation.
One of them is Elastic Stack also known as ELK.
The scope of this article is to provide an overview of what can be done with it; I will not go into every technical detail.
Install an Elastic Stack
Elastic Stack has a Basic licence which makes the product available at no cost. It can also be used as managed Elastic Cloud service.
I decided to try out a self managed scenario in an Azure subscription by deploying a prepared Ubuntu virtual machine image with the complete elastic stack already installed. We could also use containers in a Kubernetes service in future – that depends on the experiences we make with the setup and cost considerations.
The virtual machine only opens HTTP/HTTPS ports 80/443. A DNS name is assigned to its public IP.
Currently, it uses a 200GB disk.
There are two endpoints that have to be opened to the internet:
- Logstash – the API to send log messages from CPI flows
- Kibana – the front end to visualise log data
Their transport must be encrypted and clients have to authenticate.
I installed an Nginx as reverse proxy that utilises Let’s Encrypt certificates with automatic renewal via cron job. Authentication is done using basic username and password simply provided with htpasswd.
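For reference, the Basic credentials the clients must present are just a base64-encoded user:password pair. A quick Python sketch of the Authorization value that Nginx's basic auth will verify (the account name below is made up):

```python
import base64

def basic_auth_header(username, password):
    """Builds the Authorization header value for HTTP Basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8"))
    return "Basic " + token.decode("ascii")

print(basic_auth_header("logstash", "s3cret"))
```

The Groovy script shown later builds exactly this kind of header before posting to the Logstash endpoint.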
Create a Search Index Template
The Kibana UI has a Stack Management / Index Management perspective that allows to create index templates. With a template you can define settings that are inherited by the indexes which are automatically created on a daily basis. It can also have a lifecycle policy that removes indexes after a defined period or moves them to less performant and therefore cheaper hardware.
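Instead of clicking through Kibana, the same setup can be scripted against Elasticsearch's REST API: one PUT for an ILM policy and one for a composable index template bound to it. The index pattern and field names below are illustrative assumptions, not the exact ones from my setup.

```python
import json

# Keep indexes on the hot tier for a day per rollover, delete after 90 days.
lifecycle_policy = {
    "policy": {"phases": {
        "hot":    {"actions": {"rollover": {"max_age": "1d"}}},
        "delete": {"min_age": "90d", "actions": {"delete": {}}},
    }}
}

# Template inherited by every daily index matching the pattern.
index_template = {
    "index_patterns": ["scpi-*"],
    "template": {
        "settings": {"index.lifecycle.name": "scpi-logs"},
        "mappings": {"properties": {
            "logs.mplId": {"type": "keyword"},
            "logs.level": {"type": "keyword"},
            "@timestamp": {"type": "date"},
        }},
    },
}

# The two PUTs a setup script would issue against the cluster:
setup_calls = [
    ("PUT", "/_ilm/policy/scpi-logs", json.dumps(lifecycle_policy)),
    ("PUT", "/_index_template/scpi", json.dumps(index_template)),
]
```

Scripting these calls makes the cluster setup reproducible, which matters once a second environment or a rebuild is needed.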
To use the index for searching there must be an index pattern which can be created on the same management UI. It is useful to create the pattern after at least one document is in the index. Else the pattern must be refreshed to know all the fields sent by the CPI.
Send log messages to the Elastic Stack
As with any other MPL attachment where you use scripts like this Groovy
Message logMessage(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog) {
        def body = message.getBody(String)
        def attachment = createAttachment(message, body)
        def name = ["Log", message.getProperty("customLogName")]
        messageLog.addAttachmentAsString(
                name.findAll().collect { it.trim() }.join(" - "),
                attachment as String, "text/xml")
    }
}
you basically do the same but use some additional Camel knowledge.
First, there are two tasks to prepare the platform for sending to the Elastic Stack:
- Add the Let’s Encrypt Root Certificate DST Root CA X3 to the platform keystore.
- Add the username and password that was used to protect the Logstash endpoint as user credential
Then, in the script there are the following steps:
- Prepare the request to send to the Logstash API.
def metadata = ["beat": "scpi", "version": "1.0.0",
                "@timestamp": new Date().format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")]
def name = ["Log", text, message.getProperty("customLogName")]
def logs = ["name": name.findAll().collect { it.trim() }.join(" - "),
            "level": level,
            "body": message.getBody(String),
            "headers": mapToString(message.headers),
            "properties": mapToString(message.properties),
            "mplId": message.getProperty("SAP_MessageProcessingLogID"),
            "messageCorrelationId": getCorrelationIdFromMpl(message.exchange)
]
def logstashBody = [
    "@metadata": metadata,
    "component": message.exchange.context.name,
    "environment": getEnvironment(),
    "logs": logs
]
- Send the request. (credentials are fetched using the SecureStoreService API)
def logstashUrl = message.getProperty("logstashUrl")
def credential = getCredential("Logstash")
def template = message.exchange.context.createProducerTemplate()

MplConfiguration mplConfig = new MplConfiguration()
mplConfig.setLogLevel(MplLogLevel.NONE)

def exchange = ExchangeBuilder.anExchange(message.exchange.context)
        .withHeader("CamelHttpMethod", "POST")
        .withHeader("Content-Type", "application/json")
        .withHeader("Authorization", "Basic " + Base64.encoder.encodeToString(
                "${credential.username}:${credential.password as String}".getBytes(StandardCharsets.UTF_8)))
        .withBody(new JsonBuilder(logstashBody).toString())
        .withProperty("SAP_MessageProcessingLogConfiguration", mplConfig)
        .build()

template.asyncCallback("ahc:${logstashUrl}", exchange, new Synchronization() {
    void onComplete(Exchange ex) {
        template.stop()
    }

    void onFailure(Exchange ex) {
        if (ex.exception)
            log.logErrors("Error sending to Logstash: " + ex.exception)
        else
            log.logErrors("Error response from Logstash: ${ex.out.headers['CamelHttpResponseCode']} - ${ex.out.headers['CamelHttpResponseText']}")
        template.stop()
    }
})
- That is it!
Inspect log messages in Kibana
This does not only look pretty, it comes with much more filtering features than the CPI monitoring.
It would also be possible to have a linkage between Kibana and the CPI monitoring just by submitting an URL for mplId or correlationId.
Conclusion
With that relatively simple changes we can provide
- a more robust monitoring for the operations team
- message history whose size only depends on what the customer is willing to pay for the disc
- search in log attachments at a level of granularity that the CPI sorely misses
- continuous logging, no necessity to decrease the log level by setting a logEnabled property to false in test or production environments in fear of the circuit breaker
Hi Uwe Voigt,
thank you for sharing this great idea!
We are also currently working on an integrated ELK-stack logging and alerting solution.
Therefore, I have three questions:
Thank you for answering my questions in advance!
Malte
Hi Malte,
Currently, the setup of that Elastic Stack has been done within one day. I am sure there are a couple of things that could be done better. If we are forced to invest further, I will write an update here.
E.g. I would like to have the node’s log stream indexed. But I think that most probably would require an OSGi bundle or fragment to be deployed. Not sure if that can be done using the custom adapter deployment infrastructure.
Regarding your questions:
The script reports any connectivity or Logstash errors to the node log only. The integration flow is not affected. As a fallback, the log message, including body, headers and exchange properties, could be attached to the MPL. Note that the initial reason to use the Elastic Stack was the MPL circuit breaker.
Since we use JMS messaging between receiver and sender integration flows, we should not lose messages even if Logstash/Elastic may be down for a while.
To my knowledge, Logstash comes with a bunch of plugins that allow to modify your message while being processed by the pipeline. I have not yet experimented a lot with Ingest pipelines, but I guess they do not offer too much functionality Logstash does not.
At the moment, we are creating an index document for each message sent by an integration flow. An example of a useful modification of a message might be to use the correlation id of the message as the document id and to add fields to the document on each new message.
But since the filtering capabilities of Kibana are that strong, the current setup is already very effective.
On the other hand, I do not think that the additional component Logstash imposes a great risk of failure. There is not much memory pressure on it because it streams batches of data.
The experiences of the last weeks show us that this is not necessary. Quite the contrary – the integration flows perform faster than with MPL attachments.
Best Regards!
Uwe
Hi Uwe,
Great post! I've seen that you are using lots of Camel classes to do the scripting.
How do you make use of these? Could you share some of the imports or a more detailed script?
Thanks in advance.
Hi Mikel,
When developing a Groovy script, you better use an IDE (I do not like the way SAP decided to create their developing process for the CPI, frankly said - they should not have switched from the approach of developing locally to the online design editor. It would have been even smarter to provide a downloadable CPI feature set to run within a local OSGi container. This for instance would enable us to debug scripts.)
If you attach the Apache Camel core to your IDE project or whatever, you will see all Camel classes and you could even read the Javadoc!
Here are the imports for your reference:
Best Regards,
Uwe | https://blogs.sap.com/2020/08/18/advanced-mpl-using-elastic-stack/ | CC-MAIN-2022-33 | refinedweb | 1,400 | 55.95 |
Hi,
> Just a short question ... what problems are you actually having with Jira? Is it the
availability of an instance? The migration of Issues allready reported to Adobe?
It's the migration. Infrastructure have recently upgraded to a newer version of JIRA and Alex has supplied them with a new dump that should be compatible with it. As all Apache projects use the same instance of JIRA, there are only limited times (only weekends?) when this very large import can be done.
> For me the change of Flex mooving to Apache offers more chances than risks.
Agree there. Having a community of very smart developers able to commit to it is a positive change.
Thanks,
Justin | http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201206.mbox/%3C9BACFA14-177F-4C1B-845B-368007C451D3@classsoftware.com%3E | CC-MAIN-2016-07 | refinedweb | 116 | 68.36 |
The prototypes for the low-level CBLAS functions are declared in the file `gsl_cblas.h'. For the definition of the functions consult the documentation available from Netlib (see section References and Further Reading).
The following program computes the product of two matrices using the Level-3 BLAS function SGEMM,
[ 0.11 0.12 0.13 ] [ 1011 1012 ] [ 367.76 368.12 ] [ 0.21 0.22 0.23 ] [ 1021 1022 ] = [ 674.06 674.72 ] [ 1031 1032 ]
The matrices are stored in row-major order but could be stored in column-major order if the first argument of the call to cblas_sgemm were changed to CblasColMajor.
#include <stdio.h>
#include <gsl/gsl_cblas.h>

int
main (void)
{
  int lda = 3;

  float A[] = { 0.11, 0.12, 0.13,
                0.21, 0.22, 0.23 };

  int ldb = 2;

  float B[] = { 1011, 1012,
                1021, 1022,
                1031, 1032 };

  int ldc = 2;

  float C[] = { 0.00, 0.00,
                0.00, 0.00 };

  /* Compute C = A B */

  cblas_sgemm (CblasRowMajor,
               CblasNoTrans, CblasNoTrans, 2, 2, 3,
               1.0, A, lda, B, ldb, 0.0, C, ldc);

  printf ("[ %g, %g\n", C[0], C[1]);
  printf ("  %g, %g ]\n", C[2], C[3]);

  return 0;
}
To compile the program use the following command line,
$ gcc -Wall demo.c -lgslcblas
There is no need to link with the main library -lgsl in this case as the CBLAS library is an independent unit. Here is the output from the program,
$ ./a.out [ 367.76, 368.12 674.06, 674.72 ]
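As a quick cross-check, the same product can be computed directly in Python (nothing GSL-specific here, just the textbook row-by-column sums):

```python
# The same A and B as in the C program, in row-major layout.
A = [[0.11, 0.12, 0.13],
     [0.21, 0.22, 0.23]]
B = [[1011, 1012],
     [1021, 1022],
     [1031, 1032]]

# C[i][j] = sum over k of A[i][k] * B[k][j]
C = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
```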
text = text.toUpperCase();
import java.util.*;
import java.io.File;
class assignment1 {
    public static void main (String[] args) throws Exception {
        File alicefile = new File("alice.txt");
        Scanner scan = new...
Tried a while loop. But I only know how to scan the next line until there is no more. I'm not sure how to read a certain amount of lines. This is what I need a bit of help with. :)
Hey guys, my name's Scott and this is my first post. I just recently started using Java and am still very much a beginner.
I want to read a certain amount of lines of text from a file. I'm not sure which...
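The question is cut off, but reading a fixed number of lines is usually just a counted loop around the same Scanner calls. A small sketch of the idea (the file name and the count of 2 in the usage below are arbitrary examples, not from the original post):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class ReadSomeLines {
    // Read at most maxLines lines from the given file.
    public static List<String> readLines(File file, int maxLines) throws Exception {
        List<String> lines = new ArrayList<>();
        try (Scanner scan = new Scanner(file)) {
            int read = 0;
            // Stop either when the file runs out or when we have enough lines.
            while (scan.hasNextLine() && read < maxLines) {
                lines.add(scan.nextLine());
                read++;
            }
        }
        return lines;
    }

    public static void main(String[] args) throws Exception {
        for (String line : readLines(new File("alice.txt"), 5)) {
            System.out.println(line);
        }
    }
}
```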
Where can I find Flash/AIR AdMob integration for Android?
...where is Google Ads - Admob plugin or similar code?
This is a must in next release Flash CS5.5/AIR2.5!
this is very important ...
I'm interested in this feature too.
I'd like to have the opportunity to monetize my app using ads.
I hope that Adobe will include this functionality in the next release.
I'm interested too!
Another vote here.
Can we have a response of someone from Adobe about this request?
Thanks
I agree. Timeline?
I may be wrong, but I don't think this has anything to do with Adobe. What we need is for AdMob to provide an API in AS3.
Does anyone know of any other ad providers that have an AS3 SDK?
I was waiting for these people to chime in from the pre-release, but so far I haven't seen either of them.
In the AIR/Android pre-release, someone ("Jean") had posted some code to work with Smaato, another mobile ad provider. The last status posted was that it pulled and displayed the ads with AS3; however, the Smaato account was not registering impressions (the developer got no credit for the downloads). This person was waiting on a callback from Smaato support, then the pre-release forum closed and that thread is gone.
Someone else("nick carr") who was testing the same code, posted an example on the market called "adtest" by carr solutions, perhaps you can check that in the market and email him to see if he ever got the impressions to work properly, or if he has Jeans contact info since Jean wrote it.
Also, some other people reportedly were able to use AdMob by using a proxy.
Your AIR for Android app calls a PHP file on your own server, and PHP calls the ad from AdMob.
I haven't tried this at all, but it reportedly worked perfectly and is not difficult if you know PHP at all.
I know this doesn't help much yet, but this is what was going on in the pre-release; hopefully we can bring this forum up to date with this info.
So far it doesn't look like any of these people are showing up on these forums yet...
Looks like I had the example still.
I don't want to take credit for this (I didn't make it). It was an unfinished work in progress provided to the community by "Jean S". You may want to clean it up a bit.
However, it does pull an ad from Smaato!
For test purpose, publisher ID and AdSpace ID is set to 0. You will eventually change this to your own ID after signing up with smaato.
Here is the source code:
I hate links like this; if someone can let me know a cleaner public place to upload it, I will do so.
(Or am I just crazy and not seeing the attachment link on this forum?)
Thanks boat5, very useful info.
Accomplishing Admob ad-serving on Adobe AIR is my current official project at work!
I've seen the AdTest app as well. I have spoken on email with Nick Carr and Jean S. from the pre-release forums. I think they both work at carrsolutions.com, judging from the AdTest app. I haven't gotten hold of Jean recently and can't find her email. My understanding from Nick is that the page impressions are working now through Smaato, but Smaato requires >1Mio impressions and might shut down accounts that aren't meeting the quota.
I'll keep you up-to-date on the progress of my company's Admob ActionScript 3 code. We're also monitoring AdMob's blog. Really, it should be the ad network who provides the code free of charge, but if we can figure out how to code it properly, we can share/sell it with a select few! We'll work out the kinks on here!
-Adele
AdMobAndroid.com (under construction)
Any examples of either of these? I'd be willing to test!
Another vote for this to be built in!!!
Update
I've been searching the web for the last 2 weeks and haven't found any way to do this. Does anyone have any idea how to get any ad into any app?!!!
What are you trying to do? If you're using Flex Mobile you can use constraints to set the size and position of the component regardless of screen resolution or orientation.
I'll send you the Flash Builder Mobile project I'm using to get ads to work through PHP. If Flex Mobile is not an option, then I can get the code for you to check when the orientation changes.
Laxidasical wrote: "Does anyone have any idea how to get any ad into any app?!!!"
Can you loop me in here too? I can share the optimized Smaato code with you two.
So far, that seems to be the only AS3 ad code working without needing your own web server to host ads and request via PHP. But, I would prefer to use AdMob since I use AdSense, AdWords, and others!
My experience is in PHP and MySQL, but I have 3+ years experience in ActionScript (not much AS3 exposure until recently). There's a 100% chance the three of us should be able to get this working.
Hi,
Yes. I sent you a private message with my email. Can you also send me what you have working? I will integrate it into my project so we have as many ad options as we can get working then post it online where we can get more help and exposure.
Update? Has anyone made progress on this?
My solution (requires running .net website)
1. create admob mobile web project which points to your website
2. copy the code you get (c#) in my case and paste into http-handler/webservice
3. filter the html returned by admob so flash can read it as urlvariables
4. get the variables into flash (url, banner-image-url, text) by loading the http-handler
5. make banner from variables
6. profit ?
I guess this also works via php/java ......
Of course this is only a temporary solution, since you will only get mobile-web related ads and no app ads etc.
I also would like to include Admob in my Flash/AIR apps. I have published 2 so far and the only thing holding me back now is the integration with Admob. If a solution from within the Flash/AIR SDK was possible, that would be incredible. I hope something becomes available soon.
Has anyone tried using a StageWebView object and the JavaScript provided by Admob here: ?
Or constructing a URL request using the information here ?
I haven't had a chance to try it yet -- just wondering if anyone had already gone down that path.
I was going down this path; but It doesn't run:
Error #1014: Class flash.media::StageWebView could not be found.
Possibly you need to compile to a later SWF version or update the namespace of the application descriptor to 2.5. For the SWF version in Flash Builder, add -target-player 10.1 to the compile options. However, if you are targeting the iPhone, note that the StageWebView class was added in AIR 2.5 and isn't available on iOS yet.
Anyone know how to hide the scrollbars? Here's what I wrote, and it works for loading a page from my site. I still haven't implemented it for using AdMob. I'm new to AdMob so first was getting it to display. Can I set up a hidden page on my URL and load it in?
function StageWebViewExample():void {
    var webView:StageWebView = new StageWebView();
    webView.stage = stage;
    webView.viewPort = new Rectangle(0, 0, stage.stageWidth, 50);
    webView.loadURL( "" );
}
StageWebViewExample();
//------------------------
Actually, once I run it as an app, the viewport takes over the entire screen regardless...
I have it working with a straight image but not a page yet.
Sent via Android.
Does anyone have a working example of this Smaato code?
I can’t get the impressions to register or solve this problem.
Thanks
I've managed to pull and successfully register the impressions and clicks using the StageWebView in an AIR for Android app, but I can't get rid of the scroll bars and zoom buttons.
I get no ads using this JavaScript in my HTML page... however, the scrollbars are gone. Anyone?
<script type="text/javascript">
var admob_vars = {
pubid: 'a14d430b52xxxxx', // publisher id
bgcolor: '000000', // background color (hex)
text: 'FFFFFF', // font-color (hex)
ama: true, // set to true and retain comma for the AdMob Adaptive Ad Unit, a special ad type designed for PC sites accessed from the iPhone. More info:
test: false // test mode, set to false to receive live ads
};
</script>
<script type="text/javascript" src=""></script>
Any idea why no ads show up even in the browser on my Android? Requests show to be coming in. I even made a house ad to see if it would serve... nuthin.
I'm using smaato and the provided php code, and its working.
haven't tried with admob yet.
This code isn't working for me either. I can't even to display an ad when I'm using the browser on my laptop...
I don't really know why, but this only works if the app is running on the mobile device; on the laptop I can't see anything either.
I couldn't get it to work using the first example provided by adMob, either. What did work was to use the "manual" mode and call the fetchAd() method in the adMob Javascript code directly.
This is the HTML file that I loaded into my StageWebView object:
NOTE: change the publisher ID to your own -- or I will get credit for your ads!
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
<script type="text/javascript">
var admob_vars = {
pubid: 'a14d4115a7779ac', // publisher id
bgcolor: 'ffffff', // background color (hex)
text: '000000', // font-color (hex)
test: false, // test mode, set to false if non-test mode
manual_mode: true
};
function showAd()
{
//alert("show ad");
_admob.fetchAd(document.getElementById('adspace'));
}
</script>
<script type="text/javascript" src=""></script>
<style type="text/css">
body {
margin-left: 0px;
margin-top: 0px;
margin-right: 0px;
margin-bottom: 0px;
}
</style>
</head>
<body onload="showAd()">
<div id="adspace">
</div>
</body>
</html>
And this is the ActionScript code (I did this using Flash CS on the main timeline; if you are doing this in Flash Builder or in a class, you will have to adapt it accordingly):
//Admob
var webView:StageWebView = new StageWebView();
webView.stage = this.stage;
var adViewPort = new Rectangle( 0, 0, this.stage.stageWidth, 60 );
webView.viewPort = adViewPort;
webView.addEventListener(ErrorEvent.ERROR, onWebViewError );
webView.addEventListener(LocationChangeEvent.LOCATION_CHANGING, onWebViewLocChanging );
webView.addEventListener(LocationChangeEvent.LOCATION_CHANGE, onWebViewLocChange );
var templateFile:File = File.applicationDirectory.resolvePath( "adview.html" );
var workingFile:File = File.createTempFile();
templateFile.copyTo( workingFile, true );
var fstream:FileStream = new FileStream();
fstream.open( templateFile, FileMode.READ );
var htmlString:String = fstream.readUTFBytes( fstream.bytesAvailable );
try
{
    //webView.loadString( htmlString );
    webView.loadURL( workingFile.url );
}
catch (e:Error)
{
    trace( e );
}

function onWebViewLocChange( event:LocationChangeEvent ):void
{
    trace( "--------------- Change to" + event.location );
    if( event.location != workingFile.url )
    {
        trace("----------- reset ad URL ----------");
        navigateToURL( new URLRequest( event.location ) );
        try
        {
            //webView.loadString( htmlString );
            webView.loadURL( workingFile.url );
        }
        catch (e:Error)
        {
            trace( e );
        }
    }
}

function onWebViewLocChanging( event:LocationChangeEvent ):void
{
    trace( "--------------- Changing " + event.location );
    event.preventDefault();
    navigateToURL( new URLRequest( event.location ) );
}

function onWebViewError( error:ErrorEvent ):void
{
    trace( error );
}
One issue I noticed is that the method shown above in AS won't work when you are running the app through ADL (or Test Movie in Flash CS, etc). You can use the loadString() method, which is commented out. The reason for this is that the StageWebView control is implemented in different ways on different OS's. AIR only controls the desktop control; the other OS's use the OS-supplied code.
Has anyone tried using the Flash Lite code that AdMob provides? The classes are in ActionScript. I'm still learning ActionScript, so I'm not sure if I can get this working...
I have it working with AdMob now. I think there was a time period that I had to wait for the server to update? I don't know, but it works now. I manually added the dimensions of the webview object to make the scrollbars go away. I can't believe it!
It works on the desktop for me in a FlashDevelop Android/AIR template. No luck in the "regular" AIR FlashDevelop or in Flex. So I don't know at the moment what exactly the "magic setting" is. But I'm happy with the progress.
Well here is my code stored on my URL
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
</head>
<body bgcolor="#000000">
<script type="text/javascript">
var admob_vars = {
pubid: 'my_id', // publisher id
bgcolor: '000000', // background color (hex)
text: 'FFFFFF', // font-color (hex)
ama: false, // set to true and retain comma for the AdMob Adaptive Ad Unit, a special ad type designed for PC sites accessed from the iPhone. More info:
test: false // test mode, set to false to receive live ads
};
</script>
<script type="text/javascript" src=""></script>
</body>
</html>
Here is my AS3:
function StageWebViewExample():void {
    var webView:StageWebView = new StageWebView();
    webView.stage = stage;
    webView.viewPort = new Rectangle(0, 0, 800, 85);
    webView.loadURL( "myURL/page.html" );
}
StageWebViewExample();
----------------------------------------
Now I need to figure out how to make the viewPort redraw when the ad is clicked...
Any suggestions? | http://forums.adobe.com/message/3396698 | CC-MAIN-2013-48 | refinedweb | 2,231 | 75 |
What is the resolution of the CPU time value in the ANTS or dotTrace profilers?
As I understand from my previous research, the timer resolution when we want to measure CPU time for a function is ~15.6 ms, so we can only get values like 0, 15.6, 32.2 ms:
TimeSpan a = Process.GetCurrentProcess().UserProcessorTime;
functionTest();
TimeSpan b = Process.GetCurrentProcess().UserProcessorTime;
// (b - a) comes out as a value like 0, 15.6, or 32.2 ms
But in a performance profiler like dotTrace or ANTS, the time column (with the time parameter set to "CPU Time") shows values like 4.129 or 1.032 ms. This is a much higher resolution.
How can I get this precision in my own code?
functionTest is ==>
private long FindPrimeNumber(int n)
{
    int count = 0;
    long a = 2;
    while (count < n)
    {
        long b = 2;
        int prime = 1; // to check if found a prime
        while (b * b <= a)
        {
            if (a % b == 0)
            {
                prime = 0;
                break;
            }
            b++;
        }
        if (prime > 0)
            count++;
        a++;
    }
    return (--a);
}
You can most likely call native WinAPI methods from C# via DllImport. But for simplicity you can try to use the package from here.
But you must clearly understand what you are doing. There will be a difference between the first call to your function and the second due to JIT time. And if your method allocates memory, GC can occur any time your method is called, and it will be reflected in the measurement.
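For reference, the 15.6 ms quantum is the default Windows scheduler tick on which the kernel updates process/thread times, which is why UserProcessorTime moves in those steps; profilers get their finer numbers from higher-resolution sources (for example QueryPerformanceCounter or per-thread cycle counters), not from that API. You can probe the granularity of the CPU-time counter your own runtime exposes with a sampling loop. Here is a sketch of the idea in Java, where ThreadMXBean exposes per-thread CPU time (the same sampling approach applies to UserProcessorTime in .NET):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeProbe {
    // Smallest observed increment (in ns) between two distinct readings
    // of the current thread's CPU time; -1 if the JVM can't measure it.
    public static long observedStepNanos() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isCurrentThreadCpuTimeSupported()) {
            return -1;
        }
        long first = mx.getCurrentThreadCpuTime();
        long next = first;
        while (next == first) {            // burn CPU until the counter ticks
            next = mx.getCurrentThreadCpuTime();
        }
        return next - first;
    }

    public static void main(String[] args) {
        System.out.println("CPU-time step: " + observedStepNanos() + " ns");
    }
}
```

On a Windows machine sampling GetProcessTimes-backed counters you would see steps around the scheduler tick; on Linux the per-thread clock is much finer.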
iCamera Struct Reference
Camera class.
[Views & Cameras]
#include <iengine/camera.h>
Inheritance diagram for iCamera:
Detailed Description
Camera class.
This class represents camera objects which can be used to render a world in the engine. A camera has the following properties:
- Home sector: The sector in which rendering starts.
- Transformation: This is an orthonormal transformation which is applied to all rendered objects to move them from world space to camera space. It is the mathematical representation of position and direction of the camera. The position should be inside the home sector.
- Field of View: Controls the size on screen of the rendered objects and can be used for zooming effects. The FOV can be given either in pixels or as an angle in degrees.
- Shift amount: The projection center in screen coordinates.
- Mirrored Flag: Should be set to true if the transformation is mirrored.
- Far Plane: A distant plane that is orthogonal to the view direction. It is used to clip away all objects that are farther away than a certain distance, usually to improve rendering speed.
- Camera number: An identifier for a camera transformation, used internally in the engine to detect outdated vertex buffers.
- Only Portals Flag: If this is true then no collisions are detected for camera movement except for portals.
Main creators of instances implementing this interface:
Main ways to get pointers to this interface:
Main users of this interface:
Definition at line 102 of file camera.h.
Member Function Documentation
Add a listener to this camera.
Create a clone of this camera.
Note that the array of listeners is not cloned.
Eliminate roundoff error by snapping the camera orientation to a grid of density n.
Get the camera number.
This number is changed for every new camera instance and it is also updated whenever the camera transformation changes. This number can be used to cache camera vertex arrays, for example.
Get the 3D far plane that should be used to clip all geometry.
If this function returns 0 no far clipping is required. Otherwise it must be used to clip the object before drawing.
Return the FOV (field of view) in pixels.
Return the FOV (field of view) in degrees.
Return the inverse flield of view (1/FOV) in pixels.
Get the hit-only-portals flag.
Get the current sector.
Set the X shift amount.
The parameter specifies the desired X coordinate on screen of the projection center of the camera.
Set the Y shift amount.
The parameter specifies the desired Y coordinate on screen of the projection center of the camera.
'const' version of GetTransform ()
Get the transform corresponding to this camera.
In this transform, 'other' is world space and 'this' is camera space. WARNING! It is illegal to directly assign to the given transform in order to modify it. To change the entire transform you have to use SetTransform(). Note that it is legal to modify the returned transform otherwise. Just do not assign to it.
Calculate inverse perspective corrected point for this camera.
Calculate inverse perspective corrected point for this camera.
- Deprecated:
- Use InvPerspective(const csVector2&, float) instead.
Return true if space is mirrored.
Moves the camera a relative amount in camera coordinates.
Moves the camera a relative amount in camera coordinates, ignoring portals and walls.
This is used by the wireframe class. In general this is useful by any camera model that doesn't want to restrict its movement by portals and sector boundaries.
Moves the camera a relative amount in world coordinates.
If 'cd' is true then collision detection with objects and things inside the sector is active. Otherwise you can walk through objects (but portals will still be correctly checked).
Moves the camera a relative amount in world coordinates, ignoring portals and walls.
This is used by the wireframe class. In general this is useful by any camera model that doesn't want to restrict its movement by portals and sector boundaries.
If the hit-only-portals flag is true then only portals will be checked with the 'MoveWorld()' function.
This is a lot faster but it does mean that you will have to do collision detection with non-portal polygons using another technique. The default for this flag is true.
Calculate perspective corrected point for this camera.
Calculate perspective corrected point for this camera.
- Deprecated:
- Use Perspective(const csVector3&) instead.
Remove a listener from this camera.
Set the FOV in pixels.
'fov' is the desired FOV in pixels. 'width' is the display width, also in pixels.
Set the FOV in degrees.
'fov' is the desired FOV in degrees. 'width' is the display width in pixels.
Set mirrored state.
Set the shift amount.
The parameter specifies the desired projection center of the camera on screen.
Move to another sector.
Set the transform corresponding to this camera.
In this transform, 'other' is world space and 'this' is camera space.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.0.2 by doxygen 1.4.7 | http://www.crystalspace3d.org/docs/online/api-1.0/structiCamera.html | CC-MAIN-2015-48 | refinedweb | 828 | 60.82 |
This code is working well locally, but fails sometimes when it is deployed. Why?
@WebService is a spring annotation
public class Calculator {
    @WebService
    public boolean isSumOdd(int sum1, int sum2) {
        int result = 0;
        result = sum1 + sum2;
        if (result % 2 == 1) {
            return true;
        }
        return false;
    }
}
The answer seems to be related to the fact that this singleton is not thread-safe, but why do I have to take care of synchronization? Doesn't every thread pass its own variables to this singleton to get the result? Doesn't it keep the state of the sum variables in its own context across a context switch?
Do threads also share the variables inside the scope of a method in a singleton, so that this could lead to problems? I was thinking that the variables are passed by value and copied, and after a context switch the other thread can restore its state, but I believe that I am wrong.....
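For what it's worth, parameters like sum1 and sum2 and locals like result live on each calling thread's own stack frame, so this particular method has no shared state to synchronize; an intermittent deployment failure would have to come from something else. A hypothetical check (not from the original post) that hammers an equivalent stateless method from many threads:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StatelessCheck {
    // Same shape as isSumOdd: only parameters and locals, no shared fields,
    // so every thread works on its own stack frame.
    static boolean isSumOdd(int a, int b) {
        return (a + b) % 2 == 1;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Boolean>> results = new ArrayList<>();
        for (int i = 0; i < 10000; i++) {
            final int n = i;
            // n + (n + 1) is always odd, so every call must return true.
            Callable<Boolean> task = () -> isSumOdd(n, n + 1);
            results.add(pool.submit(task));
        }
        for (Future<Boolean> f : results) {
            if (!f.get()) throw new AssertionError("unexpected result");
        }
        pool.shutdown();
        System.out.println("all concurrent calls returned the expected value");
    }
}
```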
Jelly is a new scripting paradigm from Jakarta that allows the developer to write scripts using XML. Mind you, this is not just another use for XML. Jelly is powerful, flexible, and easy to use—and more important, it's easy to extend.
Let’s get started using Jelly with the simple example in Listing A. You can grab the complete source code for this article here.
Jelly libraries
Jelly has quite a few built-in libraries, which are listed in Table A. In our example, we make use of three:
- Core
- Log
- Interaction
Libraries are made available by assigning them to XML namespaces. In the first tag after the XML declaration, <j:jelly>, we assign three namespaces: j, l, and i. Our first nested tag makes use of the interaction tag library. This library allows you to capture input from the user. Here, you ask for the user's name, providing a default value of anonymous. The result of the user's input is stored in the variable name. The second tag makes use of the log library. It outputs some text and uses the name variable inside of it.
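Listing A itself ships with the article's download; based on the walkthrough above, its shape is roughly the following (the exact tag and attribute names in the interaction and log libraries are my assumptions, not the article's code):

```xml
<?xml version="1.0"?>
<j:jelly xmlns:

  <!-- Ask for the user's name, defaulting to "anonymous",
       and store the answer in the variable ${name}. -->
  <i:ask

  <!-- Output some text that uses the name variable. -->
  <l:info>Hello, ${name}!</l:info>

</j:jelly>
```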
Table A
See Tag Reference for full details.
Jelly extensibility
One distinguishing Jelly feature is its extensibility. Any regular JavaBean can be bound to an XML tag. The JavaBean does not have to extend any special class or implement any special interface. If it supports an invokable method, such as run(), invoke(), or execute(), the method will be called after all appropriate setters are called. If you want to invoke a different method, you can declare it, as shown in Listing B.
The class in Listing B could be accessed with Jelly via the code in Listing C.
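Listings B and C also live in the download, but the bean side is easy to sketch. A hypothetical bean (class and method names invented for illustration) needs no Jelly base class at all:

```java
// A plain JavaBean that Jelly could bind to a tag. Jelly calls the
// setters for each XML attribute, then the invokable method (run(),
// invoke(), or execute()) once the tag is processed.
public class GreeterBean {
    private String who = "world";

    public void setWho(String who) {
        this.who = who;
    }

    // Named run() so Jelly would invoke it automatically; a method with
    // a different name would have to be declared, as Listing B shows.
    public String run() {
        return "Hello, " + who + "!";
    }
}
```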
Jelly and Ant
Jelly allows full access to all of the existing Ant libraries. This makes it simple to begin writing scripts that make use of Ant’s rich set of utilities. For example, using the interaction and ant libraries from Jelly, you can easily write a directory copy script with optional backup functionality. (See dircopy.jelly.)
This Jelly script will ask you to specify a source and destination directory. It will then ask whether you want to create a backup. If you answer yes (which is the default choice), it will ask you to specify a filename. The script will copy all files from the source directory to the destination directory and if you answered yes, create a tar.gz of the specified directory.
Jelly and Maven
Although Jelly is useful as a stand-alone scripting language, it is most widely used inside of Maven, a project management and build system from Jakarta. Maven is built on the concept of plug-ins. All Maven plug-ins are written using Jelly. For instance, the default goal in Maven is java:jar, which will compile all source files for a project and create a distributable JAR file.
Working under the hood: Jelly tags
Let’s dive into the internals of Jelly and see what it takes to create a custom Jelly tag library. Our example will be a simple tag that turns our body text to lowercase or uppercase. Jelly tags are grouped in the tag libraries we saw in Table A. A tag library is registered with Jelly as a namespace.
<j:jelly
  xmlns="jelly:core"
  xmlns:
This example makes available the core tag library in the default namespace and our custom library in the case namespace. CaseTagLibrary must extend the org.apache.commons.jelly.TagLibrary class and invoke the registerTag method for each available tag.
registerTag("case", CaseTag.class);
We now have one tag available to us, case. CaseTag must extend org.apache.commons.jelly.TagSupport. TagSupport is declared as abstract and requires us to implement the doTag(XMLOutput output) method. After all setters have been invoked, this method will be called. For the case tag, you will look for one attribute, type. For our tag class, you will define one setter method as follows:
public void setType(String type) {
this.type = type;
}
Now let’s look at the meat of the CaseTag class, the doTag method. The first thing we need to look for is whether the user supplied the required attribute, as shown in Listing D.
If type is set to lower or upper, let’s roll. The case tag is designed to take the contents of the tag and convert the text to either lowercase or uppercase. To do this, you access the text and convert it, as shown in Listing E.
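Listings D and E are in the download; stripped of the Jelly plumbing, their combined logic comes down to something like this (the method name and the exception messages are my own — in the real tag the text would come from the tag body and be written to the XMLOutput):

```java
public class CaseLogic {
    // Core of what Listings D and E do: check the required "type"
    // attribute, then convert the tag's body text accordingly.
    public static String applyCase(String type, String body) throws Exception {
        if (type == null) {
            // In the real tag this would be a missing-attribute exception.
            throw new Exception("The attribute 'type' is required");
        }
        if (type.equals("upper")) {
            return body.toUpperCase();
        } else if (type.equals("lower")) {
            return body.toLowerCase();
        }
        throw new Exception("'type' must be 'upper' or 'lower'");
    }
}
```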
Our case tag is now complete. Check out CaseTag.java (included in the download) for the complete source code. You can now use our tag like this.
<case:case type="upper">Hello World!</case:case>
See case.jelly (also in the download) for the complete example.
Jelly installation
After you download Jelly, you use Maven to build and install it. To install on your machine, run maven dist:install. By default, it will be placed in /usr/local/jelly. To use Jelly, make sure that you set the environment variable $JELLY_HOME to point to where Jelly was installed. Then, add $JELLY_HOME/bin to your path. Jelly uses forehead to handle class loading. To make our CaseTagLibrary available to Jelly, drop a JAR file containing the above classes into $JELLY_HOME/custom.
Final steps
Now it’s time to see the script in action. In the directory containing the sample script, run jelly case.jelly. You should be greeted with Hello World!
Wrapping up
Jelly has taken all the great ideas from many of the scripting languages that have come before and rolled them up into one. It is highly extensible, easy to use, and has a rich set of core components. | http://www.techrepublic.com/article/jelly-offers-component-rich-xml-scripting/ | CC-MAIN-2017-17 | refinedweb | 953 | 66.84 |
There are times when it is required to mix C and C++ code together. For example, while using legacy C code, or while using a specific C library that provides your C++ code with some specific functionality. So, some special steps are to be taken care of when using C code in a C++ file or vice versa.
This article, through some examples, discusses the steps required to mix C and C++ code.
1. Call C functions from C++
In this section we will discuss how to call C functions from C++ code.
Here is the C code (Cfile.c):
#include <stdio.h>

void f(void)
{
    printf("\n This is a C code\n");
}
The first step is to create a shared library from this C code. Here is the C++ code that uses it:
#include <iostream>

extern "C" {
    void f();
}

void func(void)
{
    std::cout << "\n being used within C++ code\n";
}

int main(void)
{
    f();
    func();
    return 0;
}
The C function f() is declared within the notation extern "C" to tell the C++ compiler that it has C-type linkage. Here is the output:
$ ./main
 This is a C code
 being used within C++ code
So we see that a C function was successfully called from C++ code.
Also, read this for detailed information on how to create shared libraries in Linux.
2. Call C++ functions from C
In this section we will discuss how to call C++ functions from C code.
Here is the C++ code (CPPfile.cpp):
Widget: could not load
Hi,
I need some help with my widget. My widget reads a file and pass the content to an object.
I don't know why, but my widget keeps showing a message saying that it could not load the widget (in Portuguese), and I need to tap it so that Pythonista refreshes the widget and then it shows the data loaded from the file. I mean, I lock the phone, unlock it after 5 seconds, and I get this error.
I have the optimization code which does not create a new view as it is in the examples. I copied the code from the launcher widget example.
The launcher example does not have this problem.
def main():
    widget_name = __file__ + str(os.stat(__file__).st_mtime)
    v = appex.get_widget_view()
    # Optimization: Don't create a new view if the widget already shows the launcher.
    if v is None or v.name != widget_name:
        shelve_file = shelve.open('data')
        dia = shelve_file['dia']
        #print('=====keys====')
        #print('keys', dia.keys())
        feriados = holidays.Portugal()
        v = EmentaView(dia)
        v.name = widget_name
        appex.set_widget_view(v)
I get the same error while loading my widget (sometimes). It has to do with the amount of memory allocated; you can try removing some of the other widgets in your notification bar, but I have noticed only a very slight difference.
Hi, thanks for your answer.
I did solve my problem some time ago. In my case the problem was related to a module called holidays.
Somehow this was causing the 'could not load' problem. What I did was to import the module somewhere else, and since then I never got that message again.
David | https://forum.omz-software.com/topic/4127/widget-could-not-load | CC-MAIN-2018-13 | refinedweb | 276 | 77.03 |
I am trying to use extern templates in VS 2010.
In most cases, people use extern templates purely as an optimization, but in my case I actually need them for correctness; in the header, the type instantiating the template is forward declared, so I want to defer instantiation to the source file where I have a complete type. Otherwise I'll get a compilation error. (It's valid for the template itself to be incomplete in the header, since it's just a pointer.)
Header:
Code:
class AbstractFolder;
extern template class Watcher<AbstractFolder>;
Source:
Code:
#include "abstractfolder.h"
template class Watcher<AbstractFolder>;
I've *almost* got it working, but I've run into this:
error C2960: 'Watcher<T>' : inconsistent explicit instantiations, a previous explicit instantiation specified 'extern template'
This seems like one of the stupidest errors ever, since my intent is clearly to override the "extern" and force instantiation at that point, but the compiler doesn't seem to want to let me do that.
Unfortunately it's impossible to do this in a source file where the header is not included, since the header with the extern declaration is included from abstractfolder.h (which is the reason forward declarations are needed in the first place).
Two questions. First, is it possible to make VS2010 do what I want here? Second, we may be upgrading to VS2013 soon; will it give me the behavior I want?
Oh, and the strange thing is that if I look up C2960, I get an unrelated error from VS6. There doesn't appear to be any documentation of this error message.
Hey. Mind sending me the files you're coding? I can check to see if I get the same issue. I can try to fix it and send you the fixed code.
Thanks. Hope I can help.
Zip up a solution of a small sample that repros the problem so someone can try it out in VS 2013.
Name: Tor
Member since: 2002-06-04 22:52:43
Last Login: N/A
No personal information is available.
4 Jun 2005 (updated 4 Jun 2005 at 20:38 UTC) »
the mupdf firefox plugin is actually quite nice. get it here. now i just need some time and invest some effort into the linux port. my main problem is where to find a good scrollbar. i can draw my own (ick!), i can swallow gtk+ (double ick!), but i can not access firefox's own (infinite ick!).
more ickiness: i cannot send keyboard events that are uninteresting back to firefox. so all the standard menu and keyboard shortcuts are dead. stone dead. so stupid!
31 May 2003 (updated 31 May 2003 at 15:46 UTC) »
Now all that's needed is a bullet through the POSIX header mess. I don't want to need to use autoconf/automake just to know what headers I need to get the basic libc functions. I just want to write...
#include <libc.h>
...and everything should just work. One header per library. No headers including other headers. No stupid inclusion guards. Okay?
It's a task for school with C#
Question
Hey Everyone,
I have a simple question. For school I have to make a little program with some names and dates and a random number.
Here you can find the question of the excecise:
In this exercise, the user enters their first name, family name, and date of birth. You create a code based on these data: the code starts with the first two letters of the first name in capital letters, then a number between 1 and 10, followed by the last two letters of the first name in lower case, and then the date of birth without delimiters.
thanks guys!
All replies
>I have a simple question.
Yes, I have read a specification of this task and find it very simple.
I would say that you should be able to complete it within hour or two.
Good luck with your home work!
or, as some guys from prison used to say: early in, early out.
Sincerely, Highly skilled coding monkey.
I have a simple question. For school I have to make a little program with some names and dates and a random number.
Here you can find the question of the excecise:
We don't do homework assignments for students here.
Show us the code *you* have written so far.
Describe precisely which part of the program you are stuck on.
Then we can make suggestions to guide you toward a resolution without actually doing your work for you.
- Wayne
Hi MR.BUZZ88,
I would suggest that you could write your home work by yourself, maybe the following code could provide some ideas for you.
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        List<MyClass> list = new List<MyClass>()
        {
            new MyClass(){ FirstName = "Test1", FamilyName = "Test1", BirthDate = new DateTime(2008, 3, 1)},
            new MyClass(){ FirstName = "Test2", FamilyName = "Test2", BirthDate = new DateTime(2008, 3, 1)},
            new MyClass(){ FirstName = "Test3", FamilyName = "Test3", BirthDate = new DateTime(2008, 3, 1)},
            new MyClass(){ FirstName = "Test4", FamilyName = "Test4", BirthDate = new DateTime(2008, 3, 1)},
            new MyClass(){ FirstName = "Test5", FamilyName = "Test5", BirthDate = new DateTime(2008, 3, 1)}
        };
        Random rnd = new Random();
        // Random.Next's upper bound is exclusive, so use 11 to get 1..10.
        var result = list.Select(t => string.Format("{0}{1}{2}{3}",
            t.FirstName.Substring(0, 2).ToUpper(),
            rnd.Next(1, 11),
            t.FirstName.Substring(t.FirstName.Length - 2, 2).ToLower(),
            t.BirthDate.ToString("MMddyyyy"))).ToList();
    }
}

public class MyClass
{
    public string FirstName { get; set; }
    public string FamilyName { get; set; }
    public DateTime BirthDate { get; set; }
}

Wednesday, September 26, 2018 8:52 AM
Well, seriously, I think the code he needs to know has already been taught in lessons. If he still doesn't understand, it's better to have him ask the teacher, so the teacher has a chance to correct it before he graduates (or before he proceeds to a more advanced subject, where the accumulated debt of "knowledge not learnt" is too huge to do anything about).
At least he has to learn how to ask a specific question for help, and you should give him the chance to ask that question.
- Edited by cheong00 Tuesday, September 18, 2018 1:52 AM
Just as we create final classes, we can create final methods that cannot be overridden in subclasses. To explain the purpose behind this, let's look at an example. In Java, the Object class includes methods such as wait and notify. These are system-level methods that implement core language capabilities. If Java allowed users to substitute these methods with their own overridden implementations, the semantics of the language itself would be altered. Therefore, these methods are declared as final.
If we do not want a subclass of our class to override our method implementation, you mark the method as final. The following code snippet shows a declaration of a final method:
public class MyClass {
...
public final void myMethod() {
...;
}
}
The method myMethod has been declared final and cannot be overridden in a subclass of MyClass. For example, the following declaration will generate a compile-time error:
public class YourClass extends MyClass {
...
public void myMethod() {
...;
}
}
There are three benefits to making a method final. The first benefit, as we have seen so far, is to explicitly prevent overriding it in a subclass. As mentioned earlier, there is a very valid reason not to allow the wait and notify methods of the Object class to be overridden in the subclasses.
The second benefit is that it tells the compiler that for a call to this method, dynamic binding isn’t necessary, which potentially results in a slightly more efficient code. Static binding is always more efficient than dynamic binding. In the case of static binding, the method call is resolved at compile time, whereas in case of dynamic binding, the runtime resolves the method call.
The third benefit also results in better efficiency—marking a method final allows the compiler to turn any call to that method into an inline call. When the compiler sees a final method call, it can, at its discretion, skip the normal approach of inserting code via the method call mechanism.
A call mechanism consists of pushing the method arguments on the stack, jumping to the method code, executing it, hopping back to the caller, cleaning off the stack arguments, and finally dealing with the returned value. Instead of this, the compiler now can replace the method call with a copy of the actual code in the method body. This is called inlining and eliminates the overhead of a method call. However, if the method is big, the benefit of saving time in calling and returning from a method will be dwarfed by the amount of time spent inside the method.
Therefore, generally small methods benefit from inlining. The inlining benefits are not necessarily restricted to the size of the method, however, because inlining a method often leads to further optimizations, such as the elimination of dead code or more inlining.
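To make the inlining idea concrete, here is a small illustrative sketch (the InlineDemo class is made up for this example). The final accessor is exactly the kind of small method a compiler may inline; the two expressions below show the call-mechanism form and the form it reduces to once inlined:

```java
public class InlineDemo {
    private final int value = 41;

    // Small final accessor: since it cannot be overridden, the compiler
    // or JIT may replace calls to it with the field read itself.
    public final int getValue() {
        return value;
    }

    public static void main(String[] args) {
        InlineDemo d = new InlineDemo();
        int viaCall = d.getValue() + 1;  // goes through the normal call mechanism
        int inlined = d.value + 1;       // what an inlined call reduces to
        System.out.println(viaCall == inlined);  // true: both compute the same result
    }
}
```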
- Nice blog. You made it very precise and explained well. Thanks
Part of a healthy code ecosystem is consistent conventions. When we all do the same thing the same way, it makes it easier for us to learn our way around each other’s work. It also makes it easier to write tools that can automatically do things for us.
When you build a pub package, we have a set of conventions we encourage you to follow. They describe how you organize the files and directories within your package, and how to name things. You don't have to have every single thing these guidelines specify. If your package doesn't have binaries, it doesn't need a directory for them. But if it does, you'll make everyone's life easier if you call it bin.
To give you a picture of the whole enchilada, here's what a complete package (conveniently named enchilada) that uses every corner of these guidelines would look like:

enchilada/
  pubspec.yaml
  pubspec.lock *
  README.md
  CHANGELOG.md
  LICENSE
  benchmark/
    make_lunch.dart
    packages/ **
  bin/
    enchilada
    packages/ **
  doc/
    getting_started.md
  example/
    lunch.dart
    packages/ **
  lib/
    enchilada.dart
    tortilla.dart
    guacamole.css
    src/
      beans.dart
      queso.dart
  packages/ **
  test/
    enchilada_test.dart
    tortilla_test.dart
    packages/ **
  tool/
    generate_docs.dart
  web/
    index.html
    main.dart
    style.css

* The pubspec.lock file is only entered into source control if the package is an application package.

** The packages directories exist locally after you've run pub get, but won't be checked into source control.
enchilada/
  pubspec.yaml
  pubspec.lock

Every package has a pubspec, a file named pubspec.yaml, in the root directory of the package. That's what makes it a package.

Once you've run pub get, pub upgrade, or pub downgrade on the package, you will also have a lockfile, named pubspec.lock. If your package is an application package, this will be checked into source control. Otherwise, it won't be.
enchilada/
  packages/
    ...

Running pub also generates a packages directory. You will not check this into source control, and you won't need to worry too much about its contents. Consider it pub magic, but not scary magic.

The open source community has a few other files that commonly appear at the top level of a project: LICENSE, AUTHORS, etc. If you use any of those, they can go in the top level of the package too.

For more information, see Pubspec Format.
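For readers who haven't seen one, a minimal pubspec might look like the following. The field values here are purely illustrative; see the Pubspec Format page for the authoritative list of fields:

```yaml
name: enchilada
version: 0.1.0
description: A sample package laid out per these conventions.
dependencies:
  unittest: any
```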
enchilada/
  README.md

One file that's very common in open source is a README file that describes the project. This is especially important in pub. When you upload to pub.dartlang.org, your README is shown on the page for your package. This is the perfect place to introduce people to your code.

If your README ends in .md, .markdown, or .mdown, it is parsed as Markdown.
enchilada/
  CHANGELOG.md

To show users the latest changes to your package, you can include a changelog file where you can write a short note about the changes in your latest release. When you upload your package to pub.dartlang.org, it detects that your package contains a changelog file and shows it in the changelog tab.

If your CHANGELOG ends in .md, .markdown, or .mdown, it is parsed as Markdown.
Two directories in your package are public to other packages: lib and bin. You place public libraries in lib and public tools in bin.

The following directory structure shows the lib portion of enchilada:

enchilada/
  lib/
    enchilada.dart
    tortilla.dart
Many packages are library packages: they define Dart libraries that other packages can import and use. These public Dart library files go inside a directory called lib.

Most packages define a single library that users can import. In that case, its name should usually be the same as the name of the package, like enchilada.dart in the example here. But you can also define other libraries with whatever names make sense for your package.

When you do, users can import these libraries using the name of the package and the library file, like so:

import "package:enchilada/enchilada.dart";
import "package:enchilada/tortilla.dart";
If you want to organize your public libraries, you can also create subdirectories inside lib. If you do that, users will specify that path when they import it. Say you have the following file hierarchy:

enchilada/
  lib/
    some/
      path/
        olives.dart

Users import olives.dart as follows:

import "package:enchilada/some/path/olives.dart";
Note that only libraries should be in lib. Entrypoints (Dart scripts with a main() function) cannot go in lib. If you place a Dart script inside lib, you will discover that any package: imports it contains don't resolve. Instead, your entrypoints should go in the appropriate entrypoint directory.
Dart scripts placed inside of the bin directory are public. Any package that depends on your package can run scripts from your package's bin directory using pub run. Any package can run scripts from your package's bin directory using pub global.

If you intend for your package to be depended on, and you want your scripts to be private to your package, place them in the top-level tool directory. If you do not intend for your package to be depended on, you can leave your scripts in bin.
You can, of course, reference a package from within your app. For example, say your source tree looks like this:

myapp/
  example/
    one/
      sub/
        index.html

The resulting build directory has the following structure:

build/
  example/
    one/
      packages/
        myapp/
          style.css
      sub/
        index.html

In this scenario, index.html references the stylesheet using the relative path ../packages/myapp/style.css. (Note the leading ..) You can also use a path relative to the root URL, such as /packages/myapp/style.css, but you must be careful on how you deploy your app.
enchilada/
  lib/
    guacamole.css

While most library packages exist to let you reuse Dart code, you can also reuse other kinds of content. For example, a package for Bootstrap might include a number of CSS files for consumers of the package to use.

These go in the top-level lib directory. You can put any kind of file in there and organize it with subdirectories however you like.

Users can reference another package's assets using URLs that contain /packages/<package>/<path> where <package> is the name of the package containing the asset and <path> is the relative path to the asset within that package's lib directory.
For example, let's say your package wanted to use enchilada's guacamole.css styles. In an HTML file in your package, you can add:

<link href="packages/enchilada/guacamole.css" rel="stylesheet">

When you run your application using pub serve, or build it to something deployable using pub build, pub copies over any referenced assets that your package depends on.

For more information about using assets, see Pub Assets and Transformers.
enchilada/
  lib/
    src/
      beans.dart
      queso.dart

The libraries inside lib are publicly visible: other packages are free to import them. But much of a package's code is internal implementation libraries that should only be imported and used by the package itself. Those go inside a subdirectory of lib called src. You can create subdirectories in there if it helps you organize things.

You are free to import libraries that live in lib/src from within other Dart code in the same package (like other libraries in lib, scripts in bin, and tests) but you should never import from another package's lib/src directory. Those files are not part of the package's public API, and they might change in ways that could break your code.
When you use libraries from within your own package, even code in src, you can (and should) still use "package:" to import them. This is perfectly legit:

import "package:enchilada/src/beans.dart";

The name you use here (in this case enchilada) is the name you specify for your package in its pubspec.
enchilada/
  web/
    index.html
    main.dart
    style.css

Dart is a web language, so many pub packages will be doing web stuff. That means HTML, CSS, images, and, heck, probably even some JavaScript. All of that goes into your package's web directory. You're free to organize the contents of that to your heart's content. Go crazy with subdirectories if that makes you happy.

Also, and this is important, any Dart web entrypoints (in other words, Dart scripts that are referred to in a <script> tag) go under web and not lib. That ensures that a packages directory is created nearby so that package: imports can be resolved correctly.
(You may be asking whether you should put your web-based example programs in example or web? Put those in example.)
enchilada/
  bin/
    enchilada

Some packages define programs that can be run directly from the command line. These can be shell scripts or any other scripting language, including Dart. The pub application itself is one example: it's a simple shell script that invokes pub.dart.

If your package defines code like this, put it in a directory named bin. You can run that script from anywhere on the command line, if you set it up using pub global.
enchilada/
  test/
    enchilada_test.dart
    tortilla_test.dart

Every package should have tests. With pub, the convention is that these go in a test directory (or some directory inside it if you like) and have _test at the end of their file names. Typically, these use the unittest package.
enchilada/
  benchmark/
    make_lunch.dart
Packages that have performance critical code may also include benchmarks. These test the API not for correctness but for speed (or memory use, or maybe other empirical metrics).
enchilada/
  doc/
    getting_started.md
If you've got code and tests, the next piece you might want is good documentation. That goes inside a directory named doc. We don't currently have any guidelines about format or organization within that. Use whatever markup format that you prefer.
This directory should not just contain docs generated automatically from your source code using dartdocgen. Since that’s pulled directly from the code already in the package, putting those docs in here would be redundant. Instead, this is for tutorials, guides, and other hand-authored documentation in addition to generated API references.
enchilada/
  example/
    lunch.dart

Code, tests, docs, what else could your users want? Standalone example programs that use your package, of course! Those go inside the example directory. If the examples are complex and use multiple files, consider making a directory for each example. Otherwise, you can place each one right inside example.

This is an important place to consider using package: to import files from your own package. That ensures the example code in your package looks exactly like code outside of your package would look.
enchilada/
  tool/
    generate_docs.dart

Mature packages often have little helper scripts and programs that people run while developing the package itself. Think things like test runners, documentation generators, or other bits of automation.

Unlike the scripts in bin, these are not for external users of the package. If you have any of these, place them in a directory called tool.
Thread Class
Creates and controls a thread, sets its priority, and gets its status..
It is not necessary to retain a reference to a Thread object once you have started the thread. The thread continues to execute until the thread procedure is complete.
The following code example demonstrates simple threading functionality.
using System;
using System.Threading;

// Simple threading scenario: Start a static method running
// on a second thread.
public class ThreadExample
{
    // The ThreadProc method is called when the thread starts.
    // It loops ten times, writing to the console and yielding
    // the rest of its time slice each time, and then ends.
    public static void ThreadProc()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine("ThreadProc: {0}", i);
            // Yield the rest of the time slice.
            Thread.Sleep(0);
        }
    }

    public static void Main()
    {
        Console.WriteLine("Main thread: Start a second thread.");
        // The constructor for the Thread class requires a ThreadStart
        // delegate that represents the method to be executed on the
        // thread. C# simplifies the creation of this delegate.
        Thread t = new Thread(new ThreadStart(ThreadProc));

        // Start ThreadProc. Note that on a uniprocessor, the new
        // thread does not get any processor time until the main thread
        // is preempted or yields.
        t.Start();
        t.Join();
    }
}
Issues with MapReduce
The MapReduce solution achieves scalability, but scalability is relative. We were able to scale our application to find the sum of the number of flowers for a trillion or more pickers. But what if we wanted to perform another operation, such as finding the standard deviation among the number of flowers picked, or the mean or mode of these numbers? We would need to write a new MapReduce program for each computation.
Every MapReduce application reads input data individually, and writes its output back to HDFS. As a result, MapReduce is made for acyclic data flow models, a paradigm that doesn't allow for iterative querying or repeated executions on results or inputs. This is limiting, because for every solution, there is always another problem waiting in the wings.
We also don't necessarily want to be bound by reading large data sets from HDFS, given that disk I/O is very expensive and bandwidth is limited. Ideally, we want an in-memory structure partitioned across commodity machines, which will enable us to do repeated querying without reloading from disk.
Aggregating with Spark
As we try out different ways of resolving our problem statement, our list of requirements is growing. The code in Listings 6 and 7 is 104 lines long, and yet it doesn't let us create a chain of operations or computations. In addition to more composable code, we want a language that supports features like cohesion, loose coupling, and orthogonality--meaning that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language.
Java 8 added functional features to the language, but Scala is inherently functional. In this section we'll see what programming in Scala on Apache Spark brings to the table. Note that we assume some experience with Spark, and that you have Spark installed and setup in your development environment already.
Spark vs MapReduce
Just like MapReduce is the programming model for Hadoop, DAG (Directed Acyclic Graph) is the programming model for Spark. We've already noted that MapReduce doesn't have the flexibility needed to create data flows. In MapReduce, you create a map, a shuffler, and a reduce operation. The map is executed on individual blocks, its output is spilled to a circular buffer, and then it's shuffled and sorted for the reducer to pick up. Even if we use a ChainMapper or a ChainReducer, the model is [MAP+ / REDUCE MAP*], where + indicates one or more relevant tasks and * indicates zero or more.

As a result, we can never have a model that executes map->shuffle->reduce->reduce, although this is an important requirement for iterative algorithms. If you want to set up something iterative, your only option is to schedule map->shuffle->reduce as your first job, followed by map->shuffle->reduce. Since jobs in MapReduce are mutually exclusive, the first job has no idea whether a second job will follow. This is represented in Figure 7, where the blue line represents a map task and the black line represents a reduce task. We've excluded shuffle and sort for brevity.
Figure 7. Hadoop MapReduce vs Spark DAG
Now let's analyze the same solution in Spark. You can see the data flow illustrated on the right side of Figure 7. First, note that there are no maps between reducers (i.e., no blue circles) for the Spark solution in Figure 7. That means output from the first MapReduce can be directly fed to the second set of reducers without an explicit map process. Second, there is no HDFS file system between operations. Instead, Spark leverages memory as long as the data fits it, which greatly reduces the cost of disk I/O. Also note the part of the diagram marked STAGE 1, where the yellow circles 1->2->4 and 3->5 are mutually exclusive. These can be simple transformations. Most importantly, Spark can intelligently connect the multiple stages into an execution pipeline, deciding which stages should run in parallel. This powerful design supports lightning-fast cluster computing in Spark.
Solving aggregation with cluster computing
Cluster computing makes Spark an excellent choice for solving our aggregation problem. As we've mentioned, Spark leverages memory rather than relying on HDFS. Objects stored in memory are called RDDs, or resilient distributed datasets, and they are the horsepower in the Spark system.
An RDD is a pointer for a collection of individual memory units. When you combine multiple RDDs, you get the entire object stored in Spark. If you apply a transformation or action on an RDD, all the individual partitions across the cluster are affected. Any loss to an individual unit can be reconstructed by lineage graphs associated with the creation of corresponding units. A lineage graph is the plan Spark employs to create individual RDD units or partitions. Further, RDD is immutable, and is evaluated only till an action (terminal operation) is executed against it (lazy evaluation). RDDs can handle any type of data, which makes Spark very popular. There is one drawback, however: operations on RDD must be generic in order to be applicable to any kind of data, which limits the number and type of available operations.
Listing 8 shows the definition of the aggregate function in Spark, in its simplest form. Before using aggregate, we need to specify how many partitions are to be used. We specify the number of partitions when we create the initial RDD.

Listing 8. Definition of the aggregate function

def aggregate(an initial value)(an intra-partition sequence operation)(an inter-partition combination operation)
Now let's take this aggregation model back to our garden example. Recall that there are four quadrants in the JavaWorld garden: south, north, east, and west. We'll use four partitions, one for each quadrant. Going back to Listing 1, here's how we would write the first line (listing the number of flowers) in Scala:
Listing 9. Aggregation with Spark
val flowers = sc.parallelize(List(11, 12, 13, 24, 25, 26, 35, 36, 37, 24, 25, 16), 4)
Note that the second parameter of 4 represents the number of partitions available in our Spark cluster.
Now it's easy to visualize the data as it relates to our problem statement: 11, 12, and 13 are the number of flowers picked by each person in the south quadrant of the garden. The numbers 24, 25, and 26 are from the north quadrant; 35, 36, and 37 are from the west quadrant; and 24, 25, and 16 are from the east. Each quadrant corresponds to one node in the Spark cluster.
Next, we divide the problem statement into two parts. The first part of the problem is to aggregate the total number of flowers picked in each quadrant; that's the intra-partition sequence aggregation from Listing 8. The second part of the problem is to sum these individual aggregates across the partitions; that's the inter-partition aggregation.
So let's find the intra-partition sequence aggregation results. Note that each flower picker initially goes to the garden with an empty bucket. That will be our starting value of 0.
Southside garden: 11+12+13 = 36
Northside garden: 24+25+26 = 75
Westside garden: 35+36+37 = 108
Eastside garden: 24+25 +16 = 65
Next, we calculate the inter-partition aggregation results:
Southside + Northside + Westside + Eastside = 36 + 75 + 108 + 65 = 284
The sum, stored in an RDD, can further be used and processed for any kind of transformation or other action, and it can be persisted for later iterative use. Now we write the last part of the code. This one line of Scala performs both of the above aggregations:
Listing 10. Complex aggregation with Scala
val sum = flowers.aggregate(0)(_+_, _+_)
We start with 0 as the initial value in each of the 12 buckets. The first _+_ is the intra-partition sum, adding the total number of flowers picked by each picker in each quadrant of the garden. The second _+_ is the inter-partition sum, which aggregates the total sums from each quadrant.

For the aggregation to work, we need two reduce functions after the initial value. What would happen if the initial value weren't zero? If it were 5, for example, that number would be added to each intra-partition aggregate, and also to the inter-partition aggregate. So the first calculation would be:
Southside garden: 11+12+13 = 36 + 5 = 41
Northside garden: 24+25+26 = 75 + 5 = 80
Westside garden: 35+36+37 = 108 + 5 = 113
Eastside garden: 24+25 +16 = 65 + 5 = 70
Here's the inter-partition aggregation calculation with the initial value of 5:
Southside + Northside + Westside + Eastside + 5 = 41 + 80 + 113 + 70 = 309
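To sanity-check both totals without a Spark cluster, the aggregate semantics can be simulated in a few lines of plain Python. This is a sketch of the semantics only, not Spark's actual implementation:

```python
from functools import reduce

def aggregate(partitions, zero, seq_op, comb_op):
    # Fold each partition with the intra-partition sequence op,
    # seeding every partition (and the final combine) with `zero`.
    partials = [reduce(seq_op, part, zero) for part in partitions]
    return reduce(comb_op, partials, zero)

quadrants = [[11, 12, 13], [24, 25, 26], [35, 36, 37], [24, 25, 16]]
print(aggregate(quadrants, 0, lambda a, v: a + v, lambda a, b: a + b))  # 284
print(aggregate(quadrants, 5, lambda a, v: a + v, lambda a, b: a + b))  # 309
```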
Aggregation with accumulators
To illustrate the concept further, assume we want to find out the maximum number of flowers in each quadrant of the garden, and then aggregate the totals. We only need a slight alteration to the intra-partition function:
Listing 11. Accumulators for a quadrant
val sumofmaximums = flowers.aggregate(0)(Math.max(_,_), _+_)
And what if we wanted to find the maximum number of flowers each person could pick, across the entire garden? We could do:
Listing 12. Accumulators for the garden
val maximum = flowers.aggregate(0)(Math.max(_,_), Math.max(_,_))
The initial value used in these examples is called an accumulator, in this case, a value that is iterated across partitions and then propagated for the final result.
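Both accumulator variants can be checked the same way in plain Python, again simulating the semantics rather than Spark itself:

```python
from functools import reduce

def aggregate(partitions, zero, seq_op, comb_op):
    # Seed each per-partition fold and the final combine with `zero`.
    partials = [reduce(seq_op, part, zero) for part in partitions]
    return reduce(comb_op, partials, zero)

quadrants = [[11, 12, 13], [24, 25, 26], [35, 36, 37], [24, 25, 16]]
# Listing 11: maximum per quadrant, then summed across quadrants.
print(aggregate(quadrants, 0, max, lambda a, b: a + b))  # 13+26+37+25 = 101
# Listing 12: single maximum across the whole garden.
print(aggregate(quadrants, 0, max, max))  # 37
```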
Aggregation with tuples
For our final example, let's say that we can use as many initial values as we want. In that case, we could solve the problem of finding the average number of flowers among all of the flower pickers in each quadrant of our garden like this:
Listing 13. Tuples
val flowersandpickers = flowers.aggregate((0, 0))(
  (acc, value) => (acc._1 + value, acc._2 + 1),
  (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
)
In this example, notice that the reduce functions applied within the aggregate have to be both commutative and associative: the result must not depend on the order in which the sequencing or combining operations execute. The initial values are two zeros, representing a tuple, or pair. The first zero is the initial value for the sum total number of flowers picked (because you start with zero flowers in the basket); the second zero is the initial value we use to find the average sum of flowers picked per picker (because you start with zero flowers picked). The intra-sequence aggregation adds the number of flowers in each quadrant. At the same time, we add the number 1, indicating that we have added one flower picker per basket. The inter-partition combination function adds the number of flowers and the number of flower pickers from each quadrant. To find the average, we then write:
Listing 14. Averaging with tuples
val avg = flowersandpickers._1/ flowersandpickers._2.toDouble
For comparison, here is how you would write equivalent code in Python instead of Scala:
Listing 15. Aggregating with Python
flowersandpickers = sc.parallelize(
    [11, 12, 13, 24, 25, 26, 35, 36, 37, 24, 25, 16], 4
).aggregate(
    (0, 0),
    (lambda acc, value: (acc[0] + value, acc[1] + 1)),
    (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1])))
avg = flowersandpickers[0] / float(flowersandpickers[1])
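As a quick check of the arithmetic in plain Python (no SparkContext needed), the tuple aggregation produces the expected flower total and picker count:

```python
from functools import reduce

quadrants = [[11, 12, 13], [24, 25, 26], [35, 36, 37], [24, 25, 16]]
zero = (0, 0)
seq = lambda acc, v: (acc[0] + v, acc[1] + 1)         # (flower total, picker count)
comb = lambda a, b: (a[0] + b[0], a[1] + b[1])
partials = [reduce(seq, q, zero) for q in quadrants]  # [(36, 3), (75, 3), (108, 3), (65, 3)]
total = reduce(comb, partials, zero)                  # (284, 12)
print(total[0] / float(total[1]))                     # about 23.67 flowers per picker
```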
In conclusion
A three-node Spark cluster outperforms MapReduce at about 14 times the speed, and the DAG execution pattern ensures that RDDs are reusable for future iterations. While you could program your Spark solution in Java, Scala code is significantly more efficient. The entire program, with accumulators and tuples, is just 10 lines of code.
- How to get numeric index for tree panel in ExtJs
- How to read XML with "namespace-prefixes" in tags?
- Class prototype gets changed unexpectedly
- How to get grid click column name on extjs4?
- Destroying and creating new panels
- Tree: Add node sorted
- Save data to XML file
- Some graphic matters : fields mixed on display
- Sencha EXTJS 4.x Offline Capability
- How to destroy the state of Grid store
- Disable cache parameter not working
- Sencha Sliced Images Never Used Because of Compass Generated CSS Path Isn't Correct.
- How to avoid store hardcoded field names?
- How catch "this" ?
- GRID PANEL DISAPPEAR
- GRID PANEL DISAPPEAR
- Problem with store when using the groupingsummary feature.
- Spinner hide input field?
- Can't have two treepanels extended from the same class at the same time
- Select cells/rows from grid with mouse
- Get parent in loop inside loop - Xtemplate
- Tabpanel Items automatically add a header to its dockedItems
- how to make a edited column bold in roweditor
- iFrame disappear in IE8 & IE9
- Preferred architecture
- Drag and Drop in Ext4.1.0
- selectionchange treepanel get parent view
- combobox in grid
- Dynamic user interfaces
- Not allowing focus at a readOnly field
- Add a variable in "Ext.data.Model.load"...
- Disbale some items in itemselector
- adding custom row to combo box populated from store
- XMLStore
- real full afterrender event?
- node's text changed event in tree panel
- ExtJS4.1 Tree Panel Filtering
- Store won't sync without setDirty
- Tree panel not expanding properly in IE8/9.
- Syntax error in the component Ext.ux.grid.filter.DateFilter (Ext JS 4.1) Line 133
- ExtJs 4.1.1 TabPanel card generating duplicate html markup
- MVC - using several instances of views that need a controller
- problem with json
- Plugin that adds toolbar to grid
- url undefined when using Ext.data.Model
- Disable Repair Animation
- Problem with Using Keyboard Arrow Keys to Select Dataview Records
- How to dynamically change region of border layout?
- html in panel not coming in top
- List Filter with value label
- panel.Panel,grid.Pane,form.Panel
- How to use suspendEvents to queue events of a Ext.Button?
- ExT JS 4.1 LoadMask
- treepanel with rowbody horizontal resize bug when vertical scroll bar added
- Count store records with condition
- How can I get data?
- Change the style of a form field (not the label's)
- Rest Proxy exception - operation remains in batch
- Custom Layout Advice
- ExtJs static conten (.js ,.css) cache
- Issue in fieldLebel of comboBox
- hideHeaders property is not working in border layout | May be a defect || Ext Js 4.0
- Setting displayfield on combobox
- Load data into json file?
- About the 'noCache' of Ext.data.proxy.Ajax propertie
- Dynamically Setting config options
- TaskRunner + Ajax.request don't work
- Error in 4.1 samples and examples using chrome ?
- how to assign values to panel before initComponent
- Menu and MenuItem click dont work
- DecimalPrecision on Number Field
- Bubble type graph in extJS
- Combo width is not working in Ext 4.1
- Problem Event Binding on Controller
- Exjs4.1 Grid column misalignment
- Get Window Blur event
- Extjs 4.1 auto suggestion/ auto complete without using combo box
- Amazing vanishing markInvalid validation and the date picker datefield component
- Textfield 3.3 vs 4.0
- Keyboard press event and button.
- Unable to load tree data wen it has special characters
- Store.add, store.sync, how do I update my record id?
- Reg: Rendering the Grid
- define with Extend Ext.panel.Panel
- Parent window is reloaded on closing child window automatically in IE9
- Ext 4.1 HtmlEditor not taking proper width.
- ComboBox "Loading..." mask won't go away when drop-down list visible
- How to use ExtJs with asp.net
- Bug with negative positions and getPosition(true)
- Couple of questions regarding Ext.slider.Multi
- [ treeviewdragdrop ] No Ajax on position changed while having on parents changed
- Detect reason why Ext.data.JsonP.request failed
- How to do color per 'slice' on the pie chart
- Ext.Window and corresponding controller
- How can I switch between two plugins in a grid?
- Problem with Coloring each grid cell
- Color legend for the grid panel
- TreePanel : Adding Nodes Back-To-Back
- How?Ext.tree.Panel from json store using metaData to define field instead of a model.
- How not to set response data on model when performing Model.save()???
- Max Header/Toolbar Width?
- Using belongsTo / hasMany with a nesting-enabled restful store.
- pageSize on Combobox for remote loading
- in extjs4, alert window, how to only mask the parent panel!
- create login page using extjs4.1 according to client requirements
- How to escape special characters while loading the tree data in the tree panel
- TAB PANEL
- Extracting datas from xml
- How to get the SVG data for a chart?
- Got got some errors about cross-platform in IE and Opera.
- EXTJs calendar , Deselecting the selected date
- Margins in a Fit-Layout don't work correctely in ExtJS 4.0.2a
- Callback with Servlet
- Combobox always display all items
- In Extjs are there collections to component's instance created ?
- combo box reloading
- Is there any setStore() method for TreePanel?
- Setting the value for the textfield inside fieldcontainer
- Extjs4 Excel export from grid
- Parameter issue
- Balloon widget
- encoding js MS date
- Load Project Common Models
- Change Grid Row Color
- Store value giving "buffered" and "on" undefined errors
- json store giving error on special character '.'
- grid and new value
- TreePanel scroll event using a Controller
- Radio buttons not checked on form load when your form data json has boolean value
- Advanced SVG Supported?
- ToolTip AnchorSize works?
- Whole data displayed per page in grid
- How to access nested models from store in ExtJs 4
- Live Search Grid Panel, hide or remove tbar items
- Textbox Field in a Grid with CellEditing
- grid filtering feature for grouped store
- in Ext.Mssagebox, if using renderTo, but can't render specified component!
- Execution stops after loading "requires"-property
- ExtJS 4 grid update the grid after adding a new record
- Paging toolbar refresh and next not working
- Not select item in the Tree panel in case of setting the idProperty in Model
- Failed to load resource: the server responded with a status of 405
- How to receive pdf an open in new browser window using Ext.direct ?
- autoCreate is not working in 4.1for textField
- Update large data with Ext.Ajax.Request
- How to behave check box tree like radio button tree
- How to access Extjs nested models (loaded from Xml)?
- EXTjs 4 Title-Bug Collapsible Panel - (Collabsed:true)
- Panel tool handler config versus click event
- tree menu not display children
- Textbox on a Grid
- BUG: Combobox htmlEncoding; encoding diffrent in displayField then in the dropdown.
- Row Editor sending wrong Date
- form load() always return failure
- how to get selected menu item
- XTemplates and hasMany associations
- How to create a new store record with ID returned in AJAX response to POST?
- Detailed documentation for ExtJS REST request / response handling?
- ExtJS Combo box forceSelection : true doesn't work when user types in a exact match
- Building scrollbars like Sencha Docs in Ext 4
- Aout NmberField format
- How to use extjs splitbutton in xtemplate
- How to display array data, when each item display contain image, and split button
- grid - get columns display order
- ext js 4 MVC tutorial do not work ...
- I can not get server response from model save with rest proxy
- gridSummary
- How to use Record object with Xtemplate?
- TreeStore and json dont work together
- extjs is really heavy
- On a grid is it possible to toggle between group summary and normal summary?
- Unable to add textfield to a toolbar, me.el is null error | https://www.sencha.com/forum/archive/index.php/f-87-p-31.html?s=f9916fbd119361fcc2af1a6e14d9e8bf | CC-MAIN-2020-16 | refinedweb | 1,837 | 54.83 |
Some Ideas for structuring wikis to help orient people within them
Regional nesting topic initiated by KenManheimer, 4/14/00
As wikis grow I crave some kind of orienting structure. For the wiki we've been using for our consulting project development i've been playing with regional nesting - and been having great success making our wiki more approachable, for us and our clients!
By regional nesting, i mean local regions contained within more global regions, and so on. What this produces is something like an extension of classic document structuring, suitable for a table of contents - with the addition that the nesting isn't limited by a small set of document entities (book/chapter/section/paragraph, or whatever), subparts can exist in multiple places, where appropriate, the structure can be assessed from any location, and so forth.
What we are doing is a simple parent/child relationships between pages, organized around the backlinks. The page from which a new page was created is the initial parent, and the parentage can be adjusted on the backlinks page. (This enables designation of multiple parents, or divorce/adoption, etc, and nicely constrains the parentage to existing backlinks.) The title bar then gets an outline of the page's ancestry, and there is a way to see all the offspring of a page, a map of the entire wiki, etc.
We have had great success with this - we even have our clients braving the fairly extensive territory of our wiki and wading in to add and change things. We had no success getting them to approach it before that point, aside from going to specific pages and maybe wandering around a little. The specifically identified the organization and locational cues as making the difference. I suspect that the degree to which designating the relationships and having the maps helped us improve the organization helped make it more approachable - it makes a quite noticable difference for us, as well!
We've discussed these changes with simon, who seems quite interested. This may be just a beginning - this use of the doubly-directed links (wiki refs and backlinks) is sparking thoughts about implementing the double links more explicitly, rather than having the backlinks be implicit. We'll see - in any case, we're certainly interested in making the changes available for everyone to play, it's just a matter of having the time to formulate and put them out there...
Afternote - simon has incorporated the parenting stuff to ZWiki, yay! The title bar hierarchy, and the "(contents)" link shown below the page titles is part of the scheme, though the nesting isn't shown in the default view. klm.
See WikiPaths for another linking scheme - EvanSimpson
Based on (mostly successful) efforts to describe medical protocols using ZWiki, I am (slowly) working on the following three enhancements (here I am taking for granted all of the map/tree views of ZWiki 0.6, since that is the first version that I worked with):
- pre-configured pages: In this concept, if you indicate that you are entering a protocol, you automatically get regions on the page set up for: 1) assessment 2) intervention 3) record-keeping 4) expected outcome. If you are entering a policy, you get regions set up for: 1) scope 2) preconditions 3) record-keeping 4) details. If you are entering the assessment part of the protocol, you get other details. You get the idea. In this way, all protocol pages look similar and will be carried out correctly by the medical staff.
- report-print: In this concept, the users of the Wiki get to specify reports that are generated by traversing the linkage tree depth-first and combining all of the results. Essentially, the medical professionals are not entirely comfortable with having everything only on the web. This way they can get formatted, printed, versions of the protocol and related docs and put them into a binder. Saves doing this manually through a web browser, one document at a time.
- deep-docs: In this concept, the ZWiki pages are not Documents at all, but are Folders (with all of the usual Wiki linking). Documents in the folders hold the different sections of the documents, such as assessment, intervention, record-keeping, expected outcome. Rendering methods on the folder sequences the rendering of the docs, formatting according to an XSL-like stylesheet. The result is that when the viewer sees the containing view, he/she can expand and collapse the sub-contents. The degenerate case is that the folder has one document, in which case the system behaves as usual. This concept also has the benefit that the main ZWiki folder does not become crowded with multiple hundreds of documents, which means that it takes a long time to access over the Internet (we are already noticing slowness here).
submitted 7/25 - Jeff Risberg
Why not let the most frequently used navigation paths define the parent-child relationships? For example, if you always get to WikiPaths from WikiStructuringIdeas, then WikiStructuringIdeas should be a parent to WikiPaths. The key is to capture the navigation events, and then to use the captured information to display the paths as a map. I'm new to Zope, but one idea is to make a page keep track of its parents via properties. Each time someone follows the WikiPaths link on the WikiStructuringIdeas page, the WikiPaths object should get a parameter indicating WikiStructuringIdeas was the source of the traversal. Then the WikiPaths object could increase the value of its fromWikiStructuringIdeas property by 1 (you would have to create it if it didn't already exist, which would mean this was the first time you got here from there). Then you could easily display the parentage by using the fromXXX property values with some statistical analysis. - RonDagostino
I like the idea outlined by RonDagostino: if the most frequent navigation events are recorded and displayed as suggested paths, they will be reinforced by new visitors. They will get positive feedback. In effects, the navigation system would develop and change through evolution. This sounds exiting. - AlexR
You can get a WorldWideWikiWeb? going this way. You can define a WikiWebTransferProtocol? that includes the ability to optionally specify the source of the traversal (Do you agree you would need to leave the source blank when you traverse the parentage link?), and then any WikiWeb? that obeys the protocol and displays the parentage information as traversible links would automatically join something that would end up growing bigger than even the WikiWikiWeb. You could even get cute and skip over popular parents that recently tested as failed; permanently bad links would eventually fall behind the new parents. It's pretty elegant, actually. All it takes is a good protocol.
I just posted this idea to WikiWikiWeb:WorldWideWikiWeb to see if anybody over there likes it...
I love the hierarchical view! Thank you, thank you, thank you! One comment: Singletons should appear after the main table of contents because singletons provide less information due to their lack of context. Show the stuff with the most information (i.e the hierarchy) first. Here is a quick set of changes to ZWikiPage.py that switches the order. Note also that the singletons as programmed before were missing the li list-item tags, which I put in. Again, thank you, thank you, thank you!
for i in map: if type(i) == StringType: singletons.append('<li><a href="%s/%s">%s</a>' % (rel, quote(i), i)) else: combos.append(i) return ("<h3>Table of Contents</h3> %s <h3>Singleton Pages</h3> <ul> %s </ul></p>" % (present_nesting(self.id(), combos, rel),string.join(singletons," "))) return ("<h3>Singletons</h3> <ul> %s </ul><h3>The rest</h3> %s </p>" % (string.join(singletons," "), present_nesting(self.id(), combos, rel)))
- klm 5/25/2000 - Glad you like it! Personally, i implemented it because i felt it would be helpful, and now i don't want to do without it!
I agree about putting the singletons lower down, in fact it's what i did on our slightly later internal version - see the wikis on zope.org. That said, singletons often should be considered aberrations in a wiki, so there's some merit to putting them up front, to be wiped out. (Hopefully soon, we'll see all the default wiki pages included in the default nesting, so singletons really are unusual...) (SM: that was done in 0.6)
Re the li tags - i much prefer to just collect together all the singletons in a single paragraph - they don't seem to need structuring, so separate (list item) lines for each seems to me to be wasted space. I put in the list container just to indent the bunch nicely...
Not sure where this should go - but - how can I get the map views to include all the relevant headers and footers - the map view is too bald. -jc
- klm 5/25/2000: I meant to include the standard_wiki_header and standard_wiki_footer, but my simplistic approach failed and i ran out of time to figure out what was going wrong. This is something that should be done! But be careful saying "bald" as if it were a bad thing. Some of my best selves are bald! :->
Other kinds of WikiStructuringIdeas: CliffordAdams' MeatballWiki:ViewPoint, UseModWiki:SubPages.
FlorianKonnertz, 2002/09/17 16:21 GMT (via web):
Could someone tell me briefly, if a subpage feature (/subpage) is planned to implement in the near future or if i could help to?! -FloK | https://zwiki.org/FrontPage/WikiStructuringIdeas | CC-MAIN-2021-25 | refinedweb | 1,579 | 61.67 |
Hi
In my application I have two ui forms. On the first form, I have a push button "Hospitality". Now I have another form (Widget type) which I want to be shown when I press the "Hospitality" button. How can I achieve this?
Hi
In my application I have two ui forms. On the first form, I have a push button "Hospitality". Now I have another form (Widget type) which I want to be shown when I press the "Hospitality" button. How can I achieve this?
Hi,
You can use signal and slot connection:
YourNewWidget *widget=new YourNewWidget();
connect(ui->HisoitalityBtn,SIGNAL(clicked()),widget,SLOT(showMaximized()));
Hopefully that will help
I tried this. But no use. I am writing my code below. Please check it and let me know my mistake.
Here, HospRefineSearch is the name of my second form(which I want to activate)Here, HospRefineSearch is the name of my second form(which I want to activate)Code:AppUi::AppUi(QWidget *parent) : QMainWindow(parent), ui(new Ui::AppUi) { ui->setupUi(this); } AppUi::~AppUi() { delete ui; } void AppUi::changeEvent(QEvent *e) { QMainWindow::changeEvent(e); switch (e->type()) { case QEvent::LanguageChange: ui->retranslateUi(this); break; default: break; } } void AppUi::on_btnAdvancedSearch_clicked() { } void AppUi::on_btnHospitality_clicked() { HospRefineSearch *widget=new HospRefineSearch(); widget->showMaximized(); }
What is the result when you run this code ? you click on the button and its just not working ? are you sure you included your HospRefineSearch .h file ?
On clicking the button, nothing is event, no new form is geeting opened. Yes, I have included HospRefineSearch .h file. Its code is mentioned below
Also, please tell me I used below codeAlso, please tell me I used below codeCode:#include <QWidget> namespace Ui { class HospRefineSearch; } class HospRefineSearch : public QWidget { Q_OBJECT public: HospRefineSearch(QWidget *parent = 0); HospRefineSearch(); protected: void changeEvent(QEvent *e); private: Ui::HospRefineSearch *ui; };
rather thanrather thanCode:void AppUi::on_btnHospitality_clicked() { HospRefineSearch *widget=new HospRefineSearch(); widget->showMaximized(); }
Is it correct or not?Is it correct or not?Code:connect(ui->btnHospitality,SIGNAL(clicked()),widget,SLOT(showMaximized()));
Actually they should work the same as I know....your code seems correct !! but try this:
in your cpp file
Code:AppUi::AppUi(QWidget *parent) : QMainWindow(parent), ui(new Ui::AppUi) { ui->setupUi(this); HospRefineSearch *widget=new HospRefineSearch(); connect(ui->Hospitality,SIGNAL(clicked()),widget,SLOT(showMaximized())); }
I have already tried this. But this also didn't worked for me .
ok please try to define you object on appui.h as a member like this:
and in cpp file:and in cpp file:Code:HospRefineSearch *widget;
I don't know just give it a tryI don't know just give it a tryCode:void AppUi::on_btnHospitality_clicked() { widget=new HospRefineSearch(); widget->showMaximized(); }
Tried this. Still the same problem.Tried this. Still the same problem.ok please try to define you object on appui.h as a member like this:
Anyways, Thanks a lot for your suggestions.Anyways, Thanks a lot for your suggestions.I don't know just give it a try
Basically it should work, but you have a serious problem in that you will immediately "lose" your handle to the HospRefineSearch instance, so you won't be able to reference it (or delete it when done).
Oh! On second look I see you're missing the connect. You need a connect statement along the lines of
But one should always capture and test the return code from connect() as follows;But one should always capture and test the return code from connect() as follows;Code:connect(ui->btnHospitality,SIGNAL(clicked()),widget,SLOT(on_btnHospitality_clicked()));
Code:bool ok = connect(...); Q_ASSERT(ok); Q_UNUSED(ok);
You also have a QMainWindow object, which you have setted centralWidget with first widget. So connecting first button clicked signal to a function that change centralWidget for your QMainWindow.
Try this mine sample:
Regards camera, but it's so short that you can see how go between "screen".
You always need a "connect". In some cases you can simply point it at "showMaximized" or some such, while in other cases you need to point it at your own slot.
(Note that you need to declare the target of a "connect" to be a "slot". Eg:in your .h file.in your .h file.Code:public slots: void on_btnXyz(); .....
Please read the excellent documentation section on signals & slots. | http://developer.nokia.com/community/discussion/showthread.php/218063-How-to-activate-another-ui-on-button-click-event?p=813108&viewfull=1 | CC-MAIN-2014-10 | refinedweb | 712 | 56.45 |
#include <Controller.h>
List of all members. 102 of file Controller.h.
[inline]
Constructor.
Definition at line 104 of file Controller.h.
Constructor, sets a default root control.
Definition at line 105 of file Controller.h.
[inline, virtual]
Destructor.
Definition at line 106 of file Controller.h.
[private]
shouldn't be called...
[virtual]
register for events and resets the cmdstack
Reimplemented from BehaviorBase.
Definition at line 32 of file Controller.cc.
stop listening for events and resets the cmdstack
Definition at line 53 of file Controller.cc.
passes an event to the top control
Implements EventTrapper.
Definition at line 75 of file Controller.cc.
just for e-stop activation/deactivation
Definition at line 127 of file Controller.cc.
will take the command stack back down to the root
Definition at line 137 of file Controller.cc.
Referenced by DoStart(), DoStop(), setRoot(), and takeLine().
refreshes the display, for times like sub-control dying, the previous control needs to reset it's display
Definition at line 147 of file Controller.cc.
Referenced by pop(), reset(), setRoot(), and takeLine().
puts a new control on top
Definition at line 153 of file Controller.cc.
Referenced by chkCmdStack(), and setNext().
kills the top control, goes to previous
Definition at line 162 of file Controller.cc.
Referenced by reset(), and setNext().
returns the current control
Definition at line 132 of file Controller.h.
sets the root level control
Definition at line 169 of file Controller.cc.
Sets the emergency stop MC to monitor for pausing.
Definition at line 176 of file Controller.cc.
[inline, static]
Gives a short description of what this class of behaviors does... you should override this (but don't have to).
If you do override this, also consider overriding getDescription() to return it
Definition at line 138 of file Controller.
Definition at line 139 of file Controller.h.
attempts to open a Java object on the desktop
Definition at line 142 of file Controller.h.
Referenced by WorldStateSerializerBehavior::DoStart(), WalkControllerBehavior::DoStart(), ViewWMVarsBehavior::DoStart(), UPennWalkControllerBehavior::DoStart(), HeadPointControllerBehavior::DoStart(), Aibo3DControllerBehavior::DoStart(), WalkCalibration::err(), SegCamBehavior::setupServer(), RegionCamBehavior::setupServer(), and RawCamBehavior::setupServer().
[static]
Definition at line 188 of file Controller.cc.
calls close() on a Java object loaded with loadGUI() (on the desktop)
Definition at line 206 of file Controller.cc.
Referenced by SegCamBehavior::closeServer(), RegionCamBehavior::closeServer(), RawCamBehavior::closeServer(), WorldStateSerializerBehavior::DoStop(), WalkControllerBehavior::DoStop(), ViewWMVarsBehavior::DoStop(), UPennWalkControllerBehavior::DoStop(), HeadPointControllerBehavior::DoStop(), and Aibo3DControllerBehavior::DoStop().
called by wireless when there's new data from the GUI
Definition at line 214 of file Controller.cc.
Referenced by DoStart().
called by wireless when someone has entered new data on the tekkotsu console (NOT cin)
Definition at line 245 of file Controller.cc.
[protected] 292 of file Controller.cc.
Referenced by Controller().
assigns appropriate values to the static event bases
Definition at line 328 of file Controller.cc.
Referenced by init().
called with each line that's entered on the tekkotsu console or from the GUI
Definition at line 360 of file Controller.cc.
Referenced by console_callback(), and gui_comm_callback().
called with slots (options), a name to lookup; will select the named control
Definition at line 338 of file Controller.cc.
Referenced by takeLine().
sets a config value - some values may require additional processing (done here) to have the new values take effect
Definition at line 519 of file Controller.cc.
maintains top Control
Definition at line 540 of file Controller.cc.
Referenced by push(), select(), takeLine(), and trapEvent().
called when the estop switches on
causes the top control to activate, registers for button events
Definition at line 548 of file Controller.cc.
Referenced by processEvent(), and setEStopID().
called when the estop switches off
causes the top control to deactivate, stops listening for buttons
Definition at line 558 of file Controller.cc.
returns true if a valid control is available on the stack
if the stack is empty, will push root if it's non-null
Definition at line 570 of file Controller.cc.
Referenced by activate(), push(), refresh(), takeLine(), and trapEvent().
[inline, static, protected]
returns true when the current time and last time are in different periods
Definition at line 196 of file Controller.h.
Referenced by trapEvent().
event masks used by processEvent()
Definition at line 115 of file Controller.h.
Referenced by initButtons(), ValueEditControl< T >::pause(), ValueEditControl< T >::processEvent(), and trapEvent().
Definition at line 116 of file Controller.h.
Definition at line 117 of file Controller.h.
Referenced by initButtons(), and trapEvent().
Definition at line 118 of file Controller.h.
Definition at line 119 of file Controller.h.
Definition at line 120 of file Controller.h.
invalid_MC_ID if not active, otherwise id of high priority LEDs
Definition at line 183 of file Controller.h.
Referenced by activate(), chkCmdStack(), deactivate(), DoStart(), DoStop(), and push().
the EmergencyStopMC MC_ID that this Controller is monitoring
Definition at line 186 of file Controller.h.
Referenced by setEStopID().
the base control, if cmdstack underflows, it will be reset to this
Definition at line 189 of file Controller.h.
Referenced by chkCmdStack(), setRoot(), takeLine(), and ~Controller().
the stack of the current control hierarchy
should never contain NULL entries
Definition at line 193 of file Controller.h.
Referenced by activate(), chkCmdStack(), deactivate(), pop(), push(), refresh(), reset(), setNext(), takeLine(), top(), and trapEvent().
the time of the last event
Definition at line 205 of file Controller.h.
the time of the current event (do*() can check this instead of calling get_time() )
Definition at line 206 of file Controller.h.
the magnitude of the last next event (nextItem)
Definition at line 207 of file Controller.h.
the duration of the last next event (nextItem)
Definition at line 208 of file Controller.h.
the magnitude of the last prev event (prevItem)
Definition at line 209 of file Controller.h.
the duration of the last prev event (prevItem)
Definition at line 210 of file Controller.h.
if doReadStdIn() was already called, but the buttons are both still down
Definition at line 211 of file Controller.h.
true if the Controller is currently active (in the activate()/deactivate() sense, not DoStart()/DoStop() sense - use isActive() for that...)
Definition at line 212 of file Controller.h.
Referenced by activate(), deactivate(), DoStart(), processEvent(), setEStopID(), and takeLine().
the socket to listen on for the gui
Definition at line 214 of file Controller.h.
Referenced by activate(), chkCmdStack(), closeGUI(), console_callback(), DoStart(), DoStop(), loadGUI(), pop(), push(), and takeLine().
[static, protected]
currently can't pull connection socket off of server socket, so only one Controller
Definition at line 215 of file Controller.h.
Referenced by closeGUI(), console_callback(), DoStart(), DoStop(), gui_comm_callback(), loadGUI(), pop(), push(), takeLine(), and ~Controller(). | http://www.tekkotsu.org/dox/classController.html | crawl-001 | refinedweb | 1,085 | 51.24 |
Why doesn't this work as one may have naively expected?
class Foo(object):
def __init__(self):
self.bar = 3
def __bool__(self):
return self.bar > 10
foo = Foo()
if foo:
print 'x'
else:
print 'y'
x
For Python 2-3 compatibility, just add this to your example:
Foo.__nonzero__ = Foo.__bool__
or expand the original definition of Foo to include:
__nonzero__ = __bool__
You could of course define them in reverse too, where the method name is __nonzero__ and you assign it to __bool__, but I think the name __nonzero__ is just a legacy of the original C-ishness of Python's interpretation of objects as truthy or falsy based on their equivalence with zero. Just add the statement above and your code will work with Python 2.x, and will automatically work when you upgrade to Python 3.x (and eventually you can drop the assignment to __nonzero__).
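Putting the pieces together, here is the compatible version of the example (runnable under either interpreter; under Python 3 the __nonzero__ alias is simply ignored):

```python
class Foo(object):
    def __init__(self):
        self.bar = 3

    def __bool__(self):          # Python 3 consults __bool__ for truth testing
        return self.bar > 10

    __nonzero__ = __bool__       # Python 2 consults __nonzero__; alias them

foo = Foo()
print('x' if foo else 'y')       # bar is 3, and 3 > 10 is False, so this prints 'y'
```

With the alias in place both interpreters agree: the instance is falsy until bar exceeds 10.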
I want to provide some examples to clarify the technical definitions around namespaces. Wikipedia has the following definition:

"Namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources, while another set of processes sees a different set of resources."
This definition is quite a bit to digest for some folks, so perhaps this analogy will help. Consider my apartment building. It's technically two distinct buildings with their own entrances. However, the parking garage, gym, pool, and common rooms are shared. The buildings have their own names, City Place and City Place 2. They have their own street addresses, floors, and elevators. Yet, they are attached to the same physical complex.
The physical complex is the same idea as a computer. Two namespaces (or more) can reside on the same physical computer, and much like the apartment building, namespaces can either share access to certain resources or have exclusive access.
There are seven common types of namespaces in wide use today. Using the apartment as our guide, let's walk through a summary of what each type does. Below is a brief overview of each namespace type. In subsequent articles, we will show how each namespace works by example.
Process isolation (PID namespace)
A PID, or process ID helps a system track a specific task on a computer. When you launch Firefox on your computer, it will have a PID associated with it. In order to make sure that the computer interacts with a task properly, the task is referenced by the PID. Let's assume that you have Firefox and the Brave Browser open at the same time. You fire up your preferred search engine, such as DuckDuckGo, in each browser and search for cat pictures in one and dog pictures in the other. Both browsers are making a similar request to the same website. How does the computer ensure that the correct search results are returned to the correct browser? One of the main ways is by tracing the requests each PID makes and then returning the results to the requesting process.
"Great, I'm lost," you say. What does this have to do with apartment buildings and namespaces? Well, let's say that you want to run identical copies of a piece of software. Some software was not written to have more than one copy open at a time. To accomplish this, you might have to isolate the PID so that it is not aware of what is going on outside of its own processes. This is what process isolation can help to solve.
To put it another way, going back to our apartment building, the individual apartments in the complex are similar to the PID namespace. Each household in the apartment complex wants to turn on their TVs and watch a variety of content. In a silly parallel universe, imagine that when a single person in the apartment building turned on their TV, no one else could watch TV because otherwise, the cable provider wouldn't know which TV should be watching "The Office" and which "The Apprentice." Namespaces would allow the cable company to isolate each household and provide distinct programming to each resident because it no longer matters what other people in your complex are viewing.
Network interfaces (net namespace)
Every computer that is connected to a network (such as the internet) requires an IP address. This is a unique number that allows computers to communicate effectively. When a specific type of resource is accessed, say a web page, there is a particular port used for that communication. This is because a computer could host a web server, a game server, and perhaps an email server on the same host. The ports might be 80 or 443 for web traffic, 8888 for game traffic, and 25 for email. When I type https://<website address> into my browser, the computer translates it to send the traffic to the website's IP xxx.xxx.xxx.xxx on port 443. The server on the other end then replies with the appropriate content via the source IP address. Some technology stacks, as mentioned before, do not support multiple instances of the software running concurrently. Unlike PID isolation, however, when software such as an email server is receiving a connection, it expects to be on a specific port. So even if you isolated the PID, the email server would only have a single instance running because port 25 is already in use. Network namespaces allow processes inside each namespace instance to have access to a new IP address along with the full range of ports. Thus, you could run multiple versions of an email server listening on port 25 without any software conflicts.
In my building scenario, the buildings are attached to the same physical complex and are mirror images of each other. This means there are the same amount of living spaces in each building. In addition, they follow the same apartment numbering schema. This means that each building has Unit 707, for example. In this example, the IP address is like the street address of the apartment buildings, and the apartment number is the port. Each apartment complex is part of the same physical building, but they have different street addresses. Since the buildings are not physically separate, without this net namespace a company sending a bill to Steve Ovens, 123 Fake Street Unit 707 might not ever receive a reply because it went to the wrong address. More likely, the post office simply would not attempt to deliver the letter due to the ambiguity of the address. However, net namespaces allow us to have 123 Fake Street and 125 Fake Street in the same physical building.
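The port-exclusivity problem that the net namespace solves is easy to observe from ordinary user space: within a single network namespace, two sockets cannot bind the same (address, port) pair. A small illustration (plain Python, no namespaces involved):

```python
import socket

a = socket.socket()
a.bind(("127.0.0.1", 0))            # port 0: let the kernel pick a free port
port = a.getsockname()[1]

b = socket.socket()
try:
    b.bind(("127.0.0.1", port))     # same (address, port) pair in the same namespace
    conflict = False
except OSError:                     # EADDRINUSE: the pair is already taken
    conflict = True

print("conflict:", conflict)        # conflict: True
a.close()
b.close()
```

A second network namespace would get its own full port range, so both binds could succeed side by side.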
Unix Timesharing System (uts namespace)
This namespace is unfortunately named by today's standards. It dates back to the early days of Unix and has to do with how information was stored for a specific system call. Today, this detail is largely lost to history and what you really need to know is that the UTS namespace allows for the segregation of hostnames. Often hostnames are simply a convenience. As mentioned earlier, with some exceptions, most communication to and from a host is done via the IP address and port number. However, it makes life a lot easier for us humans when we have some sort of name attached to a process. Searching through log files, for instance, is much easier when identifying a hostname. Not the least of which because, in a dynamic environment, IPs can change.
In our building analogy, the hostname is similar to the name of the apartment building. Telling the taxi driver I live in City Place apartments is usually more effective than providing the actual address. Having multiple hostnames on a single physical host is a great help in large containerized environments.
User namespace
Every computer system has some way of tracking which user owns which file(s). This allows the system to restrict access to sensitive system files. It also prevents people using the same computer from accessing each others' files. To the end-user, the files are shown to be owned by a login name such as stratus. To the computer, however, this is just an arbitrary combination of alphanumeric characters. In order to track these permissions correctly, there is a process of mapping the user stratus to a specific user identification (UID) number, such as 1001. This UID is then applied to the metadata of the file. This allows you to change your username to stratus1 without the system having to do a significant amount of metadata updating. The mapping file gets updated so that UID 1001 is then associated with stratus1 instead of stratus. The entry is changed in a single (or a small number) of places.
Think of this as similar to the mailboxes on the floor of my apartment building. Each mailbox has a name or multiple names associated with it. When someone wishes to deliver mail to me, they put the mail in box 707, for example. Should I decide to change my name, or add my wife's name to the mailbox, nothing changes. The mail still goes to the same location, only the authorized persons can view these files, and my key to unlock the box stays unchanged.
Mount (mnt namespace)
The mount namespace is used to isolate mount points such that processes in different namespaces cannot view each others' files. If you are familiar with the chroot command, it functions similarly.
In our analogy, I am going to deviate a little from the apartment. Let's assume two people get married. In this scenario, there are a bunch of things that usually end up being shared. You probably put their name on the address listing, you might have both names on the bills, have both names on any car ownerships, and add them to your credit cards, etc. In this sense, you are sharing the same mount point with the other person. However, their credit may be separate, they might pay their own taxes, and so on. Their Social Insurance number (Social Security number for our friends in the USA) stays distinct from yours and vice versa. Therefore, while your partner may see a similar view of the financial situation, it will not be exactly the same. They have no real ability to see any financial matters that are legally your sole possession.
The same is true for filesystem mount points. By default, different mount namespaces cannot view the other's content. As far as the namespace is concerned, it is at the root of the file system, and nothing else exists. However, you can mount portions of an underlying file system into the mount namespace, thereby allowing it to see additional information.
Interprocess Communication (IPC)
This namespace is highly technical and complex to understand and describe. So much so that Marty Kalin has a series of articles on opensource.com detailing Interprocess Communication (IPC). If you are really interested, I will refer you to Marty's series as I could not do this topic justice. IPCs handle the communication between processes by using shared memory areas, message queues, and semaphores. If all of that sounds foreign to you, don't worry, you aren't alone. The most common application for this type of management is possibly the use of databases.
If we zoom in on a single aspect of IPC, shared memory, we can get a basic understanding of the importance. In programming, whenever the application needs to keep track of some information temporarily, it requests that the operating system dedicate a certain amount of random access memory (RAM) to its process. You can think of this, similar to starting a conversation with someone in a chat program. By default, there are only two recipients, yourself and the person on the other end, or thought of another way, the operating system, and the application. Both participants can see the entire history of the conversation, but no one else can. However, if someone new is invited into the chat, they too can read and participate in the discussion. Shared memory can act similarly, allowing two or more programs to have access to the same information. A change by one program will be immediately visible to the other program.
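The shared-memory flavor of IPC is easy to demonstrate from Python's standard library (Python 3.8 or later); the two handles below stand in for the two chat participants:

```python
from multiprocessing import shared_memory

# One participant creates a named region of RAM...
writer = shared_memory.SharedMemory(create=True, size=16)
writer.buf[:5] = b"hello"

# ...and any other process that knows the name attaches to the same bytes.
reader = shared_memory.SharedMemory(name=writer.name)
first = bytes(reader.buf[:5])      # b'hello'

writer.buf[0] = ord("j")           # a change by one participant...
second = bytes(reader.buf[:5])     # ...is immediately visible to the other: b'jello'

reader.close()
writer.close()
writer.unlink()                    # free the region once everyone is done
print(first, second)
```

Here both handles live in one process for brevity, but the same attach-by-name step works from a completely separate process.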
CGroup
For an in-depth look at CGroups, I recommend reading my series on redhat.com. In that four-part series, I go into detail about how CGroups work and how you can put them into practice. In brief, CGroups are a mechanism for controlling system resources. When a CGroup is active, it can control the amount of CPU, RAM, block I/O, and some other facets which a process may consume. By default, CGroups are created in the virtual filesystem
/sys/fs/cgroup. Creating a different CGroup namespace essentially moves the root directory of the CGroup. If the CGroup was, for example,
/sys/fs/cgroup/mycgroup, a new namespace CGroup could use this as a root directory. The host might see
/sys/fs/cgroup/mycgroup/{group1,group2,group3} but creating a new CGroup namespace would mean that the new namespace would only see
{group1,group2,group3}.
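The re-rooting described above is just path arithmetic, which can be sketched without touching a real cgroup filesystem (purely illustrative; mycgroup and the group names are the hypothetical examples from the text):

```python
from pathlib import PurePosixPath

def view_from_namespace(full_path, namespace_root):
    """What a host-side cgroup path looks like from inside a cgroup
    namespace whose root directory is namespace_root."""
    rel = PurePosixPath(full_path).relative_to(namespace_root)
    return "/" + str(rel)

host_paths = [
    "/sys/fs/cgroup/mycgroup/group1",
    "/sys/fs/cgroup/mycgroup/group2",
    "/sys/fs/cgroup/mycgroup/group3",
]
for p in host_paths:
    print(view_from_namespace(p, "/sys/fs/cgroup/mycgroup"))
# /group1
# /group2
# /group3
```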
You might wonder why this might be needed. Part of this is security related. If a new CGroup namespace was not created, there is a potential to leak information. The man page says it best:
It prevents information leaks whereby cgroup directory paths outside of a container would otherwise be visible to processes in the container. Such leakages could, for example, reveal information about the container framework to containerized applications.
In a traditional CGroup hierarchy, there is a possibility that a nested CGroup could gain access to its ancestor. This means that a process in
/sys/fs/cgroup/mycgroup/group1 has the potential to read and/or manipulate anything nested under
mycgroup.
Finally, thinking specifically of containers, CGroup namespaces allows containers to be agnostic of ancestor CGroups. This is important to something like OpenShift, which may migrate containers between hosts. Without the isolation provided by namespaces, the full CGroup path names would need to be replicated on a new host when migrating a container. Since pathing must be unique, CGroup namespaces help avoid conflicts on the new host system.
In terms of the building analogy, a CGroup is similar to the utilities in each unit. While CGroups can limit resource usage, it also can act as an accountant. So while my apartment doesn't prevent me from using a certain amount of power or water, it definitely tracks (and bills) my usage. That usage is just a tiny part of the usage of the building as a whole. I do not know the building's total usage (my apartment's ancestor), nor can I know how much my neighbor is utilizing. They are in a completely different CGroup namespace.
Wrapping up
So there you have a brief overview of what the seven most used namespaces are. Hopefully, my analogy was useful and clear. In the next couple of articles, I explore some of these namespaces and how they are created by hand. This will give you a better understanding of the utility of namespaces. In the final article, I tie it all together, including the use of CGroups to explain how containers function "under the hood." | https://www.redhat.com/sysadmin/7-linux-namespaces | CC-MAIN-2021-17 | refinedweb | 2,385 | 63.19 |
I created a form with javafx that opens a login dialog, after the user enters the login information correctly the dialog closed and the main form loaded some data, what I need is that when the login dialog is closed it would return the user id (which make the login) to the main form, the code of the above case is like that:
The main form
Stage loginDialog = new LoginDialog(stage); loginDialog.sizeToScene(); loginDialog.showAndWait();
the login dialog form
/* do the login */ close(); /* need to return thew user id to the main form*/
Any help please
The first thing I would suggest is to create your own simple dialog box. The controlsFX stuff is cool, but my experience has been that some of the controls are overly complex and others have bugs. Here is an abridged example.
public class DialogBox { private static String[] login; public static String[] display(String title, String message) { Stage window = new Stage(); window.initModality(Modality.APPLICATION_MODAL); window.setTitle(title); window.setWidth(300); window.setHeight(175); window.initStyle(StageStyle.UTILITY); Label label = new Label(message); // Set up the JavaFX button controls and listeners and the text fields // for the login info. The button listeners set the login values window.setScene(new Scene(root, 300, 175); window.showAndWait(); return login; } }
As you can see, there is a single static method called display() that returns a String array that contains the user login information. Simply do a static call to this method as follows.
String[] login = DialogBox.display("Login Dialog", "Enter User Name and Password"); | https://javafxpedia.com/en/knowledge-base/16810370/return-values-from-javafx-dialog | CC-MAIN-2020-40 | refinedweb | 256 | 51.78 |
s/mm-commits/lkml/
On 07/23, Eric W. Biederman wrote:
>
> Oleg Nesterov <oleg@tv-sign.ru> writes:
>
> > <troll mode on>
> >
> > Sadly, I can't see some really bad problems with this patch ;)
> >
> > Because with this change it is much harder to remove tasklist_lock
> > for the "kill(-1)" case.
> >
> > kill(-1) is not time critical, the problem it holds tasklist_lock.
> > And this patch makes things worse for the global namespace.
>
> Slightly. It leaves the code very readable in all namespaces, and it
> puts all of the logic in one function where it can be more easily
> worked with.
>
> I have yet to see an instance where we can safely drop tasklist_lock. In
> the kill -1 case.
Afaics, all we need is the patch below. Then we can s/tasklist/rcu/ + add
fat comment to explain why this is safe.
Oleg.
--- kernel/signal.c
+++ kernel/signal.c
@@ -1110,6 +1110,23 @@ out_unlock:
EXPORT_SYMBOL_GPL(kill_pid_info_as_uid);
/*
+ * Same as group_send_sig_info(), but make sure we don't race
+ * with exec() when we don't hold tasklist_lock
+ */
+int kill_xxx(int sig, struct siginfo *info, struct task_struct *g)
+{
+ struct task_struct *p = g;
+ int ret;
+
+ do {
+ ret = group_send_sig_info(sig, info, p);
+ if (ret != -ESRCH)
+ break;
+ } while_each_thread(g, p);
+
+ return ret;
+}
+
+/*
* kill_something_info() interprets pid in interesting ways just like kill(2).
*
* POSIX specifies that kill(-1,sig) is unspecified, but what we have
@@ -1137,7 +1154,9 @@ static int kill_something_info(int sig,
for_each_process(p) {
if (p->pid > 1 && !same_thread_group(p, current)) {
- int err = group_send_sig_info(sig, info, p);
+ int err = kill_xxx(sig, info, p);
+ if (err == -ESRCH) /* not possible under tasklist */
+ continue;
++count;
if (err != -EPERM)
retval = err;
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
Hi, I created a webpage explaining what polymorphism is. Could someone point out any inaccuracies, or additions that I could make?
Polymorphism

Syntax

Polymorphism is when one type can take the form of a related type. Here is an example:

DerivedClass B = new DerivedClass();
B.DoWork();
BaseClass A = B; // B is an instance of a derived class, so assigning it to a base-class reference is an implicit upcast (an explicit (BaseClass) cast is allowed here, but not required).
A.DoWork();

Explanation

In the explanation section, I will use inheritance and overriding methods to demonstrate the concept of polymorphism. The reason I will use inheritance is that inheritance gives you one base class and several derived classes associated with it (thus making polymorphism possible, as one class can take the form of another).

Short explanation of overriding methods

As explained in the Inheritance section, there is a base class that holds fields/methods, and every derived class inherits them. Thus in your source code you can call a method on a base class and cause a derived class's version of the method to be executed. To achieve that, a method in the derived class needs to override the virtual method in the base class. This code demonstrates polymorphism and overriding of methods:

class Program
{
    // The goal in this code is to create one base-class array holding many derived objects (the animals) and have each object supply its own implementation.
    static void Main(string[] args)
    {
        Animal[] zoo = new Animal[6];
        zoo[0] = new Monkey();
        zoo[1] = new Dog();
        zoo[2] = new Ez();
        zoo[3] = new Person();
        zoo[4] = new Cow();
        zoo[5] = new Cat(); // prints "Foooooooooooooooo" below, because Cat hides (new) rather than overrides MakeNoise

        for (int i = 0; i < zoo.Length; i++)
        {
            zoo[i].MakeNoise(); // This is how polymorphism manifests itself: MakeNoise in each derived class overrides the method in the base class.
        }
        Console.WriteLine("======");
        // Notice the method invocations below, because an explanation will follow.
        Cat cat = new Cat();
        cat.MakeNoise(); // Cat: Miyauuu
        Console.WriteLine("==parsi===");
        Animal pCat = new ParsiCat();
        pCat.MakeNoise(); // Foooooooooooooooo
        Cat pcat1 = new ParsiCat();
        pcat1.MakeNoise(); // Cat Parsi: Miyauuu
    }
}

class Animal
{
    public virtual void MakeNoise() // you need to use the word virtual to allow overriding from a derived class.
    {
        Console.WriteLine("Foooooooooooooooo");
    }
}

class Dog : Animal
{
    public override void MakeNoise() // the word override must be there to override the method. If you don't put that word, "new" is assumed as a default, which means no overriding!
    {
        Console.WriteLine("Dog: Wooff");
    }
}

class Ez : Animal
{
    public override void MakeNoise()
    {
        Console.WriteLine("Meehעעhee");
    }
}

class Cow : Animal
{
    public override void MakeNoise()
    {
        Console.WriteLine("MOOOOOoooooo....");
    }
}

class Monkey : Animal
{
    public override void MakeNoise()
    {
        Console.WriteLine("ahahahaoobovosssoso");
    }
}

class Person : Animal
{
    public override void MakeNoise()
    {
        Console.WriteLine("BoynaLehMipo");
    }

    public void Foo() // a method that exists only on Person (the original draft declared a second MakeNoise here, which would not compile).
    {
        Console.WriteLine("Caaaaaaaaaaaaaaaaaaaaaaaaaaaasting");
    }
}

class Cat : Animal
{
    public new virtual void MakeNoise() // To start a new virtual method chain, use the words "new virtual".
    {
        Console.WriteLine("Cat: Miyauuu");
    }
}

class ParsiCat : Cat
{
    public override void MakeNoise() // ParsiCat overrides the method that is inside the Cat class
    {
        Console.WriteLine("Cat Parsi: Miyauuu");
    }
}

Note: you can reach a method that exists only on a derived class by casting:

((Person)zoo[3]).Foo(); // Output = Caaaaaaaaaaaaaaaaaaaaaaaaaaaasting

For more information on casting see ..............

A derived class can stop virtual inheritance by declaring an override as sealed. This requires putting the sealed keyword before the override keyword in the class member declaration.

public class C : B
{
    public sealed override void DoWork() { }
}

Accessing base class virtual members from derived classes: a derived class that has replaced or overridden a method or property can still access the method or property on the base class using the base keyword.

public class Base
{
    public virtual void DoWork() { /*...*/ }
}

public class Derived : Base
{
    public override void DoWork()
    {
        // Perform Derived's work here
        // ...
        // Call DoWork on the base class
        base.DoWork();
    }
}

For those of you who want to know another trick using base and inheritance, see the following example, in which I try to print the area of each shape:

class Program
{
    static void Main(string[] args)
    {
        Line line = new Line(500);
        line.PrintLine(); // 500
        Rec rec = new Rec(25, 25);
        rec.PrintRec(); // 625
        Cube cube = new Cube(10, 5, 6);
        cube.PrintCube(); // 300
    }
}

class Line
{
    int x;
    public Line(int x)
    {
        this.x = x;
    }
    public int X
    {
        get { return x; }
    }
    public void PrintLine()
    {
        Console.WriteLine(X);
    }
}

class Rec : Line
{
    int y;
    public Rec(int x, int y) : base(x)
    {
        this.y = y;
    }
    public int Y
    {
        get { return y; }
    }
    public void PrintRec()
    {
        Console.WriteLine(X * Y);
    }
}

class Cube : Rec
{
    int z;
    public Cube(int x, int y, int z) : base(x, y)
    {
        this.z = z;
    }
    public int Z
    {
        get { return z; }
    }
    public void PrintCube()
    {
        Console.WriteLine(X * Y * Z);
    }
}

Let's see what happens! Say I send parameters to Cube. This is the process: the base(x, y) constructor of the Cube class gets them first ---> it sends x and y to Rec (its base class) ---> Rec has another base(x) constructor, and it sends x to Line ---> Line gets x and sets it ---> then all the derived classes obtain x (including Cube) ---> the same process happens with y: once it is received, it is set and passed down to Cube ---> Cube now has all the variables to calculate and print the area. What happens is that values go up to the base class and then come down to each of the derived classes according to the hierarchy!
I have been reading some posts looking for an answer to my question, but I am still not completely sure about it. I understand that for a complex, what Rosetta gives you is a score that is not exactly the DG. I would like to know whether, with the code below and the standard score function, I can calculate the DG for a complex of two chains with the formula:

DG = Score_complex - (Score_chain_A + Score_chain_B)

I am building a scoring function to predict DG for protein-protein complexes and I want to compare its performance with the performance of RosettaDock in a blind test.
#!/usr/bin/env python
from pyrosetta import *
init()
from pyrosetta.teaching import *
scorefxn = get_fa_scorefxn()
f = "complexe.pdb"
pose = pose_from_pdb(f)
score_c = scorefxn(pose)
pose_A = pose_from_pdb(f[0:4]+"_A.pdb")
score_A = scorefxn(pose_A)
pose_B = pose_from_pdb(f[0:4]+"_B.pdb")
score_B = scorefxn(pose_B)
DG = score_c - (score_A + score_B)
print DG
That's close to the general approach used to evaluate binding energy calculations in Rosetta.
Often, though, we don't try to score the individual chains as separate poses, but instead move the chains very far apart from each other and then rescore. That should be equivalent to scoring the different chains in separate Poses and then summing the results, though. (*)
Simply separating the chains is sometimes not all you want to do, though. Often you'll want to reoptimize the chains in the apo state, to allow for some apo-state relaxation. How much optimization you do in the complex state versus apo state is a bit of a black art, though, and depends a bit on what sort of thing you're attempting to test, and what your personal philosophy is.
You may want to take a look at the InterfaceAnalyzer mover to see the typical way people do binding energy calculations in Rosetta.
*) There's some issues with things like inter-chain disulfides, or inter-chain or absolute position constraints which can mess things up, but so long as you are using the standard scorefunction and don't have any inter-chain covalent bonds you should be alright. | https://www.rosettacommons.org/node/10518 | CC-MAIN-2021-25 | refinedweb | 354 | 57.91 |
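Whichever separation strategy you use, the subtraction itself is worth isolating in a small helper that accepts any scoring callable, which also makes it easy to sanity-check without PyRosetta installed (the len-based call below is only a stand-in for a real score function such as the one returned by get_fa_scorefxn()):

```python
def binding_delta_g(scorefxn, complex_pose, *chain_poses):
    """DG = score(complex) - sum of the separately scored chains."""
    return scorefxn(complex_pose) - sum(scorefxn(p) for p in chain_poses)

# Stand-in check: with string length as a fake "score", splitting a
# "complex" into its two "chains" gives DG == 0, as expected.
print(binding_delta_g(len, "ABCDE", "ABC", "DE"))   # 0
```

In real use you would pass scorefxn and the three Pose objects from your script in place of the toy arguments.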
Data::Hive::Store is a generic interface to a backend store for Data::Hive.

METHODS

All methods are passed at least a 'path' (arrayref of namespace pieces). Joining the path in a way that is meaningful is most of the point of the Store modules.

get

  print $store->get(@path, %opt);

Return the resource represented by the given path, however $store is structured. This could map to e.g.

  $hash->{foo}->{bar}
  $obj->get('foo.bar')
  io('/foo/bar')->all

depending on the Store module involved.

set

  $store->set(@path, $value, %opt);

Analogous to get.

name

  print $store->name(@path, %opt);

Return a store-specific name for the given path. This is primarily useful for stores that may be accessed independently of the hive; in the io example above, some external process/function may want to write to /foo/bar directly.

exists

  if ($store->exists(@path, %opt)) { ... }

Returns true if the given path exists in the store.

delete

  $store->delete(@path, %opt);

Delete the given path from the store. Return the previous value, if any.

Requirements:

· Perl
Hi All,
First of all Here is the question and what I have done so far.
A) The helper method boolean isVowel(char c) should return true if the character it receives is an uppercase or lowercase English vowel. For our purposes, vowels are the letters a, e, i, o, and u. The method should do this by checking if the character c is contained in a string composed of vowel characters.
B) The public String shorthand(String in) method should use the class StringBuilder to build an output string by appending non-vowel characters from its argument in to the result it returns. You will need to identify vowels to be removed using the helper method you developed in part (a).
C) The run method of this class should contain a loop that repeatedly prompts the user to enter a line of text from the keyboard, using the Scanner class to read the input.
After each line is entered and return is pressed, the method should repeat the text back to the user but with any vowels removed using the shorthand method, and the method should terminate when the user enters *.
I have done all the code but am not sure if it is correct, especially the last method.
import java.util.Scanner;

public class Shorthand {

    public boolean isVowel(char c) {
        // Check membership in a string of vowel characters, as the assignment asks.
        return "aeiouAEIOU".indexOf(c) >= 0;
    }

    public String shorthand(String in) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < in.length(); i++) {
            if (!isVowel(in.charAt(i))) {
                result.append(in.charAt(i));
            }
        }
        return result.toString();
    }

    public void run() {
        Scanner sc = new Scanner(System.in);
        while (true) {
            System.out.println("Enter a line of text (* to quit):");
            String line = sc.nextLine();
            if (line.equals("*")) {
                break;
            }
            System.out.println(shorthand(line));
        }
    }

    public static void main(String[] args) {
        new Shorthand().run();
    }
}
Python Programming, news on the Voidspace Python Projects and all things techie.
Resolver is Hiring!...
We also have a Resolver News Page so you can keep track of what we're up to.
Posted by Fuzzyman on 2007-08-17 14:12:19 | |
Categories: IronPython, Work
Format Tracebacks from Current Exception
One of the interesting pieces of the Mini Silverlight Python IDE is the code that formats tracebacks from exceptions. These are printed to the debugging pane.
This code comes originally from Resolver and I have reused it with permission from my long suffering boss. We needed this because we were using format_tb from the traceback module. When there were many cells with errors then Resolver ground to a halt. When we came to look at the reason for this, we discovered that format_tb uses the linecache module to attempt to find the line of source code that each stack frame in the traceback refers to. As we were executing compiled code, this was doomed to failure. It caused a disk access for every directory in sys.path for every frame in the exception for every cell with an error.
The following code just extracts the core code needed to find line numbers and names for every frame in a traceback. Even though IronPython doesn't use Python stack frames (or code objects), IronPython reconstructs the necessary objects so that code like this will work.
import sys

def StackTraceFromCurrentException():
    traceback = sys.exc_info()[2]
    stackTrace = []
    while traceback is not None:
        frame = traceback.tb_frame
        lineno = traceback.tb_lineno
        code = frame.f_code
        filename = code.co_filename
        name = code.co_name
        stackTrace.append((filename, lineno, name))
        traceback = traceback.tb_next
    sys.exc_clear()
    return stackTrace

def formatLine(lineInfo):
    fileName, lineNo, name = lineInfo
    line = '  File "%s", line %s, in %s' % (fileName, lineNo, name)
    return line

def executeCode(code):
    if not code:
        print 'No code to execute.'
        return
    code = code.replace('\r\n', '\n') + '\n'
    context = {'__name__': '__main__'}
    try:
        exec code in context
    except Exception, e:
        print "Traceback (most recent call last):"
        for line in StackTraceFromCurrentException():
            print formatLine(line)
        print e.__class__.__name__ + ": " + str(e)
If you need to present exception tracebacks to the user, you might find this useful. The explicit call to sys.exc_clear() is unnecessary for CPython but is needed in IronPython to work around a bug.
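The code above is Python 2 / IronPython-era (sys.exc_clear was removed in Python 3). For readers on a modern CPython, here is a hedged Python 3 sketch of the same frame walk; the function names are my own, not from the original post:

```python
import sys

def stack_trace_from_current_exception():
    tb = sys.exc_info()[2]
    stack_trace = []
    while tb is not None:
        code = tb.tb_frame.f_code
        # (filename, line number, function name) for each frame,
        # innermost frame last -- no linecache disk access involved
        stack_trace.append((code.co_filename, tb.tb_lineno, code.co_name))
        tb = tb.tb_next
    return stack_trace

def boom():
    raise ValueError("example")

try:
    boom()
except ValueError:
    trace = stack_trace_from_current_exception()
# the innermost frame names the function that raised, i.e. "boom"
```

Note there is no sys.exc_clear() call: Python 3 clears exception state automatically when the except block exits.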
Posted by Fuzzyman on 2007-08-15 00:04:05
Categories: Python, IronPython, Hacking
How Badly will IronMonkey Hurt Performance?
'Someone' (I couldn't easily find a name, sorry) has written a blog post about the Mozilla project IronMonkey, which aims to port IronPython and the other DLR languages to run on the Tamarin JIT.
In How Badly will IronMonkey Hurt Performance? he declares:
I'm actually glad we're seeing this sort of a project go forward. Perhaps before we return to our sensibilities, we need to see a very evident failure. In terms of performance, I think this effort may just lead to such a disaster.
His thinking goes roughly along these lines:
- Python is slow
- IronPython is slower
- Javascript is slow so the Tamarin implementation will probably be even slower
- Therefore IronMonkey on top of Tamarin must be unbearably slow
It seems to me that just about all of these steps are wrong, and the blog didn't seem to be accepting comments, so I'm posting a response here.
My goodness! Which misconception to unpick first.
First of all, amongst the dynamic languages Python is not generally considered slow. For example here is a quote from the Python main page from the creators of Youtube (which is almost entirely written in Python and serves 16 billion pages a month):
YouTube.com:
"Python is fast enough for our site and allows us to produce maintainable features in record times, with a minimum of developers," said Cuong Do, Software Architect.
A variant of Python called Stackless Python also drives the online game Eve, where performance is understandably critical.
As for IronPython, well... In several key areas it outperforms CPython. Overall they run at similar speeds, but because of the .NET JIT it is possible to have IronPython code run faster.
Large applications can be written in Python and IronPython. For example, the company I work for is developing a spreadsheet application with IronPython. Performance is comparable with other spreadsheet applications, and we have over 100,000 lines of IronPython code in our application and test framework. Every time we have had to improve performance we have managed to do so by refactoring our Python code and have never (yet) had to drop into C# for performance gains.
IronRuby is still young, but the creators of JRuby have seen a similar thing with their Java implementation of Ruby. Overall performance is similar, but some parts run faster. That dynamic languages are slow is something usually said by those who don't use them!
Onto:
"past JavaScript implementations have never performed very well, so it's quite possible that the future implementations will suffer from performance issues"
Interpreted Javascript generally is slow (although it may be 'fast enough' in many cases). IronPython running inside the Silverlight VM will typically perform operations a couple of orders of magnitude faster than interpreted Javascript. Performance is one of the reasons that Mozilla are switching to Tamarin, and it looks like the performance benefits they will get from this JIT compiler are going to be impressive.
This VM is not as fast as the .NET VM - but then the Tamarin core binary weighs in at around a slender 200kb. IronPython running on Tamarin may well be slower than IronPython on .NET - but I still expect it to be very usable. Declarations of doom and disaster are definitely premature!
None of this is to say that IronMonkey is a guaranteed success of course, but if anyone is capable then Seo is.
(Of course another point is that Silverlight already proves that Python running in a VM in a web browser needn't be slow - which kind of helped Mozilla decide to go for it I guess...)
Posted by Fuzzyman on 2007-08-14 22:55:32
Categories: Python, IronPython, General Programming
Python Bits
A few bits of interesting Python news.
A 'proof of concept' Python to native x86 compiler, called Compyler, has been released and is available for download. It is Windows-only (not by intention), is no longer being developed, and, due to an inefficient refcount system, runs quite slowly.
However, it will compile PyStone and can generate '.pyd' Python extension files. Perhaps more importantly it at last provides an answer to the incessant questions on comp.lang.python as to whether Python can be compiled to native binaries.
The new Python documentation system based on ReStructured Text is nearly ready to launch. Expect an announcement soon.
Work on Python 2.6 continues apace. As promised, this version will no longer support Windows 98. This is a shame, as I know of several schools running Windows 98 because it is a very cheap OS. However, Python 2.4 / 2.5 will continue to work, and there is no-one able or willing to maintain the Windows 98 support.
Posted by Fuzzyman on 2007-08-14 22:54:37
Silverlight IronPython Example: Experimental 'IDE'
I've put together a simple IronPython Silverlight project. It's an experimental mini-IDE that has a Python editor window and lets you execute arbitrary Python code in the browser.
You can also download the project: IronPython Silverlight IDE Project.
Note
If you have multiple tabs open or multiple extensions installed this can be unstable with Firefox I'm afraid. Silverlight 1.1 is still in alpha, so don't be too harsh.
If you serve this project from Apache then don't forget to set it to serve Python files as source rather than running them. This will save you some of the headbanging that I just went through!
The project is pretty much a barebones IronPython Silverlight project which should get you started if you want to experiment. It uses the EditArea syntax highlighting code editor.
- It requires Silverlight 1.1 Alpha Refresh
- Pressing the 'Execute' button, or focussing outside the code editor after a code change will cause it to be executed
- print statements go to the debugging pane
- There is a DOM element with the id 'experimental'. You can manipulate this from IronPython code (see the example code that the code editor has on startup)
- You can import from the CoreCLR but I haven't made the Python standard library available
- Exceptions (stacktraces) should be reported correctly (but are sent to the debugging pane)
- It's only 90 lines of Python, so it is very simple!
- AFAIK Firefox only works at all with Silverlight if the files are served rather than fetched from the filesystem
I haven't explained all the details yet, nor provided the source code to the little bit of C# that lets the javascript call the Python code. This will all be posted after my PyCon and Mix UK presentations.
Posted by Fuzzyman on 2007-08-12 19:23:46
Categories: Python, IronPython, Hacking, Website
Accessing the Web with IronPython
A while ago Andrzej blogged on How to download a web page using IronPython. It is easy enough (using System.Net.WebClient) but many people were unimpressed with the fact that it seemed to take several lines of code more than using urllib2 and CPython.
Well, it turns out that for the simple cases there is an easier method:
content = WebClient().DownloadString("")
print content
I've updated Download a web page in the IronPython Cookbook.
I've also added a new page (from an example posted to the IronPython mailing list by Dino Viehland): Using Python Functions from C#. It shows how to create and use Python function objects from C#, using the IronPython 2 hosting API.
Posted by Fuzzyman on 2007-08-12 15:39:21
Categories: Python, IronPython
Response(s) to Mac First Impressions
Wow. I had a lot of comments and emails to my First Impressions of the Apple Mac. Thanks to all those who responded, I got a lot of useful advice.
The first thing I learned is that Mac users are keen to welcome new members into the community. Here are the specific answers to my questions:
- Several people recommended Adium as a chat client for Mac OS X.
- The missing '#' key seems to be a UK localization issue and not a general Mac thing.
- Escape does cancel some dialogs, but Command-C or Command-+ works for others.
- The Mac equivalent of the Windows task manager is Activity Monitor, found in Applications/Utilities.
- No anti-virus software needed, although I do have it installed on my Parallels Vista image.
- There is a context menu 'create new file' utility, called Nufile.
As for the maximize vs. zoom issue: this page explains the Mac philosophy. I personally agree with Jeff Atwood and prefer to work with a single maximized window rather than manage multiple windows on the same screen. Not a big issue though.
Several of you recommended Macports over Fink for package management as it is generally more up to date. A couple of you recommended neither! I'm going to give Macports a try. The first step in getting it working is downloading the latest version of XCode Tools - a 1 gigabyte download!
Several of you recommended Quicksilver with great fervour. I already have this (but omitted to mention it), and also a Windoze 'equivalent' called Launchy. I don't use either a great deal, I don't think I 'get' what is so revolutionary with them yet but I will persist a while.
I will also need a mail client for the Mac. I guess I will go for Thunderbird.
Posted by Fuzzyman on 2007-08-12 15:37:55
2 ** 5 + 1
Today is my 2 ** 5 + 1 birthday. Hooray for me.
Actually I've already reached the age where the perennial reminder of inexorably impending obsolescence is no longer something to make a fuss of, but oh well.
Just so that this blog entry is worth reading for my techie audience, I assume you have seen this:
Where did that come from? I haven't been following this closely, but all of a sudden it turns out that SCO never owned the rights to UNIX that they claimed. As a result their whole claim is proved hogwash and they owe Novell a lot of money for blackmailing all those companies to buy Linux licenses that they never had a right to sell!
Another commentator points out that if UNIX is prior art for Linux, then Microsoft's claims that Linux violates unspecified patents are almost certainly invalid - as are those patents in the first place. No wonder Microsoft isn't telling us which patents are involved...
For another tale of a big company being exposed treating little companies badly, read a tale from Australian Publishing: Bookstore chains puts the screws on small publishers. No happy ending yet but, as you will see, maybe it's not needed.
Now I have a tax return to do. As I haven't paid National Insurance on my income from advertising on Voidspace and selling software I have to ensure I made a loss!
Posted by Fuzzyman on 2007-08-12 15:36:27
Categories: Life, Computers
We’ve blogged some about IRC before and talked about the #LinuxAcademy channel on the Freenode IRC network. Our students have made it very clear on multiple times how much they want more Python from us as well. As we’ve previously announced, more is coming from our new instructor, Shiraz. Until he is ready to release his new Python content, however, I thought I might help tide you over with a fun Python project that I enjoy: Making an IRC bot.
In this blog post, I’m going to cover how to create a basic IRC bot with Python 3. We’re going to assume a basic knowledge of Python and functions for this post, and if you’ve taken our Introduction to Python on Linux course you should have all the knowledge you need for this basic IRC bot tutorial. Feel free to comment below or hop on the #LinuxAcademy channel on Freenode if you have any questions or need any help. I go by OrderChaos on IRC (and most other places) so look for that name!
IRC Bots
Before we get started, you might be wondering, “What is an IRC bot?” The ever-useful and wonderful Wikipedia defines an IRC bot. (Source)
So basically, an IRC bot appears as another user to everyone else, but instead performs set actions per its script in response to predetermined events (usually specific messages in the chat). They can serve a variety of purposes: from saving chat logs, to administrative tools, to any number of other silly, useful, or sometimes strange features. There are some bots that can read Twitter feeds, perform web searches, or even do sed-like find and replace (I've written one for this and have it shared on GitHub! See the Wrap Up section below for details).
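As a taste of the sed-style feature mentioned above, here is a hedged, simplified sketch (my own code, not the GitHub bot linked later) that applies an s/old/new/ command to the previous chat message:

```python
import re

def apply_sed(command, last_message):
    # Expects commands shaped like "s/old/new/" -- a deliberately
    # simplified, hypothetical grammar (no flags, no escaping).
    match = re.match(r"^s/([^/]+)/([^/]*)/$", command)
    if match is None:
        return None  # not a sed command; a real bot would ignore it
    old, new = match.group(1), match.group(2)
    return last_message.replace(old, new)

print(apply_sed("s/teh/the/", "I saw teh bot"))  # I saw the bot
```

A real implementation would track per-user message history and handle escaped slashes; this is only the core idea.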
IRC Bots can be written in a variety of languages. PHP and Python are two common ones, but there are ones in many other languages as well, including C, Java, and even Haskell.
Before we get to start writing all the cool and fun things for our bot to do, first we have to prepare the bot itself. We’ll need to make sure we’ve got Python set up, assigned a few variables, and made sure we can connect to the IRC network. A final note before we get started, attached to this post I’ll include the full code that we go through — if you get lost while coding, just check against the example to see where you should be. The explanations for all the code is included in the comments on the attached bot so you have all the information you need in that file.
Python Setup
First, make sure you have Python 3 installed. We’re going to assume you got it installed and configured prior to this post.
Open your favorite Python IDE (if you don’t have one, a plain text editor, or even vim in a terminal works). To start with, we should make sure to specify we’re using Python 3. For Windows, this means naming the file something that ends in .py. For Linux/Mac, include this line at the top of the file:
#!/usr/bin/python3
After specifying Python 3 we need to import the 'socket' library. The socket library is used for connecting and communicating over a network port. We use this to connect to and communicate with the IRC server. Importing libraries is very easy in Python and, as the socket library is built-in, you don't need to install anything. Just add import socket beneath the #!/usr/bin/python3 line.
So right now your file should only contain these two lines:
#!/usr/bin/python3
import socket
Global Variables
Now that we’ve specified we’re working in Python3 and have the socket library imported, we need to define some variables. These variables are used throughout the bot, so we want them to be defined before we write any functions.
The first variable is the socket we are using to connect and communicate with the IRC server. Sockets are complicated and can be used for many tasks in many ways. See here if you’d like more information on sockets:. For now, just know that this is used to establish a continuous connection with the IRC server while the bot is running, to send and receive information.
Next is the name of the server and channel to which we are connecting. We could hard code these, but having a variable makes a couple of steps easier. For example, if we ever want to connect to a list of channels (instead of just one, as in this example) or change to a different server or channel we don’t have to find every instance and can just edit this variable instead. I’m using chat.freenode.net for this example. For other IRC networks, just put in the name in the same location.
After that, we define the channel we’re going to eventually join. We don’t want to use an official/established channel while we do our testing. Aside from being rude, many channels have specific rules for bots or don’t allow them at all. Make sure you check with a channel’s moderators before adding your bot to a channel. For our testing, we’re using a custom, unregistered room (on Freenode, denoted by the ‘##’ before the channel name). This way we’ll be the only ones in the channel with the bot while we do our testing.
The botnick is what we are naming the bot. It is how other users see the bot in the channel. Make sure this is an unused and unregistered nick, as your bot won’t be able to connect if it’s already in use and it will instead be assigned a random name if it’s a registered and protected nick. See here for more information on nickname registration
The last two variables will be used in our main function. It’s not required in an IRC bot, but the function we’re going to use needs it. All we’re doing is defining a nickname that can send administrative commands to the bot and an exit code to look for to end the bot script. We get to how to do this at the end of the main function.
ircsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server = "chat.freenode.net"  # Server
channel = "##bot-testing"  # Channel
botnick = "IamaPythonBot"  # Your bot's nick
adminname = "OrderChaos"  # Your IRC nickname. On IRC (and most other places) I go by OrderChaos so that's what I am using for this example.
exitcode = "bye " + botnick
Connecting to IRC
Now that we’ve prepared the bot and established a few variables that we are using, let’s get to the actual connection part.
To connect to IRC we need to use our socket variable (ircsock) and connect to the server. IRC is typically on port 6667 or 6697 (6697 is usually for IRC with SSL, which is more secure). We are using 6667 in our example. We need to have the server name (established in our global variables) and the port number inside parentheses so it gets passed as a single item to the connection. Once we’ve established the connection, we need to send some information to the server to let the server know who we are. We do this by sending our user information followed by setting the nickname we’d like to go by. Usually these are the same, but registered users sometimes have several nicknames tied to their account and can choose any of them.
ircsock.connect((server, 6667))  # Here we connect to the server using port 6667
ircsock.send(bytes("USER " + botnick + " " + botnick + " " + botnick + " " + botnick + "\n", "UTF-8"))  # We are basically filling out a form with this line and saying to set all the fields to the bot nickname.
ircsock.send(bytes("NICK " + botnick + "\n", "UTF-8"))  # assign the nick to the bot
Defining Functions
Here we define the various functions our bot uses. These are sections of code that may need to be called on multiple times.
Channel Join
Connecting to IRC is good, but unless we’re in a channel with other users it won’t be of much use! We’re putting this in a function instead of hard coding it like the ircsock.connect part above because you can be in multiple channels with a single connection. We’re only going to use one for this example, so it can be hard coded, but this way you can see how it’s done and easily modify it for multiple channels if you’d like.
Here we take in the channel name, as part of the function, then send the join command to the IRC network.
The ‘bytes’ part lets us specify to send the message encoded as UTF-8. This is to explicitly send the correct encoding to the IRC server. In Python 2, this isn’t necessary, but changes to string encoding in Python 3 makes this a requirement. You see this whenever we send data to the IRC server. Something else to note, the “\n” at the end of the message is a new line character. This is equivalent to pressing the Enter key in a chat window. It lets the server know we’re finished with that command rather than chaining all the commands onto the same line.
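Incidentally, bytes(s, "UTF-8") is just another spelling of s.encode("UTF-8"); either form produces the byte string the server expects, as this small check shows:

```python
msg = "JOIN ##bot-testing\n"

# The two spellings produce identical byte strings.
assert bytes(msg, "UTF-8") == msg.encode("UTF-8") == b"JOIN ##bot-testing\n"
print(bytes(msg, "UTF-8"))  # b'JOIN ##bot-testing\n'
```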
After sending the join command, we want to start a loop to continually check for and receive new messages from the server until we get a message with ‘End of /NAMES list.’. This indicates we have successfully joined the channel. The details of how each function works is described in the main function section below.
def joinchan(chan):  # join channel(s).
    ircsock.send(bytes("JOIN " + chan + "\n", "UTF-8"))
    ircmsg = ""
    while ircmsg.find("End of /NAMES list.") == -1:
        ircmsg = ircsock.recv(2048).decode("UTF-8")
        ircmsg = ircmsg.strip('\n\r')
        print(ircmsg)
Ping Pong
No, I don’t mean the tabletop game. It is common for IRC servers to periodically send out ‘PING’ signals to connected users to make sure they’re still connected. We must respond to these to let the server know we’re still there. If we don’t respond to these signals, we can get disconnected from the server because it assumes we have dropped the connection.
This function doesn’t need to take any arguments as the response is always the same. Just respond with PONG to any PING. Different servers have different requirements for responses to PING so you may need to adjust/update this depending on your server. I’ve used this particular example with Freenode and have never had any issues.
def ping():  # respond to server Pings.
    ircsock.send(bytes("PONG :pingis\n", "UTF-8"))
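Some servers send a token after PING (e.g. PING :D1008F29) and expect that exact token echoed back. A hedged helper (my own, not part of the tutorial's bot) that builds such a reply from the received line:

```python
def pong_reply(ping_line):
    # "PING :D1008F29" -> "PONG :D1008F29\n"
    token = ping_line.split(":", 1)[1]
    return "PONG :" + token + "\n"

print(pong_reply("PING :D1008F29"))  # PONG :D1008F29
```

You would send it with something like ircsock.send(bytes(pong_reply(line), "UTF-8")) if your network requires the token.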
Send a Message
We’ve done all the major preparations, now let’s write some functions so our bot actually has something to do. This function will let us send a message to a channel or a user. All we need for this function is to accept a variable with the message we’ll be sending and who we’re sending it to.
Using target=channel in the parameters section sets the default value of the ‘target’ variable to the channel global variable. If only one parameter, msg, is passed to the function it uses the default value for ‘target’.
The “:” between target and msg lets the server separate the target and the message.
def sendmsg(msg, target=channel):  # sends messages to the target.
    ircsock.send(bytes("PRIVMSG " + target + " :" + msg + "\n", "UTF-8"))
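The raw line sendmsg produces can be checked without any socket at all; here is a hypothetical helper mirroring the same format:

```python
def build_privmsg(msg, target):
    # Mirrors the wire format: "PRIVMSG <target> :<message>\n"
    return "PRIVMSG " + target + " :" + msg + "\n"

print(build_privmsg("Hello!", "##bot-testing"))  # PRIVMSG ##bot-testing :Hello!
```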
Main Function
Okay, now that we’ve got our active function and have all the connection information prepared, it’s time to write the continuous part of the bot. This is the main function of the bot. It will call the other functions as necessary and process the information received from IRC and determine what to do with it.
Starting up
We start by joining the channel we defined in the Global Variables section. After that we start an infinite loop to continually check for and receive new info from server. This ensures our connection stays open. We don’t want to call main() again because, aside from trying to rejoin the channel continuously, you run into problems when recursively calling a function too many times in a row. An infinite while loop works better in this case.
def main():
    joinchan(channel)
    while 1:
Receiving information
Here we take in the information sent to us from the IRC server. IRC sends out information encoded in UTF-8 characters so we’re telling our socket connection to receive up to 2048 bytes and decode it as UTF-8 characters. We then assign it to the ircmsg variable for processing.
After that, remove any line break characters from the string. If someone types in “\n” to the channel, it will still include it in the message just fine. This only strips out the special characters that can cause problems with processing.
We also print the received information to the terminal. You can skip this if you don’t want to see it, but it helps with debugging and to make sure the bot is working.
        ircmsg = ircsock.recv(2048).decode("UTF-8")
        ircmsg = ircmsg.strip('\n\r')
        print(ircmsg)
Split Received Message
Next, check if the information we received includes PRIVMSG in the text. PRIVMSG is how standard messages in the channel (and direct messages to the bot) come in. Most of the processing of messages is in this section.
If it is a PRIVMSG, we want to get the nick of the person who sent the message and split it from the message. Messages come in from IRC in the format of ":[Nick]!~[hostname]@[IP Address] PRIVMSG [channel] :[message]", so we split out the different parts and assign them to separate variables.
        if ircmsg.find("PRIVMSG") != -1:
            name = ircmsg.split('!', 1)[0][1:]
            message = ircmsg.split('PRIVMSG', 1)[1].split(':', 1)[1]
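To see that split in action, here is a sample raw line (hypothetical nick and host) run through the same two expressions:

```python
ircmsg = ":Alice!~alice@203.0.113.5 PRIVMSG ##bot-testing :Hi IamaPythonBot"

# split on the first '!' and drop the leading ':' to get the nick
name = ircmsg.split("!", 1)[0][1:]
# everything after the first ':' following PRIVMSG is the message body
message = ircmsg.split("PRIVMSG", 1)[1].split(":", 1)[1]

print(name)     # Alice
print(message)  # Hi IamaPythonBot
```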
Choose an Action
Now that we have the name information in its own variable, we check if the name is less than 17 characters. Usernames (at least for Freenode) are limited to 16 characters. So with this check we make sure we're not responding to an invalid user or some other message that just happens to have 'PRIVMSG' in it. After that, we use a detection block to see if it includes certain text that the bot should then take action on.
With the first stanza, we’re looking to see if someone says Hi to the bot anywhere in their message and then replying. Since we don’t define a target, it will get sent to the channel.
The second is an example of how to look for a ‘command’ at the beginning of a message and parse it to do a complex task. In this case, we’re looking for a message starting with “.tell” and using that as a code to look for a message and a specific target to which to send. The whole message should look like “.tell [target] [message]” to work properly. There are comments in the attached bot file that explains how it works in detail.
            if len(name) < 17:
                if message.find('Hi ' + botnick) != -1:
                    sendmsg("Hello " + name + "!")
                if message[:5].find('.tell') != -1:
                    target = message.split(' ', 1)[1]
                    if target.find(' ') != -1:
                        message = target.split(' ', 1)[1]
                        target = target.split(' ')[0]
                    else:
                        target = name
                        message = "Could not parse. The message should be in the format of '.tell [target] [message]' to work properly."
                    sendmsg(message, target)
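The .tell parsing can be pulled out into a pure function for testing without a connection (hypothetical function name, same logic as the tutorial):

```python
def parse_tell(message, sender):
    # message is expected to start with ".tell "
    target = message.split(" ", 1)[1]
    if target.find(" ") != -1:
        msg = target.split(" ", 1)[1]
        target = target.split(" ")[0]
    else:
        # no message body: send the usage hint back to the sender
        target = sender
        msg = "Could not parse. Use '.tell [target] [message]'."
    return target, msg

print(parse_tell(".tell Alice see you at 9", "Bob"))  # ('Alice', 'see you at 9')
```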
Stopping the Bot
Since we created an infinite loop in this function, there is no natural end. Instead, we’re going to check for some text and use that to end the function (which automatically ends the loop).
What we do is look to see if the name of the person sending the message matches the admin name we defined earlier. We make both lowercase in case the admin typed their name a little differently when joining. On IRC, ‘OrderChaos’ and ‘orderchaos’ are the same nickname, but Python will interpret them as different strings unless we convert it to all lowercase first.
We also make sure the message matches the exit code above. The exit code and the message must be EXACTLY the same. This way the admin can still type the exit code with extra text to explain it or talk about it to other users and it won’t cause the bot to quit. The only adjustment we’re making is to strip off any whitespace at the end of the message. So if the message matches, but has an extra space at the end, it still works.
If the exit code is sent by the admin, the function hits a ‘return’ line which automatically breaks out of any loops and if statements and goes to the line where the function was called. Normally it continues with additional lines of code, but we’re going to end the script on the call to main() so there won’t be any more code for it to run through and the bot will close.
                if name.lower() == adminname.lower() and message.rstrip() == exitcode:
                    sendmsg("oh...okay. :'(")
                    ircsock.send(bytes("QUIT \n", "UTF-8"))
                    return
        else:
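The shutdown condition can likewise be distilled into a pure helper for testing (hypothetical function; constants as defined earlier):

```python
adminname = "OrderChaos"
exitcode = "bye IamaPythonBot"

def is_exit(name, message):
    # case-insensitive nick match, trailing whitespace ignored on the message
    return name.lower() == adminname.lower() and message.rstrip() == exitcode

print(is_exit("orderchaos", "bye IamaPythonBot \n"))          # True
print(is_exit("mallory", "bye IamaPythonBot"))                # False: wrong sender
print(is_exit("OrderChaos", "bye IamaPythonBot please quit"))  # False: extra text
```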
Respond to Pings
If the message is not a PRIVMSG it still might need some response. If the information we received was a PING request we call the ping() function we defined earlier to respond with a PONG. This lets the server know we’re still online and connected.
            if ircmsg.find("PING :") != -1:
                ping()
Start the Main Function
Finally, now that the main function is defined, we need some code to get it started. It doesn’t take any parameters, so it’s as simple as this line.
main()
Wrap Up
And there you have it. A basic, working bot to connect with IRC. It’s not the most efficient or error-proof bot, but it is a good start and will give you something to build on. If you’re interested in more complex and advanced IRC bots, check out the Wikipedia comparison of many popular IRC bots that was mentioned at the top. In particular, the Sopel IRC bot is a very complex and in-depth IRC bot with a multitude of functions and plugins. It is in active development with many contributors.
I also sometimes work on my own Python IRC bot which doesn't have many features (yet!). You can find the original release (coded in Python version 2!) here:. It logs recent chat and can do find/replace using sed-like commands. The development branch has a Python 3 version, and I am in the process of a major rewrite that will show some complex logging functions with a sqlite3 database.
Attached is the bot we just went through. You can check your work against it or just save it to your own script and start using it as is!
EDIT: The attachment appears to have not worked properly. We will upload the bot to our GitHub and make it available ASAP. Check back later today for the full code with comments! In the meantime, feel free to copy out the code from this post and compile your own!
EDIT 2: You can now find all the code in one place here: Additional comments will be added later today, this is the raw code, exactly as shown in this blog post.
Feel free to hop on to Freenode and join us in #LinuxAcademy or shoot me a message (again, my Freenode nickname is OrderChaos) if you want to chat or have any questions!
Hello and thank you for such an awesome tutorial on how to create an IRC bot in python 3! I am somewhat new to python and i understand most of the code and what is going on, but i am a bit confused on what the .tell does and how it works? For instance, in my test room, i type “.tell [username] [message to user]” and nothing happens? What am i missing and is this just so i can tell the bot to send a message to another user in the room?
Also, how can i .lower the message.find? I would like to allow for lowercase input to say “hi” instead of “Hi”. Is this possible? I have tried message.lower().find(“”) but it did not work.
Thank you again for a great tutorial! =)
Hi there! Thanks for the kind words, I’m happy you enjoyed the tutorial.
So the .tell is to have the bot send a message to another user, yes. If you log in with two different accounts or have a friend you can test it with that.
As for the lowering part, message.lower.find([string]) should work so long as you are looking for all lower case letters in the find. So instead of “if message.find(‘Hi ‘ + botnick) != -1:” you would want to have “if message.find(‘hi ‘ + botnick.lower()) != -1:” to make sure that you are only looking for lower case characters. If you left it at the default, “if message.find(‘Hi ‘ + botnick) != -1:” you are looking for a capital H which will never be found. Make sense?
Thanks for the reply Sean,
How would I accept multiple strings for the same reply? For instance if someone says “Hi” or “hello”. Is there a way to accept these statements on 1 line instead of having to make 2 different “if” statements?
Also, I still cannot get the “.tell” to work. I have added a 2nd user to a test room, and I (bot admin) type in “.tell [2nduser] [message]” and nothing happens. I have tried with and without “”.
Nm, figured it out. I was expecting to see a message in the main chat, but instead its a PM lol.
Hi Sean, and thanks for this well explained tutorial.
I have the below error when trying to connect to a server.
NOTICE AUTH :*** Found your hostname (cached)
PING :D1008F29
451 JOIN :You have not registered
how can I modify the
def ping():
ircsock.send(bytes("PONG :pingis\n", "UTF-8"))
function so the number after PING is added to PONG?
Thanks so much for this. I get an error on line 12:

12 ircsock.send(bytes("USER " + botnick + " " + botnick + " " + botnick + " " + botnick + "\n", "UTF-8"))

The error says "TypeError: str() takes at most 1 argument (2 given)". I even copied and pasted the line in directly and get the same. Any ideas?
Feel free to delete this comment, found out the server I was using used Python 2.7!
No worries, we’re here if you need any further help!
Thanks so much for your tutorial. I've got it running and it connects to the IRC server and says that it found my hostname. Then the console goes black (I assume it's running) but the bot doesn't join the specified channel. For reference I'm not connecting to Freenode, but Snoonet (irc.snoonet.org). Any help would be much appreciated!
When it comes to managing SMS messages with the Vonage Messaging API, receiving inbound SMS messages is paramount. In this tutorial, we're going to explore receiving SMS messages into our app and pushing them out to our front end in real-time using Blazor and SignalR Core.
Jump Right to the Code
If you just want to jump right to the code, just visit this demo's repository in GitHub.
Prerequisites
- You'll need a Vonage API account
To create the app, navigate to the directory where you want to put the project folder and run the following command in your terminal.
dotnet new blazorserver -o ReceiveSmsBlazor
After that's done, cd into the ReceiveSmsBlazor folder that we created and run code . to launch VS Code. Alternatively, you can open the ReceiveSmsBlazor.csproj file in Visual Studio.
Install NuGet Packages
There are two NuGet Packages that we are going to need for this demo.
Go back to your console and run:
dotnet add package Microsoft.AspNetCore.SignalR.Client
dotnet add package Vonage
Create the SMS Hub
SignalR disperses all messages through central "Hubs." A hub will notify all clients connected to it when a particular event occurs; in this case, the events we are going to be receiving are inbound SMS messages.
Create a folder called Hubs. In that folder, create a file called SmsHub.cs and import SignalR with using Microsoft.AspNetCore.SignalR; within it. Then, create an SmsHub class within the ReceiveSmsBlazor.Hubs namespace like so:

namespace ReceiveSmsBlazor.Hubs
{
    public class SmsHub : Hub
    {
    }
}
You don't need to add any code in here; we'll be driving this from outside of this class. It just needs to be here so that we can talk to all the clients of the Hub.
Build our Controller
We are going to need to create an API controller. This controller is where we are going to be receiving our inbound SMS webhooks from Vonage.
Create a new folder at the top level of our project called Controllers, and add a new file SmsController.cs to this folder.

Open up SmsController.cs and add the following using statements to it.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;
using Newtonsoft.Json;
using Vonage.Messaging;
using System.IO;
using System.Threading.Tasks;
using ReceiveSmsBlazor.Hubs;
Next, create an SmsController class in the ReceiveSmsBlazor.Controllers namespace extending Controller, and annotate it with a Route of api/[controller] and an ApiController attribute like so:

namespace ReceiveSmsBlazor.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class SmsController : Controller
    {
    }
}
Inject the SmsHub Context
To access the SmsHub, we'll dependency-inject its context. Create an IHubContext<SmsHub> property called HubContext in your SmsController class, then add a constructor taking an IHubContext<SmsHub> as an argument, and assign our HubContext property to that value.

IHubContext<SmsHub> HubContext { get; set; }

public SmsController(IHubContext<SmsHub> hubContext)
{
    HubContext = hubContext;
}
Add the Inbound SMS Webhook Action
Now that we have our Hub available, we just need to add a method to handle the inbound SMS and send it to our client.
Add a new async method called InboundSms that returns a Task. In this method we are going to read the SMS in from the request's body, then send a message out to all of the clients connected to our SmsHub containing the inbound message's Msisdn (the number the message originated from) and Text.

[HttpPost("[action]")]
public async Task<IActionResult> InboundSms()
{
    using (var reader = new StreamReader(Request.Body))
    {
        var json = await reader.ReadToEndAsync();
        var inbound = JsonConvert.DeserializeObject<InboundSms>(json);
        await HubContext.Clients.All.SendAsync("ReceiveMessage",
            inbound.Msisdn, inbound.Text);
    }
    return NoContent();
}
If you've been following along, the route will be api/sms/inboundsms on our server when it launches. Keep this in mind; it's where we'll point our webhooks later.
Configure Middleware in Startup
Since we are using controllers and SignalR, we need to make sure that we configure the middleware. Open up Startup.cs and add using Microsoft.AspNetCore.ResponseCompression; to the import section. Then drop into the ConfigureServices method and add the following.

services.AddSignalR();
services.AddResponseCompression(opts =>
{
    opts.MimeTypes = ResponseCompressionDefaults.MimeTypes
        .Concat(new[] { "application/octet-stream" });
});
This snippet will add SignalR and response compression for SignalR to your application.
Add Controller and SmsHub
The next thing we need to do is tell our application to use our controllers and our SmsHub. Drop into the Configure method in Startup.cs and add the line app.UseResponseCompression(); at the beginning of the method. Then, in the UseEndpoints delegate, add endpoints.MapControllers(); and endpoints.MapHub<SmsHub>("/smshub");. When it's all said and done, your app.UseEndpoints should look something like this:

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapBlazorHub();
    endpoints.MapHub<Hubs.SmsHub>("/smshub");
    endpoints.MapFallbackToPage("/_Host");
});
Add a Frontend
Now that we have all the internal stuff working for handling the inbound SMS and we have all of our services and hubs stood up, the last thing we need to do is to create a frontend.
If you ran dotnet run in your console and opened the app on localhost:5001, you'd see the generic Blazor starter page.

If you want to remove the navigation bar on the side, just go into Shared/MainLayout.razor and remove the sidebar div.

Everything else we need to do is in Pages/index.razor, so go ahead and open that file now.
Pull in Dependencies
Delete everything below line 2 in index.razor, as it's just generic hello-world code. With that done, let's import our SignalR client, inject a navigation manager, and declare that we will implement IDisposable to clean up the SignalR hub connection when the component winds down.

@using Microsoft.AspNetCore.SignalR.Client
@inject NavigationManager NavigationManager
@implements IDisposable
Add our Messages Table
We're going to display inbound messages in a table. Let's go ahead and add a simple table to our index.razor file with the following Razor code.

<h1>Messages</h1>
<table class="table" id="messageList">
    <thead>
        <tr>
            <th>From Number</th>
            <th>Message</th>
        </tr>
        @foreach (var message in _messages)
        {
            <tr>
                <td>@message.FromNumber</td>
                <td>@message.Text</td>
            </tr>
        }
    </thead>
</table>
The foreach will loop through our page's _messages collection and create a new row with each message's FromNumber and Text.
Handle Inbound Messages
Back in our controller, we used the call await HubContext.Clients.All.SendAsync("ReceiveMessage", inbound.Msisdn, inbound.Text); to send the inbound message's Msisdn and Text down to all clients connected to the SMS hub. Now it's time to connect to that Hub. I'm going to show you the whole @code block and then walk you through how each part operates.

@code {
    private HubConnection _hubConnection;
    private List<Message> _messages = new List<Message>();

    private class Message
    {
        public string FromNumber { get; set; }
        public string Text { get; set; }
    }

    protected override async Task OnInitializedAsync()
    {
        _hubConnection = new HubConnectionBuilder()
            .WithUrl(NavigationManager.ToAbsoluteUri("/SmsHub"))
            .Build();

        _hubConnection.On<string, string>("ReceiveMessage", (from, text) =>
        {
            var message = new Message { FromNumber = from, Text = text };
            _messages.Add(message);
            StateHasChanged();
        });

        await _hubConnection.StartAsync();
    }

    public bool IsConnected =>
        _hubConnection.State == HubConnectionState.Connected;

    public void Dispose()
    {
        _ = _hubConnection.DisposeAsync();
    }
}
Declare Private Fields and Message Class
The following bit of code declares the private fields of the component. The _messages list is where we will collect incoming messages.

private HubConnection _hubConnection;
private List<Message> _messages = new List<Message>();

private class Message
{
    public string FromNumber { get; set; }
    public string Text { get; set; }
}
OnInitializedAsync
We override the OnInitializedAsync method; this is the method that is invoked when the component is initialized. It initializes the _hubConnection (the connection to our SmsHub). It then registers a handler for the ReceiveMessage callback that we invoke from our SmsController. Whenever that event occurs, the handler fires: we add the inbound message to our _messages collection and then call the StateHasChanged function to tell the component that it needs to re-render, which makes our message list reload live. Lastly, we start the _hubConnection.
Testing with Ngrok
Now that we have everything built, we can go ahead and get testing with ngrok. Ngrok allows us to create a tunnel to our server from our local machine. If you are using ngrok with IIS Express, I encourage you to check our explainer on the subject, as there are special considerations. For this demo, since we are using the dotnet CLI, we don't need to worry about this. Open up your Properties/launchSettings.json file; in the profiles object, the profile named after your app is the one used when you start up. Check the applicationUrl; it will typically point to https://localhost:5001 and http://localhost:5000. Ensure that you delete the non-SSL one of these and then run the following command in another command prompt window.
ngrok http 5000
Replace 5000 with whatever port number your app is going to be running without SSL.
This command will build a tunnel to that port. You can now run your application by using the dotnet run command. Check back at your ngrok console; it will tell you the publicly accessible URL of your app.

In my case, navigating to the forwarding URL ngrok shows, I can see my application's frontend.
Configure your Webhooks
The route for inbound SMS webhooks is /api/sms/inboundsms with our ngrok URL as the base path, so the full route is the ngrok forwarding URL followed by /api/sms/inboundsms. All we need to do now is navigate to our settings page, set the inbound messages URL to that route, and set the HTTP method to POST-JSON. Click the Save Changes button at the bottom, and we're good to go.
Test
To test, all we need to do is send a text message to a Vonage Number associated with this account. That can be found on the dashboard under numbers->Your Number.
It will look something like this after you receive a message.
Wrapping Up
That's all that's needed to allow us to watch SMS messages come into an app in real-time. It's clear that by combining SignalR, Blazor, and the Vonage APIs, we can build some cool stuff!
Resources
- The code for this demo can be found in GitHub
- You can learn more about the Vonage APIs on our Developer Website
Vector in C++
Reading time: 35 minutes
Vector is a sequence container that can store elements, but its size can change dynamically (a C/C++ array's size cannot). A container is an object that holds data of the same type. Sequence containers store elements strictly in linear sequence. We can store int, string, or float elements, depending on how we initialize the vector.
For example, below is a declaration of a vector of int (std::vector is defined in the <vector> header file):

vector<int> v;
Unlike an array, a vector can shrink or expand as needed at run time. The storage of the vector is handled automatically. To support shrink and expand functionality at runtime, the vector container may allocate some extra storage to accommodate possible growth, so the container may have an actual capacity greater than its size. Therefore, compared to an array, a vector consumes more memory in exchange for the ability to manage storage and grow dynamically in an efficient way.
Certain functions associated with the vector are:
Iterators
- begin() – Returns an iterator pointing to the first element in the vector
- end() – Returns an iterator pointing to the theoretical element that follows the last element in the vector
- rbegin() – Returns a reverse iterator pointing to the last element in the vector (reverse beginning). It moves from last to first element
- rend() – Returns a reverse iterator pointing to the theoretical element preceding the first element in the vector (considered as reverse end)
- cbegin() – Returns a constant iterator pointing to the first element in the vector.
- cend() – Returns a constant iterator pointing to the theoretical element that follows the last element in the vector.
Capacity
- size() – Returns the number of elements in the vector.
- max_size() – Returns the maximum number of elements that the vector can hold.
- capacity() – Returns the size of the storage space currently allocated to the vector expressed as number of elements.
- resize() – Resizes the container so that it contains ‘g’ elements.
- empty() – Returns whether the container is empty.
- shrink_to_fit() – Reduces the capacity of the container to fit its size and destroys all elements beyond the capacity.
- reserve() – Requests that the vector capacity be at least enough to contain n elements.
Modifiers
- assign() – It assigns new value to the vector elements by replacing old ones
- push_back() – It push the elements into a vector from the back
- pop_back() – It is used to pop or remove elements from a vector from the back.
- insert() – It inserts new elements before the element at the specified position
- erase() – It is used to remove elements from a container from the specified position or range.
- swap() – It is used to swap the contents of one vector with another vector of same type. Sizes may differ.
- clear() – It is used to remove all the elements of the vector container
Some basic functionalities
- We have stored 5 integers in our vector:
vector<int> v; v.push_back(2); v.push_back(5); v.push_back(1); v.push_back(3); v.push_back(4);
- If we use pop_back(), we’ll remove the last element.
v.pop_back();
Now the last element is popped out and only 4 elements remain:
2 5 1 3
- To remove the first element, we can use erase(). We need to pass the element’s position (iterator position), we want to remove, as an argument.
v.erase(v.begin());
Now the first element of vector v, i.e. 2, is erased. The elements left are:
5 1 3 4
- We can also remove the last element using erase.
v.erase(v.begin() + v.size() - 1);
Now, we have:
2 5 1 3
Since erasing an element requires moving all the elements after it (to keep storage contiguous for random access), the time complexity of erase is O(n).
- To get the first element, we use front,
v.front();
We will get the first element, i.e.,
2
- To get the last element, we use back():
v.back();
We get the last element, i.e.,
4
- To know the number of elements inside the vector, we use size():
v.size();
We get the number of elements in vector v:
5
To check whether the vector is empty, we can use the empty() method instead of comparing size() to 0.
v.empty();
size() v/s empty()
The empty() function is often preferred over the size() function for this check, for these reasons:
empty() does not use any comparison operators, so it is more convenient to use.
empty() runs in constant time regardless of container type, whereas some implementations of size() require O(n) time, such as list::size().
As with a simple array, we can use the [] and = operators:

v[0] = 10;
v[1] = 20;
v[2] = 30;
cout << v[0] << endl; // 10
cout << v[1] << endl; // 20
cout << v[2] << endl; // 30
Output
10 20 30
- To remove all the elements from the vector, we use,
v.clear();
Using algorithm
We can use algorithm's sort to order the vector elements in an ascending order.
#include <algorithm>

sort(v.begin(), v.end());
The vector will get sorted in ascending order:
1 2 3 4 5
And in descending order, using the greater<int> comparison as the third argument:
#include <algorithm>

sort(v.begin(), v.end(), greater<int>());
This is what we will get:
5 4 3 2 1
In order to avoid writing sort(v.begin(), v.end(), greater<int>()); everywhere, we can put this code into a void function, passing the vector as an argument.

This can be done in 2 ways:

1. As a reference

void desc_sort(vector<int> &v) {
    sort(v.begin(), v.end(), greater<int>());
}

desc_sort(v);

2. As a pointer

void desc_sort(vector<int> *v) {
    sort(v->begin(), v->end(), greater<int>());
}

desc_sort(&v);
In both of the above cases, we will get a vector sorted in descending order.
StorageManager Class
StorageManager is a Python utility class designed to make Storage easy to use in Python Extensions. It also has a number of useful features for creating Python properties and dependable collections. Although StorageManager performs some complex Python tasks, using it in your extension is easy, and this article will give you a working knowledge of its use. An intermediate understanding of Python is assumed throughout.
TIP: if you want to use the property and dependability features of StorageManager, but don't need the values to be kept in Storage (between saves and extension initializations), consider using the createProperty function in TDFunctions.
For technical class details see TDStoreTools
Adding StorageManager to an Extension
StorageManager is meant for use in Python extensions. The common way to include it in your extension is as follows:
from TDStoreTools import StorageManager

class ExampleExt:
    def __init__(self, ownerComp):
        # The component to which this extension is attached
        self.ownerComp = ownerComp

        storedItems = [
            # Stored Items List goes here
        ]
        self.stored = StorageManager(self, ownerComp, storedItems)
Notice that the import statement at the top imports StorageManager. Then, in the __init__ method, a storedItems list is created. This list contains dictionaries of information about each item to be stored. We then pass the extension (self), the component whose storage will be used (ownerComp), and the storedItems list to the StorageManager class constructor.
The Stored Items List
The most important thing for you to set up when using StorageManager is the stored items list. Each entry in the list contains an item dictionary with information about the item to be stored. There are five keys which can be defined in each item's dictionary:
- name - the name of the item to be stored. This is the only required key. If lower case, this stored value's property will not be promoted.
- default - the default value of the item to be stored. If not provided, None will be used.
- property - if True, an associated property will be created to access this value. More on this below. Defaults to True.
- readOnly - if True, the created property will be a read only value. No effect if property key is false. Defaults to False.
- dependable - if True, the value will be deeply dependable if it is a container. Non-containers are always dependable and this has no effect. Defaults to False.
The items defined in this list will be created in your Component's Storage dictionary, and will thus be saved on disk when your .toe is saved. They will also retain their values when your extension is reinitialized. Let's look at some example item dictionaries before going into further detail about each dictionary attribute:
{'name': 'Value0'}
# A stored value called "Value0". Default values will be used for other
# dictionary keys, so it is exactly equivalent to:
{'name': 'Value0', 'default': None, 'property': True, 'readOnly': False,
 'dependable': False}

{'name': 'value0', 'default': 23}
# A stored value much like "Value0" above, but it won't be promoted, and its
# starting value will be 23.

{'name': 'ValueList', 'default': [1, 8, 42], 'dependable': True}
# A stored, deeply dependable list that will start with contents [1, 8, 42]

{'name': 'ID', 'readOnly': True}
# A stored value called "ID" whose promoted property will be read only.

{'name': 'Internal', 'property': False}
# A stored value called "Internal" that will not be accessible via property,
# but only through Component storage or the StorageManager object.
Accessing Stored Items
There are a number of ways to access items stored by StorageManager, depending on the way the items are set up in the stored item list. For the following examples, let's say that you have an extension set up like this:
from TDStoreTools import StorageManager

class ExampleExt:
    def __init__(self, ownerComp):
        # The component to which this extension is attached
        self.ownerComp = ownerComp

        storedItems = [
            {'name': 'value0'},
            {'name': 'Value1', 'default': 0},
            {'name': 'ID', 'readOnly': True},
            {'name': 'Internal', 'property': False}
        ]
        self.stored = StorageManager(self, ownerComp, storedItems)
Access via StorageManager Object
All stored items are available via the StorageManager object itself, which operates like a dictionary keyed by the stored items' names. As an example, here is a function that would work in ExampleExt to print and increment Value1:

def changeValue(self):
    print(self.stored['Value1'])
    self.stored['Value1'] += 1
Stored items can be read and written in this way whether or not they are defined as read only or having a property associated with them.
Access via Properties
If a stored item has a property associated with it, it can be accessed more directly. A stored item's property will have the same name as the stored item, so, duplicating the example above, Value1 can also be accessed in this way:

def changeValue(self):
    print(self.Value1)
    self.Value1 += 1
If the stored item is set up as read only, it cannot be changed through its property. Looking back to the extension definition above, notice that the
ID stored value is set to read only. This means that you can do something like
print(self.ID)
but
self.ID = 5
would cause an error. If you want to change
ID, you'd do it like this:
self.stored['ID'] = 5
If your extension is promoted, any members or methods that start with a capital letter will be accessible directly via the Component they are attached to. This is true of properties as well, so if ExampleExt were promoted on a Component in root named "exampleComp", you could do something like this:

op('/exampleComp').Value1 = 12
Note: in our example extension, this method of access would not work with value0 because it is lower case, with ID because it is read only, or with Internal because it has no property associated with it.
Limitations to StorageManager

Only objects that can be saved by Python can be stored using StorageManager. This includes most built-in Python objects, but does not include many TouchDesigner objects, notably Operators. If you want to store an Operator, the easiest way is to store its path instead. You will get an error if you attempt to store a dis-allowed object.
In file included from ../lash/list.h:37,
from socket.c:36:
../lash/xmalloc.h: In function 'lash_strdup':
../lash/xmalloc.h:51: warning: incompatible implicit declaration of built-in
function 'strdup'
socket.c: In function 'lash_lookup_peer_name':
socket.c:266: error: 'NI_MAXHOST' undeclared (first use in this function)
socket.c:266: error: (Each undeclared identifier is reported only once
socket.c:266: error: for each function it appears in.)
socket.c: In function 'lash_lookup_peer_port':
socket.c:279: error: 'NI_MAXSERV' undeclared (first use in this function)
This seems to be a little bit odd : in lash/xmalloc.h, _GNU_SOURCE is defined
by :
#ifdef LASH_BUILD
#define _GNU_SOURCE
#include "config.h"
#endif /* LASH_BUILD */
So everything should be fine: socket.c includes lash/list.h, which includes lash/xmalloc.h. But it fails ...
Created an attachment (id=158509)
patch proposal
I just copy/pasted code from lash/xmalloc.h into socket.c and it works.
it should just define _GNU_SOURCE over all the sources I think...
well... I don't like the current situation, so I applied your patch anyway.
Thanks!
Brian, g++ 3.0 and later follow the ISO/ANSI standard, as do Borland's compilers, and to a lesser extent, Microsoft Visual C++. If you are using any ISO compliant compiler, dropping the .h is the correct thing to do, BUT if you do, you must remember that standard compliant compilers put names from headers without the .h into a structure called a namespace, with the namespace name std. Compilers that claim compliance with the standard are supposed to put names from header files with .h extensions into the global namespace, which is accessible everywhere.

With the huge libraries C++ has, you can have name clashes between your names and the library names in a large project. Namespaces help prevent the name clashes. Most compilers maintain the .h header files for backward compatibility, and you can use them if you like. It is better C++ style to use the headers without the .h.

So, if you use headers without the .h, you then must make the names (that are then in the namespace std) available with the statement

using namespace std;

Put this outside any function or class, anywhere after you #include the header but prior to any function, class, or declaration that uses the name from namespace std. There are some niceties that you could do to keep names from being available in the entire file, but until you have a better grip on C++ don't worry about this. Just do it the easy way.

With most modern compilers, most access to libraries is automatic. Even when it is not, you normally use compiler switches to make the compiler search a library. Access to header files is also automatic. The preprocessor does that, following the #include directive. Email me privately, I'll try to help you if you like.
David Teague

On Fri, 28 Jun 2002, Brian Nelson wrote:

> Date: Fri, 28 Jun 2002 18:27:06 -0400
> From: Brian Nelson <nelson@bignachos.com>
> Subject: Re: makefile to get iostream.h, etc
> Resent-Date: Fri, 28 Jun 2002 18:27:52 -0400
> Resent-From: debian-user@lists.debian.org
>
> "Paul Scott,,," <waterhorse@ultrasw.com> writes:
>
> > Hi,
> >
> > I have been working on a program to recover some data from a broken ext2
> > partition. I have been using streaming objects.
> >
> > When I compile with g++ with no options the header files iostream.h and
> > fstream.h are found just fine.
> >
> > I am now trying to write a makefile which I haven't done for many years
> > and not on a Linux system. I get:
> >
> > fixext2fs.c:1: iostream.h: No such file or directory
> > fixext2fs.c:2: fstream.h: No such file or directory
> > fixext2fs.c:3: iomanip.h: No such file or directory
> > In file included from fixext2fs.c:8:
> > Ext2fs.h:4: fstream.h: No such file or directory
> > In file included from Ext2fs.h:7,
> > from fixext2fs.c:8:
> > display.h:4: iostream.h: No such file or directory
> > display.h:6: sstream: No such file or directory
> > display.h:7: fstream.h: No such file or directory
> > make: *** [fixext2fs.o] Error 1
>
> First of all, drop the .h extension from the #include directives. They
> are deprecated in C++. They should look like:
>
> #include <iostream>
> #include <fstream>
> ...
>
> > How do I tell the makefile how to find these?
>
> Don't.
>
> --
> Brian Nelson <nelson@bignachos
Ahmed ObaidOctober 1, 2009 3:23 am
I think the most important part is not to take that criticism as a personal attack, and deal with it with some wisdom.
haydynOctober 1, 2009 3:26 am
one of the hardest things i find working in a blue chip corporate environment, but honestly the more constructive criticism you take and work with it, the better designer you will become!
Le MarquisOctober 1, 2009 3:30 am
Oh boy. This sounds just like me… I have a hard time receiving negative responses to my designs, although my employer is one of those that gives me useless critiques which I have a hard time working with. As a designer you really want to know what's wrong, and not hear something like: "That logo sucks" or "I don't like the design". In all of those cases it's back to the drawing board, back to the beginning…

So number 7 is for me to work on… This article is a true life saver…

Many thanks for sharing
So numbers 7 is for me to work on… This article is a true life saver…
Many thanks for sharing
FrontPage BlogsomeOctober 1, 2009 3:34 am
Thanks Andrew. Sometimes the first response is to strike back at the criticism of our design, even in an ad-hominem tone. But a positive response usually freshens our feeling and creativity. So, yes, "thanks to the critic."
JayphenOctober 1, 2009 3:49 am
Nice article, love the illustrations!
FrancisOctober 1, 2009 3:53 am
Great reminder. Every time I got a bad critique, I ended up designing something that I was really proud of. Most of those pieces are in my portfolio now! Criticism is good.
Luke JonesOctober 1, 2009 3:55 am
I’ve always been able to take and give criticism well, without being too vague or subjective (though there’s always subjectivity in design criticism). One thing that does bug me, however, is when people are lying or telling a white lie to make a new designer feel better. It’s better to be one hundred per cent truthful to make somebody improve, rather than lie and let them carry on with something that is downright vile, and I’m sure many of you agree with me.
A really nice article there; I even started a thread over at the Web Designer Forum asking people to criticise themselves.
PaulOctober 1, 2009 3:59 am
I would love to work in a company where my boss actually knew what he was talking about. I crave criticism and always welcomed it at uni when meeting with my design lecturers, but working for someone who is more clueless than my left foot grinds me down!
AndréJanuary 30, 2010 6:28 pm
That`s true Paul. I`ve been through this many times. By means of not letting my capacity and even my motivation to get killed, and on the other hand not killing my boss, i left and went find somewhere else to work. Best thing i did.
BeckOctober 1, 2009 4:43 am
I know how hard is to get bad feedbacks from our clients/bosses because that happens all the time.
Sometimes you get paid not to really design, but to draw and format those elements strictly according to the boss/client's point of view, and that, my friend, is the saddest day of work.
However, criticism is always good; it makes you come down to Earth and realise whether those infinite hours you worked were worth it or not. If not, then you realise that the one who needs to change is you. Sometimes criticism even comes in a rewarding way, I guess, most of the time though.
BJMRamageOctober 1, 2009 5:04 am
Love this article. I posted a link to it on a graphic design forum i peruse. I also included a tiny snippet to entice people to visit this page.
It really is a learning tool. Don't ignore it; be a sponge and absorb it. If you don't act on something now, it may come back to you when you feel a design isn't there, and that earlier critique will help.
Michiel EbberinkOctober 1, 2009 5:17 am
I've noticed that I don't get really bad criticism too often anymore, but don't get me wrong, I've had my share. Sometimes I still get a good load of criticism. The worst design processes are the ones without any criticism.
Craig WannOctober 1, 2009 5:49 am
It really helps me to have a boss who will rip apart my designs if need be. It was hard to take at first, but it gives you the freedom to try new ideas. You know if it sucks, it will never make it out the door…
Andrew is right, it would be much harder to grow without someone to challenge your designs.
Martin Bentley KrebsOctober 1, 2009 5:51 am
“Constructive” is a key word when it comes to criticism, but the reality is that sometimes we get opinions from someone who is barely qualified to give an assessment. So be it! Whether they’re qualified or not to render an opinion is not the point; sometimes, we are charged with executing to the wishes of the “internal client” first, then the external client second.
The best lesson I ever learned over the years was to separate the issue from the tissue—who I am is SO much more than what I do.
Great article!
Harry MOctober 1, 2009 5:54 am
When in college I had a prof in our video classes who would bring your work up in front of everyone and tear it apart. He tore into mine one time and I never forgot it, so when I shot I made sure I took into account everything he said, and it made me a better shooter. He was really tough on people, and those who rose up he kept pushing. I was one of the only people he wrote a letter of recommendation for in the end.
I just sat through a session Monday with clients and their print designer and had a session similar to all this. I had to defend a few things since print and web design look at different things, but the elements I have added have made my stuff better.
Criticism is just part of this business, and so few people can take it. Whether it's nicely given or shoved down your throat, you need to accept it, and smile.
jOctober 1, 2009 6:20 am
def something i need to work on .. ha. thanks SMASH!
Wib WalkOctober 1, 2009 6:40 am
Great tips!
There is a big difference between “constructive feedback” and “criticism”, I just wish that people knew what it was.
jenniferOctober 1, 2009 6:42 am
Great article! Criticism is necessary and helps make things better. I find myself getting very frustrated, though, when the client wants things that just look bad or tacky; I don't want to put my name on something sub-par.
mgsesqOctober 1, 2009 6:48 am
Great article… I definitely know some people who should give this a read.
Being able to accept criticism is important to a designer, but almost as important is being able to defend your design decisions. I find that outside of creative fields many times people just ‘don’t get it’ until you explain it to them. The subtleties of symbolism and metaphors are sometimes lost on those lacking a background or passion for art.
IanOctober 1, 2009 6:49 am
I think, as many have already said, it's sometimes difficult taking criticism from people you think aren't qualified to give it. A bitter pill to swallow when you know your idea is a winner.
But yes I have been taken down different routes and from that the final product has benefited from the initial criticism. Yet many a time that criticism and client led projects have ended in a less than effective design at best.
Many a time I try to explain to clients not to personalize their website but to look at it through their customers' eyes instead. This usually provides common ground to take things forward and get them onside with the initial drafts.
Yet I would disagree with the above post: no criticism means I get the project billed and paid quicker, and no one can argue with that.
Great article, as always.
rmlumleyOctober 1, 2009 6:57 am
#7 is definitely the one I run into the most. The best is when people don’t like the color, but don’t suggest a different option. Sometimes I feel like they think there is some undiscovered color that we could be using.
patvillaruzOctober 1, 2009 6:59 am
wow! a lifesaver post… it helps alot.. thnx for sharing!
Stefan ReichertOctober 1, 2009 7:18 am
Great article. Thanks for that.
Rajesh TrilokhriaOctober 1, 2009 8:00 am
I do agree: criticism of your design always takes you to the next level, where you think from others' perspective :)
Very nice article, I love this…….
dlvOctober 1, 2009 8:33 am
lovely and great article !!
Nice read
Congrats for doing it, I think it was hard to create…
Also, thanks for sharing.
Michael WernerOctober 1, 2009 8:54 am
I don’t see criticism as a negative at all… because I’m working as an “owner advocate”.
In foodservice and hospitality design especially, functionality is an absolute must; thus constructive criticism fuels great design. My role differs in that we must bring an added level of expertise with operations and an understanding of what the client is trying to achieve from an operational as well as a guest perspective. In my business, I have to see everything from the client's point of view (including budget), so I welcome the challenge brought on by spirited discussions. Remember: "When you're green you're good; when you're ripe you're rotten."
ErikOctober 1, 2009 9:04 am
@rmlumley
HAHAHAHAHAHH +1
Quakeulf >:3~October 1, 2009 9:15 am
Everyone I know who convinces themselves they are doing this seriously and professionally takes criticism like a punch in the gut.
Personally, most critique won’t bite me because I know in 99% of the situations I have encountered it’s nothing worth thinking of. Unless I know I am at a loss I can always argue 100% for what I am doing to the point where the one giving me critique must give up. :3
JoelOctober 1, 2009 9:27 am
I don’t like the color blue. Everyone uses blue.
BOctober 1, 2009 9:39 am
Amazing Article!
mgsesqOctober 1, 2009 9:46 am
Joel,
“I don’t like the color blue. Everyone uses blue.”
You must be the people they talk about in number 4.
Or the guy rmlumley is talking about in an above response.
TCCBryanOctober 1, 2009 9:56 am
This is a great article for all levels of designers to consider. I am relatively new to the design world and am picking up the skills as a self-trained amateur. One of the really cool things has been starting a 12 week challenge with a close friend that will hopefully boost our inspiration, challenge us to try new things, but it also gives us the opportunity to leave comments and criticism. We have just started our 6th week of the “challenge” and I can already tell an improvement in my skill and design with the comments, criticisms, and suggestions I have been given on previous weeks projects.
I think I will use this article as a key reference every so often as a good reality check! Thank you.
Bryan – TheCreatorsClub.com
TorshOctober 1, 2009 10:50 am
Agree with everything… except the indictment of open, vague criticism. Everyone always complains about it, but all such criticism is saying the same very useful thing: "It's just not right." You don't have to have figured out exactly what is wrong to know that it's not working. Sometimes it's impossible to know. Maybe the work is so wrong that there's no way to improve it via specific suggestions. Either way, creatives should take these vague, open criticisms as a sign that they need to try something different. There's always another idea. Usually, the next one is better. (Unless you're not talented or hard-working, of course.)
Stop complaining about vague criticism, you lazy-ass creatives.
EricOctober 1, 2009 11:15 am
There is an entire spectrum of criticism between “It just doesn’t work” to “Make this blue and put it here.” I love thoughtful criticism (in fact I crave it), but having someone expect you to read their mind or giving direction so minutely specific that I become a mere mindless instrument without judgment are both quite discouraging. How about an article on how to give criticism?
Lisa BarringerOctober 1, 2009 12:02 pm
Wonderful article! Very poignant! Great points to apply, not only when it comes to designing Web sites but, generally, in life.
Jason Collin PhotographyOctober 1, 2009 12:11 pm
I can go either way with criticism, sometimes I can just brush it off, sometimes it would keep me up all night. I guess it depends on the source of the criticism. I often ask friends and colleagues for feedback on photos I’m working on and when I was designing my photography site. I constantly ask my wife her opinion when I’m working on a set of photos for a client.
Just as long as the criticism is meant to help, not just be inflammatory, I try to welcome it.
AdardesignOctober 1, 2009 12:24 pm
Great post, it's so true!
David AOctober 1, 2009 12:44 pm
Meaningful criticism requires critical thinking. That lets out 50% of the population (in America, anyway). It also requires that egos be checked at the door. Lop off another 40%. It also requires the client to remember (and stick to) the stated objective. Subtract another 15%. That leaves us with a negative number.
Hey! No wonder criticism is so often negative.
dvOctober 1, 2009 1:11 pm
Vague criticism often implies that people don't care, or didn't put in enough effort to really analyze something… and that is not encouraging to anyone on the receiving end trying to improve or take the feedback. The energy transfer here is super important.
great article
I. GregoryOctober 1, 2009 1:14 pm
Thanks for the great reminder Smashing! My designing has vastly improved over the past year because I have learned to take criticism from my peers and use it for good without compromising my work. I have found that Concept Feedback is a great site to get good quality peer review on my work.
Sadalmo GlivisOctober 1, 2009 1:31 pm
I guess you have never worked for tools who are dumber than dirt but happen to be political appointees and like nothing better than throwing around their clout. Criticism is wonderful if it comes from those qualified to give it. When it comes from dolts, it's time to slap on a shit-eating grin and kiss ass.
NelsonOctober 1, 2009 1:42 pm
Nice article. This is really hard stuff to deal with, but your tips are pretty good to survive in a feedback session.
ByColorOctober 1, 2009 1:58 pm
Don’t forget the opposite: when criticism gets too specific and too often, it can turn a designer into a simple pixel pusher.
Alex KomarovOctober 1, 2009 4:57 pm
It is a very nice article. I guess there are not too many responses because criticism seems to be such a scary and painful matter that people are afraid to read about it.
Cathleen GreenOctober 1, 2009 5:48 pm
This is a well-timed article, thank you. I think all of us at one time or another have felt attacked and defensive, as you mentioned, in the face of a piece of stinging criticism; I know I am guilty as charged. As a new designer trying to find a place in this tough market, it's especially important to understand how to respond productively to clients when things get a bit sticky.
zeropxOctober 1, 2009 6:56 pm
This was a wonderful article! What I gained from it will help so much in my future. I appreciate the write-up.
AurimasOctober 1, 2009 7:30 pm
Glad you liked the illustrations… I was chuckling as I was putting them together. :D
Edmund ChehOctober 1, 2009 7:31 pm
I guess sometimes people fail to realise that they did not listen properly. It goes both ways, from the designer's point of view to the client's side. If you consider hearing others' criticisms as the equivalent of listening to their thoughts on the project and on how they see your visuals, I think you'll feel half as bad.
louellaOctober 1, 2009 7:45 pm
thanks for this post.. so timely :) was doing a mini outburst in my head, then i read this and thought… hehe… im gona do this instead
yoxxOctober 1, 2009 8:28 pm
yeah nice :) but not really original..
SijeeshOctober 1, 2009 8:57 pm
This was a wonderful article!
sharmelaOctober 1, 2009 10:26 pm
Good article.
I just love it when people criticise but don't offer proper feedback, e.g. "Oh you know, I'll leave it to you."
And then two weeks later, after the sample artwork has been delivered: "Oh, but that wasn't what I wanted."
Gee dude, what do I look like, a psychic?
Then it’s back to the drawing board.
But otherwise, two or three brains (who more or less know creatively which direction they're heading) are always better than one.
Rafa CarrascoOctober 1, 2009 10:33 pm
Excuse me, are you telling me you don’t like my design?
Arrrrrrghhhhh!!! I'll kill you and then kill myself!
:-)
mcnuggets86October 1, 2009 10:34 pm
excellent article, it’s always good to be reminded about the little things that help make a project big.
Kashif M QasimOctober 1, 2009 11:55 pm
Great One! And of course very true. Keep up the good work SM.
TimOctober 2, 2009 1:05 am
Glad you chose that Churchill quote and not the one where he said, “I am strongly in favour of using poisoned gas against uncivilised tribes…”
Still, twas a helpful article all the same.
PepsuberOctober 2, 2009 3:54 am
A good article…. but I never saw Stubbs or worms. Thanks though.
LaurenOctober 2, 2009 7:40 am
“Constructive” criticism is the hardest part of any creative job. What I’ve learned: if I keep getting the same type of correction — in my case, design elements are often seen as too low key — there’s something to the criticism and I should re-examine my overall strategy. But my least favorite approach is when someone (esp. if they’re not a designer) just says “use this” without giving me a whack at whatever isn’t working. Ugh.
Phil SpencerOctober 2, 2009 7:43 am
Some very salient points, but I've got to question this:
"Design is subjective and, like any art form, has no rulebook."
Design is NOT art and DOES have a rulebook, which is defined by the objective. See your own item number 2:
"Clearly identifying the goal of a design before you share it with others is always a good idea."
TamiOctober 2, 2009 11:31 am
Of course negative feedback stings. But it's necessary.
There are several comments above that talk about getting poor feedback, i.e. "this logo sucks" or "I don't like it". It's usually not the critic's job to give you concise, accurate feedback about a design; it's our job as designers to figure it out. That's the point of #7. If a client doesn't like a logo, why? Does it not accurately represent their brand? And why are they requesting a bigger logo? Do they feel the brand is getting lost behind an overpowering background? When a client asks for red text, it's our job to understand they're saying that text needs to be emphasized in some way, not necessarily made red, and, often, to help the client understand that too.
To dismiss feedback as poor or not constructive is lazy. They're not the designer; you are, and you need to be able to interpret what that feedback means. After all, design is communication, and if the designer can't excel at communication, then it's the designer that's bad, not the critic.
EmilyOctober 2, 2009 2:10 pm
Oh, #7.
So often my request for clarity or additional details about a client's criticism, such as "I don't like ____" or "this part is ugly", is met with "isn't it YOUR job to figure out what to do??"
What do you do with plain ornery clients who are paying the money, but refuse to be part of the creative process? Essentially, the choice is between losing that account, or spontaneously learning the ability to read minds. (At least that’s how it feels…)
Edit: I just read Tami’s response above mine, and I want to clarify: It’s not wise to blindly interpret a client’s reasoning for why they don’t like a specific thing, or why they want some god-awful change. More information is needed to correctly interpret their desires, or else you risk designing on some tangent based on what you THOUGHT they wanted, and shooting completely off the mark. Wasting both your time, and theirs. That is why additional clarification and discussion is important, communication from BOTH client and designer.
paulOctober 2, 2009 2:45 pm
why was my comment not published?
asdfgOctober 2, 2009 3:29 pm
Honestly I don’t think clients or people that don’t know what they’re talking about, should give you bad criticism because it can never be constructive, since they don’t know what they’re talking about. What they’re saying is their subjective opinion and in my experience it rarely helps improve my design, usually makes it worse. But you have to do it anyways since they’re paying for it or you can turn the project down. Sometimes however if you’re dealing with someone that either has been working with artists or designers for a while (not necessarily a designer himself) they can really give you constructive feedback and point you in a right direction.
And I don’t think designers should be the only people that should take bad criticism. Take your clients for example, would you criticize the way they do business if you were their customer? And if so do you think they would value your criticism? Sometimes sure, but for the most part people in their position choose who they take criticism from and I think that’s a good thing.
ElizaOctober 2, 2009 4:43 pm
I feel like when it comes to art & design, everyone can have a valuable opinion, regardless of whether they're "qualified." Of course I put more weight on certain opinions over others, but I don't think that just because someone has a business degree instead of fine arts, or is a plumber instead of a designer, they won't have valuable insight.
A problem I see often is that some designers only respect feedback from other people in their industry (fellow designers, web developers, etc.), which is a mistake… what appeals to designers doesn't work for all people, especially when designing for an audience that is less comfortable with technology.
I do think a client's criticism should always be seriously considered, even if it seems silly; after all, they have probably been working with their customers a lot longer and may have better insight into what works for them.
percol8rOctober 2, 2009 6:36 pm
Wow — you all have such good attitudes! I think I do most of the time — but I currently work in a very micro-managed environment that takes every ounce of my good spirit to survive within.
It’s a nonprofit agency, which — I don’t know — seems to be a different beast. Several of my superiors are lacking in basic skill and savviness. So, it’s difficult to respect their opinions about my work. But they are paying my paycheck — so I do my best. (I am planning to leave for a better environment as soon as I can, though.)
These are people who don’t have any design training but still feel as though the First Amendment has given them the prerogative to be critical in random, ambiguous ways. Sure, design is subjective. But if someone likes to change colors or layouts on their own whim just because he is an officer in a company with hierarchy over you — that’s just bad management. I can’t tell you how many times I’ve tried to argue for consistency in what we produce in order to build our “brand” — but I get told to make changes that work against that.
Sometimes I put things out into the public that I am so displeased with — but in my environment making the director happy trumps relaying information in the most provocative, understandable ways.
Further, in my environment, these people don’t know how to do what I do — so they are oblivious to the time things can take to change. Sometimes a designer has to weigh how much time a certain tactic might require and might decide to do something more simple to stay on track or not waste resources. I want my best focus to be on a four-color report — not wasted tweaking a newspaper ad few are likely to see. Someone without any design training doesn’t have the sensibility to make calls such as these. Their playing designer wastes your time and the company’s resources — and leads to your burn-out and frustration.
So, ultimately, I’d just like to be treated as the trained professional I am — rather than be put in the awkward position of always having to incorporate others’ ill-advised and poorly-considered ideas into my work. I know what I am doing — I’ve won awards for my work. I would like to be trusted that I work hard, taking considerations of aesthetics into account while also having a bigger picture about time and resources. That’s the special expertise I bring to the table.
No one would hire a lawyer and then fail to utilize him to the fullest extent — imposing their ideas and suggestions over the lawyer’s strategy. So, why would someone feel entitled to do that to other trained professionals?
I mean, if everyone’s opinion is valid — why do we have design schools or encourage people to spend time in training? Why not just hire “just anyone” — if “just anyone” is qualified to evaluate and execute good design?
In sum, a good attitude is a great defense mechanism. But at some point good designers have to stand by their approach and their work. You earn respect by presenting a very clear, bold and assured demeanor. Acquiesce too much and you simply appear wishy-washy, not convicted, lacking in passion and niche — and, thus, valueless in your craft.
^I-thnk-I-no-UOctober 3, 2009 1:30 pm
The above story sounds strangely familiar. In fact I think I know you! This article is great and very timely. Communication skills are a must. I agree with all of the above. This is very common of nonprofits.
razorOctober 4,.
nevertheless great post.
A ReaderOctober 5, 2009 12:00 am
I agree with Ahmed Obaid's thought; this kind of thing is hard to master. It ain't easy to take it as it comes, especially when the clients ain't artists at all.
SelvamOctober 5, 2009 12:34 am
thanks….much informative….
JamesOctober 5, 2009 2:04 am
I think the article needs A LOT of work! ;)
Just kidding. Thanks for the tips.
SaschaOctober 5, 2009 2:56 am
While most of the article is really fine and helpful, one thing is dead wrong: "Design is subjective and, like any art form, has no rulebook." This is plain bullshit, sorry, and amongst the worst opinions permeating our profession. Design is NOT art. There surely exists a grey zone between design and art, but the possibility that your work falls into it is extremely low, so stop considering it. There are subjective decisions in design, agreed. But if you're not able to rationalise your concept, to explain to the people you cooperate with (i.e. clients) why you did something this way or the other, you shouldn't call yourself a designer. In design one works with (hopefully) clear objectives and constraints. There's a purpose. Hence, if design work does not support its purpose, or even violates it, it definitely can be wrong. Or right, in the opposite case.
Ted RexOctober 5, 2009 5:42 am
Excellent advice and a topic not usually covered. I posted a link to this page on my daily design blog.
Joe BarstowOctober 5, 2009 8:07.
CCOctober 5, 2009 8:38.
DocaliOctober 5, 2009 10:43 am
True, true… Thanks for the tips :)
sandeepOctober 5, 2009 10:38 pm
Honestly, the more constructive criticism you take and work with, the better designer you will become! That's true.
FusionOctober 6, 2009 3:37 am
Design is not art! Don't say that! It is much more than art! Art = emotion, while Design = function + form + emotion…
Don't put these things in the same bag. That is why we designers sometimes get that kind of "criticism"…
NiubiOctober 6, 2009 3:40 am
I always accept constructive criticism – how else am I to grow and improve my work? Likewise, I always try to positively criticize my friends and colleagues when I think I can help them to improve something in their life/work. For example, DubLi.com didn’t have a search box for a long while, an omission which I thought was a bit odd for a shopping portal. Now they do! Although I doubt it was my lone pointer which changed that!
DaemonOctober 6, 2009 8:44 am
“Design is subjective and, like any art form, has no rulebook. ”
Stopped reading there.
Design is not ART. Painting a picture is art. Design is science. As such, it is very strict. There ARE rules, and there IS a rulebook. Why do most sites have a top-left logo? Why do buttons need to look like real-life buttons? There are millions of things that DESIGN does in today's world, and none of them can be called ART.
Art is something everyone understands differently. Design must be understood EQUALLY by all.
thehilarionOctober 6, 2009 5:54 pm
Thank you for the article and especially for the thread that’s followed.
“ByColor (October 1st, 2009, 1:58 pm)
Don’t forget the opposite: when criticism gets too specific and too often, it can turn a designer into a simple pixel pusher.”
Amen to that. I'm constantly complimented as being 'easy to work with' but never happy with my work, as somehow I've surrendered my power to the client, who's suddenly no longer consulting my expertise.
“razor (October 4th,.”
Razor, you hit the nail on the head. They definitely are enamored by the initial 'wow' factor; then, when the honeymoon is over, they see the real site. And surprise, surprise… how often does it come full circle back to your original designs that addressed the original requirements?
Ravikumar V.October 6, 2009 8:56 pm
where is digg ??
Ray WenderlichOctober 7, 2009 7:05 am
The list of forums good for design feedback makes this article worth reading all in itself!
lsdOctober 7, 2009 7:06 am
Thank you for a great article. Helped me put all the pieces in place.
JennieOctober 7, 2009 8:50 am
I’m don’t agree w/ the idea that “Design is subjective.” I’ve always believed that design should be OBJECTIVE and I don’t think it should be viewed on the same level as art. Art is about self expression while design is about goals, business, success, etc. I think Smashing is a highly respected source in the online design community but it’s a very irresponsible statement to make. I don’t consider myself an artist but a partner in helping my company achieve their goals using design. The proper response to #1 is to reiterate what those goals were when setting out on the design (and I hope no one here is willy nilly designing something for the sake of making it look “pretty”). Of course there are a lot of clients/bosses where that won’t fly but at least stand up for those goals and let’s base the feedback around those instead of personal feelings. As a designer, it would be a lot easier to respond to “I don’t feel that is working because…” instead of “Make this red” so I think it’s also up to us to probe deep enough to ask “Why don’t you think this is working?”
tRevOctober 7, 2009 2:00 pm
I really have to disagree with most of this article.
When projects are client-driven, they end up looking like client-driven projects. I don't tell my mechanic how to fix my car, so why tell the designer how to do page layout?
I have run into plenty of clients who say something just to put their fingerprint on the project. In the end it hurts how my portfolio looks and, eventually, better job opportunities.
All egos set aside, "constructive criticism" is necessary, but it has to fit the information architecture of the design and have a purpose. Constructive: "Turn color X to color Y because our competitor uses color X." Unconstructive: "Use reflex blue because everyone else does", which is the coup de grâce for the design and the visibility of the company.
sanyalOctober 7, 2009 11:14 pm
Thanks for the article; this is something I always wanted to say but was never able to put into precise words. I don't know about industry, but for students it is very helpful.
frazerOctober 9, 2009 2:19 am
It's a subject that comes down entirely to how you are as a person and what you are interested in achieving. Ideally you focus objectively: one person trying to improve the work, and you trying in kind. Sadly it often becomes a macho standoff where whoever knows best is king. Eventually people will argue anything just for the sake of disagreeing and showing that they too can look at design critically, completely at the expense of the work. Online networks should change this, though: running ideas through discussion forums allows anonymous feedback, and people have time to calm down and think sensibly.
blur designs network
Jonathan PattersonOctober 12, 2009 6:55 pm
“Count to 10 before responding”… this might look a little odd. Instead, just get into the habit of distinguishing constructive AND qualified criticism from useless, haphazard remarks.
Once you learn to take criticism well you’ll be a better designer.
Krish MandalOctober 13, 2009 7:35 am
One thing I’ve come to realize after a very long time is that you are only a designer when you do the first iteration or what my mentor calls (and now I do too) “A straw man.” After that first iteration, you are no longer the designer, but only the person with the skills to help the client, who is really the designer. There has to be a place to start. Don’t make the straw man so perfect and put so much energy into it that it’s your “one perfect design.” It’s meant to be offered as a place to start, so the client / boss can feel like they have had a say and are involved in the process.
Then, after this first iteration, the design is no longer “yours.” With input from the client or boss, the design is now becoming “community property” and what you can bring to the table is guidance on design principles and to take in what they are asking for and finding ways to solve problems. That is what design is, a way to solve problems. People get too caught up in the “beauty” of the design. The question, the goal, is “does this design do the things we need it to do, and if not where does it fall short?”
As a designer, you are only a guide. Your job is to help the client or boss explain their requirements, and within the constraints of your industry and media, offer a solution that addresses those requirements. Note that their requirements may not be solidified, or even “correct;” they may even be in flux because of internal disagreements at the client site.
So when people get overly arrogant about “He doesn’t know how to design, I’m the designer here,” they are surely not recognizing their role. The designer is there to help bring to life the idea that the client has. And if you think that everyone sees the reds, greens, and blues in exactly the shades that you’ve picked, you’ve got a long climb ahead.
GemOctober 20, 2009 6:21 pm
I found this article led to a whole lot of internet beef about designers… that designers in general should analyse themselves to death and get a 'self-help' program going so they can sit there and absorb the limitless banal crap that comes out of people's heads. Seriously, people. Wisdom, sound judgment and constructive criticism are all useful and welcome tools that people seem to throw out the window when they 'want it the way they want it,' regardless of expertise or insight. So if you're thinking, "Why are the designers on their high horse? Why do they never want to do it my way… wa-wa-wa…" maybe you should get some humane perspective on how to respect and treat other people properly. Stop and think before you open your mouth. 'Do unto others as you would have done unto you' is a good starting point when you are providing feedback and criticism. Unless you'd rather drive individual designers into the ground so that they do not want to talk or work with you, or they feel like grabbing a BB gun and shoving it down your throat!
Reference here to great article by Paul Rand
JulietOctober 21, 2009 4:35 am
Hi
Good advice. Criticism can be hard to take and to put to good use.
Also, one important thing is to not take it personally and to not let it affect self-esteem.
Juliet
ElizabethFebruary 1, 2010 10:35 am
This is a great article.
A lot of attention is drawn to #7, and for good reason!
I recently was working on a project … I did 3 options to start with. The feedback I was getting was, “I don’t like this” and “Do more.” Since I was just chock full of ideas, I did about 7 options in all. Then I expected some constructive feedback in order to fine-tune an option or two. I got nothing!
It was the most frustrating experience of my life. The client would say, “I don’t like that font” or “that shade of blue” but wouldn’t tell me what she didn’t like about it. I asked for specific feedback, pinpointed her problem areas and tried to probe further to find a solution. She could tell me she didn’t like a particular font, but couldn’t say why and couldn’t provide examples of one she DID like. When I showed her a selection of widely used fonts to help narrow it down (bold, light, serif, sans), she brushed it off. I asked for constructive help in a million different ways, and what I got back was a smart mouth attitude that since I’m the designer, I should be able to figure it out.
Looking back, I did everything in this article (a pat on my own back). I accepted and welcomed the criticism and tried to build on it, I said thank you, and asked for more help in a polite manner, at all times letting the client know that the more information she gave me, the better the project would be.
All I remember her saying was, “These options don’t work. Next time, bring your A game.” I wanted to tell her, “Lady, all I have is an A game. And I am not a mind-reader.” Maybe I should anonymously forward her this article …
Jeff KMarch 2, 2010 4:22 pm
I’m part of a small team and I can handle the criticism and non-constructive feedback fine enough. I can sometimes read into what’s being said and make something of it. The part I’m having trouble with is when I get a response (from the client) like:
“Okay, looks good, but can you make this font bigger, make that line darker and move this over here. Oh, and there’s some room there, put a button there. Good, done, do it.”
With no room for rebuttal or input. Am I to just sit back and pixel push? Are there any UI design processes I can put forward to help prevent this?
DawnNovember 7, 2011 7:39 am
I agree with Juliet, not letting it affect self-esteem. I know this is hard sometimes. I catch myself getting my feelings hurt, or think people think I am an idiot. So I to take a deep breath and realize they are just trying to help me. I think sometimes it not really what we are saying but how we are saying it. It’s important that we consider multi personalities What might be okay for one person may destroy another. I think sometimes it wouldn’t hurt to make examples of ourselves when critiqueing, letting the other person know your personal experiances, and how and what steps you took that finally helped you. This way the person don’t feel like they are the only idiot not catching on and realize everyone sometimes just needs that extra support. This is just my opion. I always like to make everyone feel good, becaue usually it’s me that needs that extra help.
JamesJuly 5, 2012 4:23 am
My new business partner can’t take criticism so I am deciding to stop working with him! He is arrogant. I misread his personality at first, due to giving him too much benefit of the doubt.
I am can take his constructive criticism, and we often meet halfway. When he receives constructive criticism he throws a tantrum and I let him get his own way, as he wore me down. I now can’t be myself around him and find myself editing what I say to avoid tantrums.
He’s just lost out on a great venture.
Sadly I have no confidence in his ability to change, as of course if I give him ‘constructive criticism’ about his inability to take criticism… you can guess what will happen… he’ll get aggressive again. Oh dear!
Alex CameronMay 9, 2013 3:35 pm
Jennie (Oct 7) contribution and many others converge around the question ‘are designers artists?’ As you may all well know it is a question that is as old as the practice of graphic design itself. Like those of you above I too think this is a crucial question to get right, as an industry, in order to illuminate the centrality of graphic design in shaping wider cultural questions to a broader audience. I recently read Deyan Sudjic’s ‘The Language of Things’. In it he dedicates a chapter to this very question, coming down on the side of the answer ‘NO’ we are not artists. This question will continue to divide in part because as an industry we feel we have yet to earn our keep as a purposeful activity in the new – creative – economy. And that artists have for some time now, ‘used’ the language of graphic design and advertising in their artworks adds more credit to the idea of designers as artists.
Anders SundstedtFebruary 23, 2014 7:09 pm
Thanks for a great article. It’s certainly a learning curve how to respond in the best way to design criticism and that’s a fantastic article and I love the pictures too. When I create animated videos and illustrations for my freelance website I will be coming back to this article when I get criticism (positive or negative) for my work.
Thank you! | http://www.smashingmagazine.com/2009/10/01/how-to-respond-effectively-to-design-criticism/ | CC-MAIN-2014-15 | refinedweb | 7,549 | 70.43 |
Handling configuration in Go
The configuration is key especially when your application starts to get a little bigger. You might need to provide parameters such as keys, passwords, URLs etc to your go application.
If any of the parameters are sensitive (such as a database password), it should not be kept in your config file which is usually committed and kept in plain text. Sensitive parameters should be served as runtime env variables.
DB_PASS="db_pass" go run main.go
Access your env variables.
fmt.Println("DB_PASS:", os.Getenv("DB_PASS"))
Using TOML for configuration
I usually like to use TOML for my Go application configuration. TOML is very easy to use and works really well with Go. Here is an example TOML file I might use. You can see the SSL cert and key file paths that I am passing to my application.
api_addr = ":3000"
[certs]
crt_file = "main.crt"
key_file = "main.key"
I think it is easier to keep two config files, one for development and one for production. Let’s just name them “config.dev.toml” and “config.toml”. This way, we would be able to separate our development configuration from our production configuration.
We would have to parse the TOML file in order to access our configuration variables. I use github.com/BurntSushi/toml to parse TOML in all my go applications.
package main
import (
"log"
"github.com/BurntSushi/toml"
)
type certsConfig struct {
CrtFile string `toml:"crt_file"`
KeyFile string `toml:"key_file"`
}
type tomlConfig struct {
APIAddr string `toml:"api_addr"`
Certs certsConfig `toml:"certs"`
}
var (
config tomlConfig
configPath string
)
func loadConfig() {
if _, err := toml.DecodeFile(configPath, &config); err != nil {
log.Fatalln("Reading config failed", err)
}
}
Since we have two config files (one for development and one for production), we can pass which one we would like our application to use as a flag when starting the application.
I prefer parsing flags in the init function of the application. As you can see below, the default config file is the development config file.
package main
import (
"flag"
"log"
)
func init() {
// Path to config file can be passed in.
flag.StringVar(&configPath, "config", "config.dev.toml", "Path to config file")
flag.Parse()
loadConfig()
}
func main() {
log.Println("Application is running at", config.APIAddr)
}
And in order to run the application in production with the production config, pass the config path as a flag.
DB_PASS="db_pass" go run main.go --config="config.toml"
That’s pretty much it. Same idea can be applied with YAML and even with json as the config file type. But I am pretty happy with TOML.
Same story, better code highlighting: | https://koraygocmen.medium.com/handling-configuration-in-go-22393a8f5b6d | CC-MAIN-2021-21 | refinedweb | 433 | 60.41 |
Created on 2012-01-11 07:50 by fmitha, last changed 2012-01-23 23:16 by lukasz.langa. This issue is now closed.
I have not experienced this myself, but see
This appears to be another case of
I also recently reported a similar problem at
Looking at the code for `NoOptionError` (see below) in 2.6 at least it would indeed be subject to the same problem. Perhaps some attention could be paid to
resolving 1692335, which would presumably make this vexing problem go away?
class NoOptionError(Error):
"""A requested option was not found."""
def __init__(self, option, section):
Error.__init__(self, "No option %r in section: %r" %
(option, section))
self.option = option
self.section = section
New changeset e63e2471f46f by Łukasz Langa in branch 'default':
#13760: picklability tests for configparser exceptions
New changeset 76077971ee1f by Łukasz Langa in branch '3.2':
#13760: picklability tests for configparser exceptions
New changeset 5ecf650ede7c by Łukasz Langa in branch '2.7':
Fixes #13760: picklability of ConfigParser exceptions
3.2 and 3.3 already worked as expected. For 2.7 I did the __reduce__ workaround that's also used by SQLAlchemy.
Thanks for the quick attention to this, Lukasz. I'm just curious. Why do 3.2 and 3.3 already work? My understanding was that the basic exception issue in was still open.
The reason why 3.2 and 3.3 work is that some time ago I fixed #4686 by explicitly settings .args in configparser exceptions. Unfortunately that patch was not backported at the time. I did that just now.
You're right, that is specific to configparser and doesn't affect #1692335 which is still unsolved.
I see. Thanks for the pointer to the earlier (2008) bug report. I notice you fixed the bug differently for 2.7 (define __reduce__) and 3.2 (set args). Is there some reason for that?
Currently both branches use the same solution to the problem (e.g. setting `args` in `__init__()`). I simply forgot about the old ticket when I prepared a fix for this one. | http://bugs.python.org/issue13760 | CC-MAIN-2013-20 | refinedweb | 342 | 71.1 |
Introduction
With the integration of the CLR with SQL Server 2005, we can create database objects using modern object-oriented languages like VB.NET and C#. In fact, for pure non-data access code like computation, string parsing logic etc., we should use .NET for writing the SQL server objects. It is also advisable to use managed code for writing the store procedure. Also for accessing webservices, exploring OOP's programming for better reusability and read external files, it is good to used managed store procedure.
This article is trying to explain the simple and required steps that are require starting the creation of Manage Stored Procedure using C# and using them.
The Project
We will create a Visual Studio 2005 database project for the managed stored procedure.
Creating the Database project:
Open Microsoft Visual Studio 2005 and create a SQL Server Project.
File->New->Project->Database
Adding a database reference:
Now it will ask for a database reference. Add one.
Adding a Stored Procedure:
Write click on the Project and add a stored procedure.
The file SPOne.cs:
Past following lines to the file SPOne.cs. Make sure that the table Person exist on your database or give the name of your table instead of Person.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
public partial class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void SPOne()
{
SqlPipe p;
SqlCommand sCmd = new SqlCommand();
sCmd.CommandText = "Select * from Person";
p = SqlContext.Pipe;
p.ExecuteAndSend(sCmd);
}
};
Deploy the Stored Procedure:
Build the project and then deploy it.
Run the Stored Procedure:
Make sure the CLR is enabled with your SQL Server by running the following SQL.
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;GO
Now execute the Stored Procedure and you will get an output of select statement.
Make your Life follows Procedures and Stored them safely! If possible, manage them!!!!
Writing Managed Stored Procedures using C#
Lock, Logoff, Reboot, Shutdown, Hibernate, Standby in .Net
It helps as Basic introduction about how to use stored procedures in .Net
"Make your Life follows Procedures and Stored them safely! If possible, manage them!!!!"
Good quotes......
It;ll be highly helpful if the implementation is shown within an application(use and how to call within the application).
Thanks.
Nice but very short...... | http://www.c-sharpcorner.com/UploadFile/pk_khuman/ManagedStoredProceduresUsingCSharp02182007232059PM/ManagedStoredProceduresUsingCSharp.aspx | crawl-003 | refinedweb | 385 | 60.41 |
05 April 2013 15:35 [Source: ICIS news]
LONDON (ICIS)-Indian Potash Limited (IPL) has announced a purchase tender requesting an unspecified quantity of urea, the company said on Friday.
This is the first import tender to be issued by the country in 2013. It will close on 12 April with offers requested to remain valid until 15 April.
IPL has requested the urea to be delivered to India by 31 May.?xml:namespace>
Indian tenders are globally significant as awards commonly range from 300,000 tonnes to more than 1m tonnes.
The general expectation is that Iranian product will feature heavily under this tender, while urea from Yuzhny and other regions may also be offered.
Urea prices have been under pressure over the last month because of an oversupply situation, and lack of demand due to weather-delayed seasons in the US and Europe.
India is one of the largest importers of urea, and imported nearly 8m tonnes of the fertilizer in the previous fiscal | http://www.icis.com/Articles/2013/04/05/9656493/Indian-Potash-Limited-issues-urea-import-tender.html | CC-MAIN-2014-52 | refinedweb | 166 | 59.64 |
In the previous article, we have discussed about C++ Programming Tutorial for Beginners. Let us learn About Multithreading Tutorial PDF in C++ Program.
Excited to learn more about Multithreading in C++? Viewing BTech Geeks best & free online C++11 Multithreading Tutorial can be your savior in providing better learnings and great knowledge. The support of multithreading was initiated in C++11. Earlier, POSIX threads or p threads library in C is used to call threads.
Get into this ultimate online tutorial of C++11 Multi-thread and check out the information that you require like definitions, functions, methods, working of <thread>, etc. with Examples.
- Concepts in C++11 Multithreading Tutorial
- C++11 Threads FAQ Topics List
- What is Thread?
- What is Multithreading in C++?
- Working of Thread in C++11
- C++ Thread Class Example
- Different Ways of Creating a Thread in C++
- Uses of Multi-threading in C++11
- Top 10 C++11 Multithreading Interview Questions List
Concepts in C++11 Multithreading Tutorial
Firstly, before understanding the basics about C++11 Multi-thread, you guys should aware of the topics included in this C++11 Multithreading Tutorial for a quick reference. Just take a glance at the direct links furnished below and get in-depth knowledge regarding std::thread in C++.
- Part 1: Three Ways to Create Threads
- Part 2: Joining and Detaching Threads
- Part 3: Passing Arguments to Threads
- Part 4: Sharing Data & Race Conditions
- Part 5: Fixing Race Conditions using mutex
- Part 6: Need of Event Handling
- Part 7: Condition Variables
- Part 8: std::future and std::promise
- Part 9: std::async Tutorial & Example
- Part 10: std::packaged_task<> Tutorial
C++11 Threads FAQ Topics List
- C++11: Start thread by the member function
- C++11: How to put a thread to sleep
- C++11: How to get a Thread ID?
- C++11: Vector of Thread Objects
- C++11: std::thread as a member variable in class
- C++11: How to Stop a Thread
After going through the above links, you’ll definitely retain all core C++11 multithreading concepts. But now, we will be discussing some fundamentals and major information like What is Thread, Uses of Multithreading, Different ways of launching threads, and most importantly quiz and interview questions on Multithreading in C++11.
Do Check Related C++ Tutorials:
What is Thread?
A thread is a Class that represents individual threads of execution. Every single thread shares memory, file descriptors, and diverse system resources. Actually, earlier in Linux, all thread functions are stated in <pthread.h> header file but it is unavailable in standard C++ programming.
What is Multithreading in C++?
A specialized form of multitasking that accepts your computer to work two or more programs concurrently is known as Multithreading. Basically, multitasking is divided into two types. They are process-based and thread-based.
In C++, a Multi-threaded program includes two or more parts that execute concurrently. All limitations that are covered in the prior threads library in C are defeated with this std::thread. Related Classes and Functions of thread are defined in the thread header file.
Working of <thread> in C++11
std::thread is the thread class that provides a single thread in C++. For working of thread, we have to create a new thread object & then the executing code is passed and it will be called (ie., a callable object) into the constructor of the object. After creating an object a new thread is initiated that will execute the code stated in callable.
Syntax:
#include<thread> std::thread thread_object(callable)
C++ Thread Class Example
The below-illustrated example is on how we can create a simple HelloWorld program with threads:
#include <iostream> #include <thread> //This function will be called from a thread void call_from_thread() { std::cout << "Hello, World" << std::endl; } int main() { //Launch a thread std::thread t1(call_from_thread); //Join the thread with the main thread t1.join(); return 0; }
Different Ways of Creating a Thread in C++
Basically, there are four ways of launching a thread in C++ and they are as such:
- Launching a thread using a function pointer
- Launching a thread using a function object
- Launching a thread using a lambda
- Launching a thread using a member function
From the below multithreading in C++ program, you will observe creating the three threads from the main function by using the three main callable objects that are listed above:
// CPP program to demonstrate multithreading // using three different callables. #include <iostream> #include <thread> using namespace std; // A dummy function void foo(int Z) { for (int i = 0; i < Z; i++) { cout << "Thread using function" " pointer as callable\n"; } } // A callable object class thread_obj { public: void operator()(int x) { for (int i = 0; i < x; i++) cout << "Thread using function" " object as callable\n"; } }; int main() { cout << "Threads 1 and 2 and 3 " "operating independently" << endl; // This thread is launched by using // function pointer as callable thread th1(foo, 3); // This thread is launched by using // function object as callable thread th2(thread_obj(), 3); // Define a Lambda Expression auto f = [](int x) { for (int i = 0; i < x; i++) cout << "Thread using lambda" " expression as callable\n"; }; // This thread is launched by using // lamda expression as callable thread th3(f, 3); // Wait for the threads to finish // Wait for thread t1 to finish th1.join(); // Wait for thread t2 to finish th2.join(); // Wait for thread t3 to finish th3.join(); return 0; }
Result:
Uses of Multi-threading in C++11
A multithreading environment lets you drive many activities concurrently, where diverse threads are responsible for diverse activities. You can explore different uses of multi-threading in C++ while learning or coding on your own. But for now, we are going to see some of them below. They are as follows:
- Better resource utilization.
- More responsive programs.
- Simpler program design.
Top 10 C++11 Multithreading Interview Questions List
The list of 10 best and most common interview questions on multithreading in C++11 is prevailing here for all freshers who are preparing & appearing for the software jobs in the C++ Programming language.
- What is multithreading?
- What are the ways to create a thread in C++?
- Brief me about the available models in Multithreading?
- What is C++11 thread local storage (thread_local)?
- What is the difference between a thread and a process?
- Name the design pattern for the thread?
- What are the 6 synchronizations primitive available in Multithreading?
- What is a thread pool?
- What is thread starvation?
- How can you create background tasks with C++11 threads? | https://btechgeeks.com/cpp11-multithreading-tutorial/ | CC-MAIN-2021-43 | refinedweb | 1,083 | 60.04 |
This article will be for all the first year’s students and those who want to get started with programming. Because starting can be a very scary task I will keep it simple, but once you get the basics down in one language you can learn other programming languages a lot easier. Now let’s start by installing Python.
Installing Python (3) isn't hard at all, just follow the instructions on Python's official website. In my case it was just an .exe I had to install.
Let's run cmd on our machine and enter py --version in it to check if the installation was successful. (This might differ on Mac or Linux.)
Now that Python is installed and ready we will create our first Python application. A “Hello, world!” application, as is tradition in the wonderful world of information technology.
Hello World!
You can run simple Python programs directly into your ‘console’, or ‘terminal’. The most simple program you can run is a Hello-world application. This is a program that outputs “Hello, world!” in the console.
The syntax for writing to the console, or ‘printing’ is:
print(<your text>)
Where you would write your text between the brackets (). Text in Python, and in most other programming languages, must be written between quotes "". This means that our first program will look like this:
print("Hello, world")
But how do we run it? Well, since it is our very first program, we will run it in our terminal. Open the terminal, type in python and press enter. This will open the Python interpreter.
Next you can enter the print statement and press enter. You will see that our input, our string between quotes, is printed in the console.
And now, you have officially written your first program and are a Python programmer.
But you have most likely seen programs that are running from files, and that are written in files. So let’s take a look at how that’s done.
IDEs
An IDE is a program in which you develop code. It offers lots of useful tools such as code completion, type checking, error checking and much more. You can forget about these for now, and writing code in Notepad also works, but that is not the best way to go, so I will list a few free IDEs that you can use that support Python.
I will use Visual Studio Code in this article. It is made by Microsoft and offers a ton of plugins for all sorts of programming languages. PyCharm is also very good since it offers more functionalities specifically for Python.
Running from a file
Install the IDE of your choice and install any optional plugins for Python if you’d like.
Next up, create a folder to program in and make a file called prog.py. The prog part can be anything, as long as it's a .py file. Now open the file in your IDE and write the code we wrote before in it.
If you save the file and open the terminal in the same folder, you can run the py <filename> command to run the new Python file.
In Visual Studio Code you can click Terminal in the top bar, and click New terminal to open a new terminal in the correct directory.
Variables
A variable is a value that can be variable. It can be anything you want, in theory: a number, a text string, a list of numbers, everything. And they are a very essential part of programming. Can you imagine the variables that Medium uses? Usernames, images, whole articles, they can all be variables.
A variable in Python is declared like this:
<variable-name> = <value>. This means that we can declare a name, name = "John", or a number, age = 17, or anything else in a very easy way. But how do we use them? Let's start with the name.
name = "John"
print(name)
This code will print whatever is the value of the variable name. In this case "John". But what if we want to politely greet John? We need to concatenate the strings. This is done by using the + sign. We concatenate strings like so.
<string-1> + <string-2>
So then our code will look like this if we add "Hello " in front.
name = "John"
print("Hello " + name)
Keep in mind that name is a variable, and therefore doesn't need any quotes surrounding it. If it were not a variable, it would be print("Hello " + "John").
Basic Types
Variables can have a whole range of types. The most important for now are the string, number and boolean types. We've already used the string type, which is the text between quotes.
The number type, like you'd expect, is a number. This can be any number, positive or negative, including floating point values. Working with numbers is very easy. We've already used the + sign by adding strings to each other, but if we use it with numbers it will add them together and return the outcome.
Next up we have the boolean type. This can be either True or False in Python. Note the capitalisation. A boolean can also be the result of an expression, something like x > y, which will return True if x is more than y. Try it out!
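To make this concrete, here is a small sketch of all three types in one place. The variable names and values are just examples:

```python
name = "John"        # a string
age = 17             # a number
adult = age >= 18    # a boolean: the result of a comparison

print(name)     # John
print(age + 1)  # 18
print(adult)    # False, because 17 is not greater than or equal to 18
```

Try changing the values and see how the boolean flips between True and False.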
Inputs
While greeting the same person over and over again is very fun to do, and I remember when I started programming I was stoked when I was first using variables, we need to add some sort of input.
Thankfully the input function in Python is easy and very straightforward to use. The function prompts the user to enter a value and returns it as a variable if used like this:
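A minimal sketch of that usage (the prompt text and variable name are just examples) would be:

```python
# Ask the user a question; whatever they type is returned as a string
name = input("What is your name? ")
print("Hello " + name)
```

Run it and type your name when the program pauses and waits for you.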
A function usually has trailing brackets. In between these brackets you can put parameters, which are variables the function can use. In the case of input, it accepts a string parameter which it will print before asking for input.
I entered “John” by typing on my keyboard. If you understand this you might be better than a few of the first-year students I had to work with, and it might be time for you to take a minute to breathe, walk and grab some coffee since this is just the beginning.
We have now explored very basic input and output in our Python program using the input and print functions.
Conditional code
The next step most courses will teach you is how to write conditional code. This means that we have some condition and change the flow of our program accordingly. Most (if not all) programming languages do this using if, else and else-if statements.
The basic syntax in Python is very simple.
if <statement>:
pass
else:
pass
If you don't need an else for some reason you can simply write the if on its own.
pass means "do nothing" and lets the code continue; Python doesn't accept empty if bodies, so we use pass as an empty placeholder statement.
Also note the indentation. The bodies of the if and else are written with a <tab> before them. This matters in Python and the code will not run otherwise. It also increases readability when working with multiple if-statements nested in each other.
Let’s try and add an if statement that checks if a number is more than zero.
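A minimal sketch of such a check could look like this (num and the message texts are just examples):

```python
num = 5

if num > 0:
    message = "The number is positive"
else:
    message = "The number is negative"

print(message)  # The number is positive
```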
Run this code and try to understand what is happening. What will happen if you set num to a negative value, num = -3 for example? Or if it's set to 0, or maybe to a string or boolean value?
But for some, 0 isn't positive nor negative, so we want to add the option for 0 being "Zero". This can be done using another if or using a so-called elif in Python. elif is Python's way of saying else if. This is an additional if that you can chain together to create more complex programs. Let's try and add it to our number checker.
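Based on the description of the checks, the extended checker could be sketched like this:

```python
num = 0

if num == 0:
    message = "Zero"
elif num > 0:
    message = "The number is positive"
else:
    message = "The number is negative"

print(message)  # Zero
```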
We have our first check, checking if num is equal to zero. If that check returns False, it will continue. And if the number isn't greater than 0 either, it will continue to the last branch. So the order in which you write these matters.
You can also check the length of a string in characters within ifs, and I'll show that next. This uses a built-in function in the string type in Python: the __len__() function.
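A small sketch of such a length check (the string value is just an example):

```python
name = "John"

if name.__len__() > 0:
    print("The name is not empty")

# The more common, idiomatic way to get a length is the built-in len() function:
print(len(name))  # 4
```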
Note that, like input, the __len__ function also has trailing parentheses. These are required to call a function. Leaving the parentheses out will return the function itself, not the result the function gives. You can try printing it and it will return a memory address of the function body, but that isn't very useful for us now.
Exercise 1
Try and create an if statement that checks the length of a variable that is a string type. It has to check on three different lengths: more than 0, less than 5 and more than 5. Remember to check the order of the checks and test what happens if they aren't in the correct order.
Defining Functions
Functions are an essential part of programming. So far we have only used built-in functions, but we can also create our own. We define these using the def keyword in Python. Let's make a function that can greet the user if he gives his name.
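A minimal sketch of such a greeting function (the names are just examples):

```python
def greet(name):
    # The parameter 'name' is available inside the function body
    print("Hello " + name)

greet("John")  # prints: Hello John
```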
You can also define functions that do not take any parameters. This function takes the parameter name, and we use it to greet the user by his name. Try and create a few functions of your own that print various inputs or sums, or take multiple parameters. Note the indentation we use.
You can expand on this and create more complex calculations. Try playing with numbers and the *, /, + and - signs to create complex calculations.
Return
The return keyword is very important in functions. Instead of printing a value, it will return the value to whatever called the function, like the __len__() function did when it returned the length of the string as a value.
This is very useful for using functions. You can use the value as a parameter by calling the function if you want.
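A sketch that matches the chained calls described below (note that naming the function sum shadows Python's built-in sum, which is fine for this little demo):

```python
def sum(a, b):
    # return sends the result back to whoever called the function
    return a + b

# The inner call runs first; its result becomes a parameter of the outer call
print(sum(sum(3, 3), 4))  # prints 10
```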
This will calculate sum(3, 3) first, and then it will calculate the outer function. The possibilities are endless. Try and understand how this works; it can be hard, but it is very important to understand, so try it yourself.
We also use a comment here. A line that is prefixed with a # is a comment. Python will not do anything with a commented line, and they can be used to increase readability, which is good practice.
Loops
Looping is also a very important thing in programming. Keep doing X while condition Y is True, we have all seen it before. There are multiple types of loops: a for loop and a while loop. You can translate these to plain English if you'd like. For every item in this object, do X. Or while condition X is True, do Y.
A while-loop is always a course favorite to learn first, and I cannot really disagree. So let’s see the basic syntax.
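A sketch of the counting loop described below (the starting value is just an example):

```python
num = 1

while num <= 10:
    print(num)
    num = num + 1  # increase the number by one
```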
In plain English: “While num is less than or equal to 10, print the number and increase the number by one”. If the condition will never be False the loop will continue to run until you close the application or set the condition to False somehow. This can be useful for certain programs, so let’s take a look.
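Such an intentionally endless loop could be sketched like this (the prompt text is just an example):

```python
running = True

while running:
    text = input("Enter a command (type 'quit' to stop): ")
    if text == "quit":
        running = False  # setting the condition to False ends the loop
```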
This loop will always go on and on and on until the user enters "quit". Try and run this code. Note the indentation, which gets deeper, and try to understand what happens. Our code is getting more complex and I hope you're still following along nicely.
The for-loop is used for iterating over certain objects. These can be lists, strings, ranges or other values. We will try it out using the range(x, y) function in Python. It will create a list with values from the first parameter to the second. We can use the for-loop to print all the values in the range.
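A minimal sketch of that loop:

```python
# Print every number in the range 1 up to (but not including) 11
for number in range(1, 11):
    print(number)
```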
This will print all numbers starting at 1 up to 11, excluding 11. So it will print 1 through 10. The syntax is similar to foreach loops in other programming languages.
Lists
A range is a type of list, also called an array. A list can hold a series of values that are declared between square brackets and separated with commas. And like with the range, we can iterate over them using the for loop.
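A small sketch of iterating over a list (the fruit names are just examples):

```python
fruits = ["apple", "banana", "cherry"]

for fruit in fruits:
    print(fruit)
```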
Here we declare a list of items, in this case fruits. In the for loop we iterate over them and print each one. Try it out yourself! Change the strings to numbers, or mix strings and numbers in the list; it is all possible.
Accessing a specific item in a list is also possible without iterating over it.
These list items are retrieved with square brackets and a value in between: the index of the item you want, which starts at 0. So 0 is the first item, 1 is the second item, and so on. The value inside the square brackets can also be a variable, like the counter in a while-loop.
You can change specific items using this syntax as well. Try it yourself.
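Putting the indexing described above into a short sketch (values are illustrative):

```python
fruits = ["apple", "banana", "cherry"]
print(fruits[0])     # index 0 is the first item: "apple"
print(fruits[1])     # index 1 is the second item: "banana"
fruits[1] = "mango"  # the same syntax changes a specific item
print(fruits)        # the list is now ["apple", "mango", "cherry"]
```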
Exercise 2
Try to create a list with some values and iterate over it using a while-loop. Then you will see how to iterate over something without a for-loop.
Tip: use
len(<your list>). This function returns the length of a list.
Dictionaries
Dictionaries are one of the more complex data types in Python. They consist of key-value pairs. In these examples the keys are strings, written between quotes, and the keys in a dictionary must be unique.
A car cannot have two model names, or two years.
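A sketch of a dictionary describing a car, including how a value is retrieved by its key (the values are illustrative):

```python
# Key-value pairs: each key appears only once
car = {
    "brand": "Ford",
    "model": "Mustang",
    "year": 1964,
}
# Square brackets with the key's name retrieve the value, like a list index
print(car["model"])  # prints "Mustang"
```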
Retrieving the value for a key is very similar to getting a value from a list: using square brackets with the name of the key, we retrieve the value stored under that key.
A key can hold any value: a string, a number, a list, another dictionary, or a list of dictionaries. This is very useful for creating complex data structures, and these inner values can also be retrieved. Let's see how it works.
We define a dict
person with a key for
children . The
children key contains a list of dictionaries, one per child. With this you could build complex family trees if you wanted to.
See how we retrieve the value of the first child? We chain the square brackets with the keys one after the other, and since
children is a list, we retrieve the first entry using
[0] .
Exercise 3
Create a function that, when called, retrieves a specific list item inside a dictionary. The function should have one parameter: the index of the list item it should return. If the number exceeds the number of available list items, it should return an error in the form of a string.
Lambda Functions
Functions can have several parameters, as we've seen so far. We have made functions that take numbers or strings as parameters. Lists and dictionaries are also possible, but what about functions themselves?
Functions that return a function or receive one as a parameter are called higher-order functions. You can create these to, for example, build modular functions that share functionality.
In Python, the functions we return are often anonymous: they don't have a name (yet). Another word for an anonymous function is a lambda function, and in Python we use the
lambda keyword to define one.
Let’s look at the following code.
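A sketch of the higher-order function the next paragraph walks through; the names func, doubler, and quadrupler follow the text:

```python
def func(n):
    # Return an anonymous (lambda) function that multiplies its argument by n
    return lambda a: a * n

doubler = func(2)     # the returned function computes a * 2
quadrupler = func(4)  # the returned function computes a * 4
print(doubler(5))     # prints 10
print(quadrupler(5))  # prints 20
```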
We define one function
func that returns a lambda function. This lambda function takes one parameter
a and multiplies it by
n .
We get
n as a parameter from
func , so the function we return is
a * 2 in the case of the
doubler and
a * 4 for the
quadrupler .
This means that we can create any multiplication function using the
func function. We can also pass a lambda function as a parameter to a regular function.
This might seem very weird to use, and I agree: it is weird at first. However, lambda functions can be useful when working with recursion or when filtering and mapping lists. Let's filter a list using a lambda to get the point of lambdas across a bit better.
This code is a bit messy, since filtering in Python returns a filter object which you have to convert back to a list, but oh well.
We create a lambda with one parameter. This parameter is filled automatically for every list item by the
filter() function. The lambda returns either
True or
False depending on whether the list item is greater than 6.
While the concept of lambda functions might still be a bit blurry, we will move on to the next topic. Let it sink in, and play around with it to understand it better.
Classes
Almost everything in Python is an object with its own properties and methods. A class is an object constructor, or a "blueprint" for creating objects.
We can create a Person class to create person objects from, or we can use the classic example: books in a library. A class is made with the
class keyword.
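A minimal sketch of a class without a constructor; the titles and names are illustrative, and both objects share the same values:

```python
class Book:
    title = "Python Basics"
    author = "Jane Doe"

# Create book objects by calling the class like a function
book1 = Book()
book2 = Book()
print(book1.title)  # dot-notation retrieves a value: "Python Basics"
print(book2.title)  # the same value as book1
```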
We define two books here, but both have the same values. We create a book object by calling the class as a function, and we can retrieve values using the dot-notation.
A class is not very useful in most cases without a constructor. A constructor is a function that initializes an object with values you can set during construction. A constructor can have parameters, like regular functions, to create more dynamic objects from a class.
The
__init__() function takes at least one parameter:
self . The
self parameter refers to the new object itself and doesn't need to be passed when calling the constructor to create new objects. As you can see, we still call
Book(params) with the two parameters, which you can refer to in the
__init__ function, where we set
self.title to the parameter
_title , and so on.
A class can contain any type of variable, as you would expect. Some can be filled in by the constructor, while others may already have a default value. You can change these values at runtime if you like.
Here we set the title of book1 to a new title after the object has been created. This allows for some intricate programming.
Exercise 4
Create an empty list of books. Create a function that adds a book to the list with given parameters. The book has to be an object made from a class using the
__init__ function to specify the values of the book.
You can add new values to a list using
list.append(<obj>) , where
<obj> is the value to add to the list.
Then create a function outside of the class that prints all the titles of the books using a loop of your choice. The function should receive the books via a parameter.
Conclusion
This article is a course on the basics of programming in Python, made for those with no experience yet. I really hope I made a difference in your studies, whether for school or for hobby projects.
W3Schools has a lot of easy-to-use documentation about Python with which you can expand your knowledge.
Have a great time programming, make something great, and have a wonderful day. Thank you for reading. | https://mbvissers.medium.com/lets-learn-programming-in-python-b12fbabb6ba4?source=post_page-----b12fbabb6ba4-------------------------------- | CC-MAIN-2021-25 | refinedweb | 3,345 | 75.4 |
1) In nextstep/README, it states MacOSX supported are 10.4 and up. I suppose this is official. (It must be) 2) Considering 1), some code that checks MAC_OS_X_VERSION > 1030 or < 1040 or even < 1020 seems not necessary anymore. 3) For same reason, is MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_4 check necessary anymore? 4) MAC_OS_X_VERSION_10_x (3, 4, 5, 6 at present) are defined in nsterm.h and some other files. Is it necessary? Is it enough as nsgui.h does? (as below) ,---- | #ifdef NS_IMPL_COCOA | #include <AvailabilityMacros.h> | #endif /* NS_IMPL_COCOA */ `---- 5) In nsimage.m, it has 'onTiger' check (defined in nsterm.h). If official support version is 10.4 (Tiger) and up, this seems not necessary anymore. | https://lists.gnu.org/archive/html/emacs-devel/2011-10/msg00815.html | CC-MAIN-2022-21 | refinedweb | 112 | 72.42 |
Chris Oliver's Weblog Programming 2008-05-14T20:52:06-07:00 Apache Roller Weblogger Why JavaFX? chrisoliver 2008-05-13T11:11:40-07:00 2008-05-13T11:11:40-07:00 Our goal with JavaFX is to deliver a "media" stack for the Java platform. What does that mean? Well, in simple terms, 5 things: <ul> <li>Audio</li> <li>Video</li> <li>2D Graphics</li> <li>3D Graphics</li> <li>Animation</li> </ul> <p> We've been working on a new graphics stack for JavaFX, consisting of an integrated 2D+3D JavaFX script scene graph based on a hardware accelerated shader-based immediate mode Java API which fits seamlessly into the <a href=''>OpenKode platform</a> <a href=''>here</a>. <p>: <p> <img src='/chrisoliver/resource/cloud.png'/> <p> Java on the NVidia APX 2500 chrisoliver 2008-05-13T11:04:21-07:00 2008-05-14T16:42:36-07:00 Ken Russell describes the <a href=''>experience</a>. This device kicks the IPhone's butt all over the place. <p> NVidia has created a beautiful platform, supporting a full media stack - OpenGL ES-2.0, OpenMAX-IL, OpenVG. In addition, they've created their own 3D compositing window manager, which they're contributing to Khronos as part of <a href=''>OpenKode</a>. <p> <img width='600px' height='450px' src='/chrisoliver/resource/nvidia_apx_2500/nvidia_apx_2500_med.jpg'></img> <p> One thing that I find interesting is how easily the CDC/FP JRE + JOGL ported (note this does not include the AWT, Java2D, Swing etc - which is no coincidence). <p> JavaOne 2008 chrisoliver 2008-05-07T09:54:41-07:00 2008-05-07T09:55:29-07:00 <img src='/chrisoliver/resource/javaone/moontank.png'/> <p> We presented our JavaFX keynote demos yesterday, including <b style='font-size:50;'>Moontank</b><br> A true 3D, multiplayer game written in JavaFX script, which includes spatialized audio via OpenAL. Moontank was designed by Anthony Rogers of <a href=''>Tomato design fame</a>.. </p> <p> Those at JavaOne can come by the JavaFX Pod and play it yourself. 
</p> Key-Frame Animation chrisoliver 2007-12-20T10:22:38-08:00 2007-12-20T15:14:36-08:00 It's unfortunate that OpenJFX currently isn't a real open-source project. As such, it gives the appearance that progress isn't being made with JavaFX Script. Nevertheless, evolution has occurred, albeit internally. <p>. . <p> Given this input the system can automatically perform the animation, stop it, pause it, resume it, reverse it, or repeat it when requested. Thus it is a "declarative" animation technique. <p> Programmatically such "snapshots" consist of specifying the values for JavaFX variables - local variables and instance variables. <p> Here are some examples: <ol> <li> <a href=''>Simple</a> A very simple example, which demonstrates simultaneous animation of the rotation, scale, and fill color of a rectangle. </li <li> <a href=''>Ball</a> Consists of three concurrent repeating animations: one for the x coordinate of the ball, one which animates the y coordinate, and the one which animates the scale to create a "bounce" effect. See line 65 for the animations. </li> <li> <a href=''>Motomaxxve<. </li> <li> <a href=''>EG</a> Consists of several animations (fade, slide): one of which is a kind of ticker-tape view of the presenters at the 2007 "Entertainment Gathering" at the Getty Museum. The others support a slide show view. </li> <li> <a href=''>Poker</a> Video poker game. Consists of two animations: 1 for the deal, and 1 for the draw, each of which animates the bet, turning over the cards, and scoring (if you have a winning hand). </li> </ol> <p> You'll want to take a look at the examples for reference as you read the below, I think. <p> <b> Time Literals </b> <p> Since time is fundamental to animation, I've added time "literals" to the language. 
These are instances of the following primitive class: <p> <pre>; } </pre> <p> Numeric Literals with one of the suffixes (h, m, s, ms) are interpreted as Time literals, e.g: <p> <pre> 2s == Time {seconds: 2}; // true </pre> Relational operators and arithmetic operators other than % are overloaded for Time types: <p> <pre> 2.5s < 5000ms; // true 2.5s * 3 == 7.5s; // true </pre> <p> <b> Timelines </b> <p>. <p> Timelines are instances of the following class: <p> <pre> public class Timeline { public attribute keyFrames: KeyFrame*; public attribute repeatCount: Number; public attribute autoReverse: Boolean; public operation start(); public operation stop(); public operation pause(); public operation resume(); public operation reverseNow(); public attribute position: Number?; // unit time interval } </pre> <p> where KeyFrame is: <pre> public class KeyFrame { public attribute time: Time; public attribute keyValues: InterpolatedLValue*; public attribute action: function()?; public attribute timelines: TimeLine*; // included timelines } </pre> <p> InterpolatedLValue has the following definition: <pre> public class InterpolatedValue { public attribute value: Object; public attribute interpolate: function(values:Object*, unitTimeInterval:Number):Object?; } public class InterpolatedLValue extends InterpolatedValue { public attribute target: &Object; } </pre> <p> InterpolatedValue describes a pair (<value>, <interpolation_function>). <p> InterpolatedLValue describes a triple (<target property>, <value>, <interpolation function>). <p> <b> Properties </b> <p>: <p> <pre> var x = 2; var px = &x; *px == 2; // true *px = 3; x == 3; // true </pre> Properties in JavaFX are pointers to local variables or object attributes. In cases where these are "sequences" a pointer to an element of such a sequence is also a valid property. 
<p> <b> Syntax </b> <p> Although KeyFrame animations are normal JavaFX objects, special syntax is provided to make it easier to express than is possible with the standard object-literal syntax. <p> The "tween" operator is a a literal constructor for InterpolatedValue. <pre> 100 tween LINEAR; </pre> is equivalent to <pre> InterpolatedValue { value: 100, interpolate: LINEAR:Number } </pre> The "=>" operator provides a literal constructor for a list of InterpolatedLValues: <pre> var x = 2; x => 100 tween LINEAR; </pre> is equivalent to the following <pre> var x = 2; InterpolatedLValue {target: &x, value: 100, interpolate: LINEAR:Number}; </pre> However, you can also apply "=>" to a whole set of object properties using an object-literal like notation rather than just single property or variable, for example like this: <pre> var rect = Rect {}; rect => {height: 400 tween EASEBOTH, width: 500, fill: blue tween LINEAR, clip: { shape: Rect => {height: 500, width: 600} }; </pre> The second line above is equivalent to this: <pre> }] </pre> Finally, the "at" and "after" operators are literal constructors of KeyFrame objects: <pre> var x = 2; var rect = Rect {...}; at (2s) { x => 2 tween LINEAR; rect => {width: 400 tween EASEBOTH, fill: red tween EASEBOTH}; trigger { println("at 2 seconds..."); } } after (5s) { x => 100 tween EASEBOTH; } </pre> The "trigger" clause allows you to associate an arbitrary callback with the key frame. <p> The time specified by "at" is relative to the start of the Timeline. The time specified by "after" is relative to the previous key frame. <p> The first example above is equivalent to: <pre> }] } </pre>: <pre>; } } </pre> In the above example, the "clip" timeline combines the other three animations. Playing "clip" will play all the animations simultaneously (yet still taking into account each ones individual repeat behavior). 
Compiled JavaFX Script chrisoliver 2007-11-15T18:19:56-08:00 2007-12-20T10:27:25-08:00 <p> The <a href=''>JavaFX script compiler</a> is still a work in progress, but here's an very basic implementation of the <a href=''>Bubblemark</a> application using it. <p> <img src='/chrisoliver/resource/compiled/ball.png'></img> <p> <a href='/chrisoliver/resource/compiled/balls.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> We're still working on implementing the full JavaFX script runtime with the compiler. The above version is just using a provisional runtime and is just animating an image and doesn't implement the "bubbles" with vector graphics. <p> However, even so, the footprint and performance is now on par with what you should expect from the eventual production quality JavaFX script implementation. <p> <p> Note: on some systems OS-level timer-resolution will limit your fps. E.g. you'll see a max of 64 fps for any of the bubblemark examples. </p> Bubblemark chrisoliver 2007-08-25T14:30:41-07:00 2007-08-25T14:31:21-07:00 This <a href=''>posting</a> says some nice things about JavaFX, but also points out that in its current incarnation its performance is lacking: <blockquote style='background-color:#f0f0f0'> <p>The first thing you’ll notice is that it’s very slow. The demo takes a very long time to load and start, and real time performance is disappointing. 
Here is how it compares:</p> <p>JavaFX — 14 fps<br /> Firefox + Silverlight (JavaScript) — 56 fps<br /> Firefox + Flex — 62 fps<br /> Adobe AIR — 62 fps<br /> Firefox + Silverlight (CLR) — <span style="text-decoration: line-through;">99</span> 202 fps (update: 202 fps after fixing main timer’s latency)</p> <p>(All tests were performed on Pentium M 1.7 machine under Windows XP SP2)</p> <p>It is 4.4x slower than Flash and <span style="text-decoration: line-through;">7x</span> 14x slower than Silverlight (CLR-based variant).</p> <p>On a positive side, development with JavaFX is fun, the language is very compact and efficient for building complex dynamic layouts. There is a great <a href="">converter to JavaFX from SVG</a> format and <a href="">JavaFXPad</a> is done nicely too. </p> <p>If only it were 5x faster! </p> </blockquote> <p> Of course, our intention is to provide hardware-accelerated vector graphics and compile JavaFX script to JVM bytecode getting the full benefits of the hotspot virtual machine, and to provide an improved deployment system, and we're working on that. 
<p> The Bubblemark performance described above is caused by <ul> <li>Download time - size of the deployment unit - The bubblemark JNLP doesn't use pack200 compression so the download of the JavaFX runtime + the app is ~2.1 MB</li> <li>Performance of the JavaFX interpreter in doing the collision detection - which is currently probably 50-100 times slower than doing it in Java</li> <li>Lack of caching or hardware acceleration of the vector graphics and gradients that make up the ball</li> </ul> Eventually, a production-quality JavaFX will automatically provide these benefits but in the meantime because JavaFX builds on Java, it's possible to get respectable performance for Bubblemark by making a few minor changes: <ol> <li>Use pack200 compression for the Jar files</li> <li>Do the collision detection in Java code</li> <li>Cache the vector graphic at the application level - since once created it's static in this application</li> </ol> <p> Below's a version of Bubblemark in JavaFX that does that. <p> <a href='/chrisoliver/resource/javafxballs.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> Here's a <a href='/chrisoliver/resource/javafxballs.jar'>link</a> to the source code Programming Animations in FX chrisoliver 2007-08-09T16:03:17-07:00 2007-08-09T19:27:51-07:00 <table> <tr><td><img src='/chrisoliver/resource/Poker.png'/></td> <td><img src='/chrisoliver/resource/Moto.png'/></td> </tr> </table>. <p> My goal is to make it possible for the programmer to specify the required behavior and still be able to actually understand what his code is doing. <p> Animations are fundamentally updates. They assign values to properties (or invoke processes that do such) at certain time instants. 
<p> The traditional animation loop <p> <pre> while (animationIsRunning) { // update stuff wait(interval); } </pre> <p> is easy to understand, but it has several drawbacks: <ol> <li> It requires a dedicated thread - introducing data synchronization issues</li> <li>It doesn't handle jitter - "update stuff" can take variable amounts of time at each iteration, making the animation look choppy.</li> <li>Since the time consumed by "update stuff" is included in the loop, the animation will exhibit different behavior based on the speed of the processor.</li> </ol> <p> FX provides a time iterated for loop which (I think) is as easy to understand but overcomes these limitations: <p> <pre> println("before the animation"); for (i in [1..10] dur 10000) { // update stuff } println("after the animation"); </pre> <p>; <p> <pre> Rect { fill: green height: 50 width: 50 var x = 0 var y = 0 x: bind x y: bind y onMouseClicked: operation(e) { for (i in [0..100] dur 1000) { x = i; } for (i in [0..100] dur 1000 { y = i; } } } </pre> <p> The above moves the rectangle from (0, 0) to (100, 0) over 1 second, and then moves it from (100, 0) to (100, 100) over 1 second, when you click the mouse on the rectangle. <p> Now, sometimes we want animations to run in parallel. For this case, a nearly identical construct is provided, namely the update trigger: <p> <pre> trigger on (i = [1..10] dur 10000) { // update stuff } println("after the trigger, but the animation is still running"); </pre> <p>: <p> <pre> Rect { fill: green height: 50 width: 50 var x = 0 var y = 0 x: bind x y: bind y onMouseClicked: operation(e) { trigger on (i = [0..100] dur 1000) { x = i; } trigger on (i = [0..100] dur 1000) { y = i; } } } </pre> <p> In this example, the animations of x and y occur in parallel and the the rectangle moves diagonally from (0, 0) to (100, 100) over 1 second. 
<p>: <p> <pre> Rect { fill: green height: 50 width: 50 var x = 0 var y = 0 x: bind x y: bind y onMouseClicked: operation(e) { trigger on (i = [0, 100] dur 1000 motion LINEAR) { x = i; } trigger on (i = [0, 100] dur 1000 motion LINEAR) { y = i; } } } </pre> <p> LINEAR is a predefined function that performs linear interpolation (you can plug in your own instead). The interpolation function is of the form <p> <pre> function<T> (inputValues:T*, elapsedTimeAsUnitInterval: Number): T </pre> <p> As a shorthand for animating individual properties a form of the the assignment statement is also provided: <p> <pre> x = [0, 100] dur 1000 motion LINEAR; y = [0, 100] dur 1000 motion LINEAR; </pre> <p> which is equivalent to the two triggers above in onMouseClicked(). <p> Finally, for the case where the for loop actually doesn't do anything: <p> <pre> for (i in [0, 1] dur 1000) { // don't do anything } </pre> <p> we have a shorthand called "wait" <p> <pre> wait(1000); </pre> <p> Below are a few test cases for these constructs: <p> <ol> <li> Video Poker <table cellspacing='0'> <tr><td> <a href='/chrisoliver/resource/poker_standalone.jnlp'>Standalone</a></td></tr> <tr><td> <a href='/chrisoliver/resource/poker.jnlp'>FXPad</a></td></tr> </table> </li> <li> Motorola Intro <table cellspacing='0'> <tr><td> <a href='/chrisoliver/resource/motorola_fx_standalone.jnlp'>Standalone</a></td></tr> <tr><td> <a href='/chrisoliver/resource/motorola_fx.jnlp'>FXPad</a></td></tr> </table> </li> <li> Simple Path Animation <table cellspacing='0'><tr><td> <a href='/chrisoliver/resource/javafxpad3.jnlp'>FXPad</a></td></tr> </table> </li> </ul> </ol> Hotspot vs Adobe Tamarin VM? No contest. chrisoliver 2007-07-26T20:11:03-07:00 2007-07-26T20:11:03-07:00). <p> Surprisingly (or maybe not) Adobe's ActionScript compiler (which generates the byte-code for Tamarin) itself is actually a <b><i>Java</i></b> application. 
<p> Here's Tamarin: <pre> </pre> <p> And here's Java: <pre> </pre> Interestingly, I also tried the Mozilla Rhino JavaScript engine (which I contributed to years ago), which is a pure Java implementation of JavaScript with this result: <pre> $ time java -jar js.jar -opt 9 tak.js real 0m31.944s user 0m31.718s sys 0m0.181s </pre> First steps with the JavaFX Compiler chrisoliver 2007-07-14T15:25:57-07:00 2007-07-14T17:27:08-07:00 Thanks to the efforts of Robert Field, Lubo Litchev, and <a href=''>Jonathan Gibbons</a> of the Javac team, as well as <a href=''>Per Bothner</a> and <a href=''>Brian Goetz</a> (and also thanks to the organizational efforts of <a href=''>Bob Brewin<a/>, <a href=''>James Gosling</a>, and <a href="">Tom Ball</a>) we have the beginnings of a JavaFX to JVM-byte-code compiler built on the same infrastructure as Javac. <p> Of course, the compiler is still incomplete, but it turns out to be far enough along to try a first performance benchmark (<a href=''>Takeuchi function</a>): <pre> </pre> <p> Here's the interpreter: <pre> $ time bin/javafx.sh TakMod.fx compile thread: Thread[main,5,main] compile 0.04 tak(24,16,8)=9.0 init: 69.48 real 1m10.422s user 0m0.190s sys 0m0.130s </pre> Speed improvement for this particular example is a pretty awesome 54x. Bidirectional binding chrisoliver 2007-07-10T08:19:39-07:00 2007-07-14T17:32:01-07:00: <p> <a href='/chrisoliver/resource/javafxpad2.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> <pre> // </pre> Preview of JavaFX Interpreter chrisoliver 2007-06-26T17:33:17-07:00 2007-06-27T10:03:35-07:00 As I mentioned we will be posting an update to <a href="">OpenJFX</a> shortly. <p> The interpreter now has proper support for binding local variables and partially enforces cardinality constraints. <p> Here's the latest JavaFXPad for you to try it out. 
<p> JavaFXPad also has some improvements: <ul> <li>You can have multiple statements and the value of the last statement will be displayed - without having to specify <code style='font-style:bold;'>return</code>.</li> <li>Non-visual output is displayed - if the result isn't a Widget, Frame, or Node, the value of the result converted to a string will be displayed in a text area. Thus you can use JavaFXPad like a <a href=''>REPL</a> for experimenting with the JavaFX language.</li> </ul> <p> <img src='/chrisoliver/resource/JFXPad.png'></img> <p> <a href='/chrisoliver/resource/javafxpad.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> JavaFX SVG Translator Preview chrisoliver 2007-06-24T23:53:07-07:00 2007-06-24T23:53:07-07:00 It'll take a few more days before we post the code to <a href="">OpenJFX</a>,. <p> Having this translator makes it straightforward to incorporate graphics created in vector drawing tools into JavaFX programs. <p>. <p> <a href='/chrisoliver/resource/svg.jnlp'><img border="0" src="/chrisoliver/resource/webstart.small2.gif"/></a> <p> <p> You can drag URL's that point at SVG files from web pages into the SVG Browser's canvas. It's pretty sad to see how little quality vector artwork is available on the public internet. Anyway, below are a few examples for you to try: <ul> <li><a href="">Ferrari Logo</a></li> <li><a href=''>Java Cup Logo from Wikipedia</a></li> <li><a href=''>The Famous SVG Tiger</a></li> <li><a href=''>Lion</a></li> </ul> A Real DryerFox chrisoliver 2007-05-26T21:55:04-07:00 2007-05-26T21:59:04-07:00 When I ran across this <a href=''>Apollo example</a> it occurred to me that I could pretty easily create a close Java equivalent of Apollo by combining JavaFX with <a href=''>WebRenderer</a>. <p> WebRenderer provides a Java Swing component that encapsulates the Firefox browser. In addition, it provides an extensive Java API to interact with the browser including the full DOM. 
However, WebRenderer isn't free software. <p> Anyway, below's a screenshot of the JFX version. The dryer turns while a page is loading, but it doesn't have the sound effects (at least, not yet). <p> Unlike the above cited example (Apollo apparently uses WebKit) that really is a fully functioning Firefox inside the dryer, which is still interactive even while it turns. <p> <img src='/chrisoliver/resource/dryerfox.png'></img> JavaFX chrisoliver 2007-05-13T10:39:26-07:00 2007-05-13T10:39:26-07:00 As you probably heard at JavaOne, F3 is now called JavaFX. The initial public release is available here: <a href=''></a>. Please join me there for discussion and further development. JavaOne 2007 chrisoliver 2007-03-20T12:39:00-07:00 2007-03-20T14:35:07-07:00 I'll be presenting a <a href=''>technical session on F3</a> at <a href=''>JavaOne</a> this year on Wednesday, May 9. <p> This F3 example won't make it into my session, so I'll just post it here. <p> <a href='/chrisoliver/resource/spaceInvaders.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> Source: <p> <a href='/chrisoliver/resource/Game.f3.txt'>Game.f3</a> <p> <a href='/chrisoliver/resource/Ship.f3.txt'>Ship.f3</a> <p> <a href='/chrisoliver/resource/Alien.f3.txt'>Alien.f3</a> <p> <a href='/chrisoliver/resource/Missle.f3.txt'>Missle.f3</a> MacOS Dashboard Widgets chrisoliver 2007-02-23T12:06:46-08:00 2007-02-23T12:06:46-08:00 I recently got a Mac and took a look at their dashboard widgets. Such widgets consist of several files: <ul> <li>An HTML file</li> <li>A CSS file</li> <li>A JavaScript file</li> <li>Image files</li> </ul> <p> Of course, I had to try to see how hard it would be to do the same thing in F3. 
Here's the F3 version of the Calculator widget: <p> <img src='/chrisoliver/resource/Calc.png'></img> <p> <a href='/chrisoliver/resource/calc.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> Unfortunately, it's currently impossible to create transparent windows in Java on non-MacOS platforms, so in those cases you won't see the dropshadow and rounded corners. <p>). <p> F3 Source: <p> <a href='/chrisoliver/resource/Calculator.f3.txt'>Calculator.f3</a> <p> Original Source: <table> <tr><td><a href='/chrisoliver/resource/Calculator.html.txt'>Calculator.html</a></td></tr> <tr><td><a href='/chrisoliver/resource/Calculator.css.txt'>Calculator.css</a></td></tr> <tr><td><a href='/chrisoliver/resource/Calculator.js.txt'>Calculator.js</a></td></tr> </table> 3D and Faux 3D chrisoliver 2007-02-21T12:15:55-08:00 2007-02-21T12:19:19-08:00. <p> In the meantime, since you can call Java from F3 it's possible to directly use GLJPanel in F3 and call <a href=''>JOGL</a> API's directly to mix 2D and 3D effects. <p> Here's a simple example which contains a 2D rectangle, JButton, and JTextField together with the well-known Gears 3D example in the same F3 canvas: <p> <img src='/chrisoliver/resource/jgears.png'></img> <p> <a href='/chrisoliver/resource/jgears.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> Source: <p> <a href='/chrisoliver/resource/Gears.f3.txt'>Gears.f3</a> <p> <p> It's also possible to do faux 3D in F3. Here's an example that was inspired by in part by the <a href=''>Flex Display Shelf</a> and partly by the ITunes Movies page. Note ITunes is true 3D, however. The scrollbar and artwork mimic ITunes but the layout is similar to the Flex example. 
<p> <img src='/chrisoliver/resource/displayShelf.png'></img> <p> <a href='/chrisoliver/resource/displayShelf.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> Source: <table> <tr><td><a href='/chrisoliver/resource/DisplayShelf.f3.txt'>DisplayShelf.f3</a></td></tr> <tr><td><a href='/chrisoliver/resource/TiltingPane.f3.txt'>TiltingPane.f3</a></td></tr> <tr><td><a href='/chrisoliver/resource/ITunesScrollBar.f3.txt'>ITunesScrollBar.f3</a></td></tr> </table> F3 Visual Tools chrisoliver 2007-02-07T17:06:33-08:00 2007-02-07T17:17:16-08:00 So far I haven't spent time trying to develop serious visual tools to support F3 development, however once the language and API's have stabilized that's definitely something I want to do. <p> Useful visual tools include simple color choosers, linear and radial gradient editors, widget layout editors like Matisse, vector drawing tools like Inkscape and Illustrator, and image processing tools like Photoshop. <p>. <p> I don't have definitive proof yet, but I believe using F3 to develop such tools should make it far easier than what the competitors are doing (with the possible exception of Microsoft Expression), since other than Matisse all of those mentioned above are written in <b><i>C</i></b>. . <p>). <p> In this demo you can't delete points or insert points in the middle of the curve, however it does have infinite undo/redo with CTRL-z (undo) and CTRL-y (redo). <p> Here's a <a href='/chrisoliver/resource/PenTool.f3'>link</a> to the source code. <p>. <p> From the array of points a <code>Path</code> is projected (with the F3 <code>bind</code> operator) having a <code>MoveTo</code> element for the first point, and <code>CurveTo</code> elements for each sequence of <code>[cp, cp, pt]</code>. 
<p> <a href='/chrisoliver/resource/penTool.jnlp'><img border = '0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> More about F3 chrisoliver 2007-01-22T17:18:16-08:00 2007-03-01T08:55:22-08:00 This was going to be a comment to a posting about F3 on <a href=''>Lambda the Ultimate</a>, but got a little too big so I'm posting it here. <p> But I am posting this in the spirit of <a href=''>LtU</a> as expressed by Ehud Lamm (the originator of LtU) <p> <blockquote><i> Note that from the perspective of LtU the expressiveness of the language constructs is the main thing, implementing them as efficiently as possible comes second: efficient implementation should be possible, but might still not exist in the marketplace.</i> </blockquote> <p> The above quote is also very close to my original approach when I started F3. I knew from using <a href=''>Tcl/Tk</a> in the mid-90's that high performance from F3 wouldn't be required for the vast majority of graphical user interface use cases. With Tcl/Tk all of the Tk widgets as well as my back-end code was implemented in <b>C<. <p>. <p>. <p> Although I'm concerned that F3 may have fundamental flaws that haven't been exposed yet, I don't feel <i>too</i> bad about lacking a formal basis for F3, since to be honest I really never intended to write a programming language in the first place. <p> The F3 project was originally called GBTDS which stood for "GUI Builder That Doesn't Suck". I just wanted an easy way of creating GUI's. <i>GUI builder that sucked</i>. <p>...). <p>). <p>. <p>. <p> Fortunately, a very talented young man, <a href=''>Jesse Grosjean</a>, had written an excellent 2d scene graph toolkit based on Java2D called <a href=''>Piccolo</a>. The conceptual model of Piccolo was very close to SVG, and I realized I could fairly easily implement F3 versions of SVG's declarative 2D graphics interfaces using Piccolo. 
<p> Once I had that, I could pretty much create any GUI or visual effect you find in Flash, Java, DHTML, or elsewhere, declaratively in F3 and started writing the demos you see today. <p> Anyway, as I said my original thinking was that I wanted an easy way of creating Model/View GUI's. To that end I wanted a simple, object-oriented system. The informal conceptual basis for this I borrowed from <a href=''>Martin and Odell</a>. <p> To summarize the important points (informally): <ul> <li>Classes correspond to the concepts we use to identify the common characteristics of the things around us and how they relate to each other.</li> <li>Thus a class declares a set of potential relationships (links) between objects.</li> <li>Once an object has been classified we can navigate those relationships to discover its properties, i.e. the other objects related to it.</li> <li>In F3 these properties are called <code>attributes</code>.</li> <li>In F3 <code>function</code>s are <i>queries</i> that navigate links or produce new objects in terms of existing ones, but do not modify the links of existing objects.</li> <li>In F3 all change consists of adding new objects or of adding, removing, or replacing the links between existing objects.</li> <li>In F3 <i>Events</i> are simply notifications of object instantiation or of "link addition", "link removal", or "link replacement"</i></li> <li>In F3 <code>operation</code>s in addition to performing queries can sequentially perform one or more such link modifications. Such modifications may be sequenced with conditional logic and by selection and iteration over query results.</li> <li>The values of an object's attributes may be specified either through explicit assignment or by means of a bound query. 
In the latter case implicit modification of the attribute occurs whenever the inputs to the query expression change and produce a new result.</li> <li>So when a change occurs one way to specify the "effect" is to define a bound query that expresses how other objects depend on it.</li> <li>The other way to respond to change is by making further explicit modifications using <code>trigger</code>s. A <code>trigger</code> is an operation that is performed whenever an insert, delete, or replace event which applies to it occurs. </ul> <p> As far as the syntax of F3, it's intended to be familiar to mainstream programmers whose primary language is derived from C (C++, Java, JavaScript, PHP, etc), but also includes features from query languages (OQL, XQuery, SQL). <p> F3 is still a work in progress. <p> When F3 is open-sourced the syntax of the language and API's will be open for discussion and change. Heroes chrisoliver 2007-01-11T18:35:38-08:00 2007-01-17T14:42:18-08:00 <img src='/chrisoliver/resource/heroes.png'></img> <p> My wife thought <a href=''>this flash application</a> was cool, so I spent the afternoon trying to recreate it in F3. <p> The F3 version isn't complete and differs in some details but seems like a reasonable approximation. <p> This example is typical of many Flash programs where the hard work is done by the graphic artists and the progammer's job is relatively easy: in this case I just laid out the images, implemented the animations based on location, rotation, scale, and opacity, and handled the mouse events. <p> <p> Here's a <a href='/chrisoliver/resource/Heroes.f3'>link</a> to a copy of the source code. <p> <a href='/chrisoliver/resource/heroes.jnlp'><img src='/chrisoliver/resource/webstart.small2.gif' border='0'></img></a> More on Quality chrisoliver 2007-01-10T11:39:15-08:00 2007-01-11T09:26:14-08:00 Sun's CEO, Jonathan Schwartz, also has a <a href=''>definition of quality</a>. 
And as he mentions in that post, he's also a big believer in simplicity and efficiency. So am I. <p> He provides a dead-simple measure of quality for Sun which consists of asking a customer one question: "Would you recommend Sun?". <p> As a programmer I can apply the same measurement to determine the quality of my program, by asking a user of my program if he would recommend using my program to others. <p> What I tried to describe in my previous post was a dead-simple procedure for raising a program to the level where it will get a "Yes". <p> I can name many programs (not written by me) for which I would answer "Yes" <p> Unfortunately, for the vast majority of programs I've tried my answer would probably be "No", or "Not really", or "Only because there's no alternative", or even "Hell no!". <p> Here are a few examples of both: <ul> <li>Netbeans and Eclipse Java Plugins - <b>Yes</b></li> <li>All other Netbeans and Eclipse Plugins I've tried- <b>No</b></li> <li>Microsoft Excel - <b>Yes</b></li> <li>Microsoft Outlook - <b>No</b></li> <li>Google Search - <b>Yes</b></li> <li>Froogle - <b>No</b></li> </ul> <p> In my experience, in each case where I would answer "Yes" I was able to produce a satisfactory result in a short or reasonable amount of time, without getting stuck, and without having to alter the result significantly due to the limitations or defects of the program. <p>. <p> Over the years at various companies I've actually asked my colleagues if they would want to use their own programs in the real world and the answer was often "No". <p> From my experience, "low-quality" programs tend to manifest lack of quality in one of four ways: <ol> <li>Lack of functionality (I didn't implement the full Design Spec)</li> <li>Outright bugs (crashing, hanging, misbehavior, visual bugs like inconsistent colors or margins, or ugly layouts, etc)</li> <li>Slow performance</li> <li>Excessive memory use</li> </ol> <p>. ). 
<p> I've avoided the issue of "learnability" so far, but that is a prerequisite to all of the above. Obviously, if the barrier to entry is too high, you won't have users and any quality problems are therefore irrelevant. <p> Quality Software chrisoliver 2007-01-08T14:12:58-08:00 2007-01-08T14:29:10-08:00 <h3>What is Quality?</h3> I've always recalled the following quote from E.W Dijkstra (in fact it's actually posted on the door to my office in his handwriting). <p> <blockquote>"Being a better programmer means being able to design more effective and trustworthy programs and knowing how to do that efficiently"</blockquote> <p> My interpretation: <i>Quality means efficiency</i>. Quality software is efficent for the user of that software. Quality programmming is efficency in developing the software. I believe both can be objectively measured. <p> Note that reliability (trustworthy<i>ness</i>)". <p> Note that "showstopper" bugs = zero quality, since the task cannot be completed at all. <p> I often try to get the overall point across by saying something like this: <blockquote> We could have a butt-ugly GUI and no documentation but if the user is able to get the task done faster with our tool than with the competitor's tool, then our software is higher quality, bottom line. </blockquote> <p> For the non-interactive parts of programs we can measure the efficiency in terms of resource utilization: cpu-time, network-bandwidth, memory-use. To optimize these aspects we leverage less resource-intensive algorithms, data-structures, and communication protocols. <p> There is a limit on runtime performance improving quality. For example, I don't need a car that can drive 200 mph if I always obey the 65 mph speed limit. <p> For interactive programs we have to also measure the amount of time consumed by the human user. <p>). 
<p> Experienced programmers know that to improve the efficiency (in terms of cpu use) of the non-interactive parts of programs you should not visually inspect your code, but rather use a profiler as follows: <ol> <li>Run the application under the cpu-profiler. The cpu profiler outputs a list of the methods called by your application sorted by time consumed</li> <li>You ignore everything after the first item in the list (important!)</li> <li>You open the source file containing the method and edit the code.</li> <li>Go to 1 (if you've done your job a new method will be first in the list)</li> </ol> <p> My argument is that the same approach should be applied to interactive programs. We don't necessarily have a tool like the cpu profiler but that doesn't matter. An approach might be like this: <p> <ol> <li>Sit a test user in front of your user interface.</li> <li>Use a tool like Macromedia captivate to record the session. Start recording now.</li> <li>If the user hits a showstopper bug. That is the #1 problem. Stop and fix it now. Then Go to 1</li> <li>Stop recording when the user completes the task</li> <li>Assemble your team and review the session; Break down the session into discrete functional steps and note the amount of time spent in each one</li> <li>Order the steps by the amount of time consumed</li> <li>Assign as many resources as necessary to the first step in the list. </li> <li>Have them alter the design or implementation and then Go to 1.</li> </ol> <p> <p> In the case of software development tools like those provided by Sun, the human user is another programmer and we need to measure his performance to determine the quality of our tools. <p>. <p>. <p> In such cases the result is typically tools that are highly unusable in obvious ways, which aren't identified until after a release when real users try to use them. <p>. <p>. 
<p> WSDL provides a standard communication protocol that can effectively be used to script a simulated enterprise application environment, which can then be used to test the quality of our development tools in the manner described above. F3 Tutorial chrisoliver 2007-01-02T15:01:42-08:00 2007-01-02T15:01:42-08:00 My colleague, Ricardo Rocha, has written an in depth explanation of <a href=''>one of the demos <a/> I posted earlier. <p> <a href='/chrisoliver/resource/index.html'>Here</a>'s his tutorial. Open Source F3: Where and When? chrisoliver 2006-12-19T15:18:45-08:00 2006-12-28T16:44:49-08:00 There have been quite a few comments to my postings asking when F3 will be publicly available. <p> For the record, I actually inadvertently made F3 public. I was just testing out my weblog. I thought it would reside in obscurity for the time being. <p> Since it was noticed I've tried to provide as much information about F3 as was feasible in the context of a weblog. <p> I can definitely tell you where you'll find open source F3 development, namely here:. This project exists, however it's still private and doesn't contain the source code yet. <p> I don't have an exact date when it will be publicly accessible, but if I had to make a guess I'd say February 2007 is likely. <p> So, there you go. Thanks for your patience. F3 and HTML chrisoliver 2006-12-14T13:03:25-08:00 2007-01-17T14:41:51-08:00 Although <a href=''>I'm not a fan of XML</a>, I do consider HTML+CSS the most useful tool available for composing styled text. We plan on incorporating the <a href=''>Flying Saucer</a> Java XHTML renderer into F3 eventually. <p> One of the things I dislike about HTML is the total lack of error reporting. If you have mistakes the output just looks funny and it's a nightmare to debug. <p> Since Flying Saucer requires valid XHTML and CSS it's possible to provide high quality validation for both the HTML content and CSS rules. 
Click the button at the bottom of this page to launch two demos: <ol> <li> The first is a (very prototypish - I spent about 1.5 days on it) demo (written in F3) of an interactive XHTML/CSS renderer which performs XHTML and CSS as-you-type validation (I extracted the <a href=''>W3C CSS validator</a> from their Java web-app for the CSS validation). <p> The performance is bad right now because it's synchronously loading images, but hopefully it gives you some idea of what I have in mind. For example, you can browse with it, edit their style sheets and xhtml documents and see what happens. <p> </li> <li> Since the XHTML renderer is a Swing component it's also possible to embed arbitrary F3 widgets inside the XHTML document. <p> This is done with a special form of the HTML <code><object></code> tag, for example, like this: <p> <pre> var button = Button {text: "Click Me!"} XHTML { text: bind "<html> <body> Here's a button: <object type='f3' data='{#button}'/> </body> </html>" } </pre> <p> The above code embeds an F3 Button (underlyingly a Swing JButton) into the HTML content. The XHTML widget is the F3 wrapper for the Flying Saucer renderer. It renders the HTML content you assign to its <code>text</code> attribute. <p> The <b>#</b> operator is F3's <i>stringify</i> operator: it converts the object reference of its operand to a URL which can later be dereferenced with the <i>unstringify</i> operator <b>?</b>, e.g. <p> <pre> var num1 = 20; var s = #num1; assert s instanceof String; // passes var num2 = (Number)?s; assert num2 == num1; // passes </pre> <p> URL's created with <code>#</code> aren't permanent but only refer to the object in memory. <p> In addition, it's possible to create hyperlinks that call back to your application by embedding such URL's as the value of the <code>href</code> attribute. The second demo shows examples of this (click the File->View Source menu to see the source code). 
<p> The second demo is an example which floats <a href=''>one of the processing examples</a> I previously posted inside the XHTML document. </li> </ol> <p> <a href='/chrisoliver/resource/xhtmldemo.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> </p> F3 Reflection and Query operators chrisoliver 2006-12-12T18:41:30-08:00 2007-01-21T16:48:11-08:00 A <a href=''>friend of mine</a> who works at MIT libraries recently showed me a neat little AJAX data visualization program they developed called <a href=''>Exhibit</a>. <p> This seemed like a good test case for F3 reflection and for its query operators, so I decided to try to see how hard it would be to do the same kind of thing in F3. <p> Below is a (partial) F3 equivalent of <a href=''>this Exhibit example</a>. <p> <a href='/chrisoliver/resource/cereal.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> <p> The above demo also displays the F3 source code. <p> This is actually a plain Swing program (the F3 Canvas isn't used), but demonstrates the use of HTML templating with F3. <p> All of the filtering and sorting is actually done in 3 lines of code (take a look at DB.f3 and AttributeValueAssertion.f3). <p> Some things to note: <p> <ul> <li><code>Attribute</code> is an F3 reflection class analagous to <code>java.lang.reflect.Field</code></li> <li>Reflective access to the value of an attribute uses the <code>[]</code> operator with an operand of type <code>Attribute</code>, e.g. <p> <div style='font-face:Courier'> <pre> var bar = foo.class.Attributes[Name == 'bar']; foo[bar] = 2; // The above is analogous to the following Java code // Field bar = foo.getClass().getField("bar"); // bar.set(foo, 2); </pre> </div> </p> <p> is the reflective equivalent of </p> <p> <div style='font-face:Courier;'> <pre> foo.bar = 2; </pre> </div> </li> <li>The F3 <code>order by</code> operator <code>.</code> (dot) operator (as in XPath). 
<p> For example, <p> <div style='font-face:Courier'> <pre> [1, 2, 3] order by -. </pre> </div> <p> yields <p> <div style='font-face:Courier'> <pre> [3, 2, 1] </pre> </div> </li> Poor man's Multi-VM chrisoliver 2006-11-29T09:51:10-08:00 2006-11-29T10:45:21-08:00. <p> I was prompted to do this by <a href=''>Gilad Bracha</a> (who unfortunately left Sun recently). Gilad told me that he didn't believe in Java GUI's for two reasons: <ol> <li>Start-up time</li> <li>Footprint</li> </ol> <p>. <p> I personally found his arguments undeniable. <p> I did my own tests where I ran 10 instances of SwingSet2 in separate JVMs versus in the same JVM but with separate application contexts. In the former case total memory use was 540M, in the latter 74M. <p> As I mentioned this approach isn't specific to F3. <p> I put a zip file containing the basic code to do this for regular Swing applications <a href=''>here</a> (45kb). <p> Note: this is just example code to illustrate how you could do this. <p> The zip file contains the source and compiled code and some windows batch files to build and run it (Java 1.6 required to build, Java 1.5 to run). <p> There's only one class called JApplication. You call its main method with a class path and class name similar to how you run the java command, e.g. <p> <span style='font-face:courier'> java com.sun.JApplication -classpath SwingSet2.jar SwingSet2 </span> <p>). <p> Finally, if you're using Java 1.6 it adds an icon to the system tray which has a menu to shut down the whole thing and also can display a window listing the running applications and which allows you to kill them manually. <p> Note that this example code doesn't handle security issues at all. More F3 Demos chrisoliver 2006-11-28T08:45:21-08:00 2007-01-17T14:43:27-08:00 <img src='/chrisoliver/resource/f3-demos.gif'></img> <p> Here are a few more F3 demos. 
<p> Several are replicas of Flash programs: <ol> <li>Bentley Continental Flying Spur</li> <li>Lotus Cars USA</li> <li>Nike 2005 Home Page</li> </ol> <p> Others are replicas of Java programs written by <a href=''>Romain Guy</a>: <ol> <li>Painter</li> <li>Spline Editor</li> </ol> <p> There's also a (partial) replica of the Winamp MP3 player, which uses the pure Java <a href=''> JLayer</a>. <p> A Google search demo using Google's soap API, which demonstrates HTML templating with F3, and also has a fancy search control based on artwork from a Yahoo widget. <p> Finally, there's a standalone version of the interactive F3 editor, "F3Pad". This was inspired in part by <a href=''>XAMLPad</a>. Note that other than watching the XAMLPad video, I've never actually used it. <p> The demo launcher also includes all the demos I've previously posted. Remember if you're using Java 1.6 you'll need to click on the F3 icon in your system tray to shut down the whole application. <p> <a href='/chrisoliver/resource/demo2.jnlp'><img border='0' src='/chrisoliver/resource/webstart.small2.gif'></img></a> <p> F3 Deployment chrisoliver 2006-11-25T10:35:21-08:00 2006-11-25T10:35:21-08:00 One of the comments to my earlier posts expressed some concern that a lot of demo code seemed to be all in one file. <p> That's just because they were quick demos. <p> F3 supports packages and imports as in Java. So you can divide your application into as many separate files as you like. <p> The F3 interpreter actually loads F3 classes by means of the Java class loader. So to deploy F3 code you simply archive it in JAR files as in Java and then make such archives accessible to the Java class loader. <p>. <p> F3 and JSR-223 chrisoliver 2006-11-25T09:07:41-08:00 2006-11-25T09:07:41-08:00 F3 provides a JSR-223 (<a href=''>Scripting for the Java Platform</a>) compliant interface in order to interact with F3 from Java code. <p> Here's a very simple example: <p> <pre font='courier'>); ... 
</pre> <p> The above example creates a <code>java.util.Date</code> object in Java code and then makes it accessible to the F3 script. The F3 script simply prints the value of the date to standard out. <p> You make Java objects accessible to F3 code by using JSR-223 API's to assign such objects to F3 global enumerations. The format of the binding name is <p> <span font='courier'> <i>Name</i>:<i>Class</i> </span> <p> This has the effect of creating a named instance of the specified class with the specified name. <p>. <p>. | http://blogs.sun.com/chrisoliver/feed/entries/atom | crawl-001 | refinedweb | 8,032 | 53.31 |
Migrating to Struts 2 - Part III
In this final part of the series, we will complete the transformation of the application by migrating the user interface from Struts to Struts2. In part 1 we looked at the high-level architecture and basic differences between Struts and Struts 2 apps. In part 2 we started converting a real application, including actions and configuration.
From a high level, many of the tags in Struts2 are going to look very similar to those present in Struts. There are tags for logic, tags to present data from beans, and tags that assist with forms and populating bean data into specific fields. It is only when you take a closer look that the differences become apparent.
The differences allow the framework to provide features such as:
- JSP tags being available to different view technologies as first class objects - traditionally, a tag library was only available to JSP. Struts2 takes the approach that all view technologies - JSP, velocity, Freemarker, etc. - should all be first class citizens, and have available to them the objects that will make developing web applications easier.
- Themes - themes can be provided on a global or individual basis to tags.
These features are available because of the approach taken in architecting the user interface layer.
The User Interface Architecture
Most readers should be familiar with the model-view-controller pattern. It is the basis for both the Struts and Struts2 frameworks. Struts2 also incorporates this pattern in another capacity - as the basis of the user interface architecture.
From a high level, this is what it looks like:
Model - the model is extended from the Component base class. The model in the UI architecture has additional responsibility. As well as being an implementation-independent container of data, it provides information on how to render itself - the view - by specifying the templates that are used.
View - the final HTML is generated by Freemarker templates. Normally, it would be the Java code in the taglibs that would provide any necessary logic and write the HTML code to the output stream. By allowing a templating language to take over this function, additional flexibility is achieved.
TIP: Struts2 provides multiple themes with the deployment, by default the "xhtml" theme is used when rendering tag views. The "xhtml" theme provides features for form elements such as layouts using tables, validation message rendering and label / value columns. As well, themes for "css_xhtml" (uses CSS layout rather than table layout), "simple" (with no additional features - just the basic HTML tag rendering) and "ajax" (for Ajax-enabled tag rendering) are available. Any of these can be used in your view code by specifying a value for the "theme" attribute.
Additionally, entirely new themes can be created by developers without the need for a custom framework compilation. This is outside the scope of this article, but further information can be found on the Struts2 wiki.
Controller - for each of the rendering languages that wish access to the taglibs, a new controller object needs to be provided. By default, Struts2 provides implementations of AbstractDirective (for Velocity), TagModel (for Freemarker) and ComponentTagSupport (for JSP). This allows tag library functionality that was once only available in JSP to become a first class citizen in other view technologies.
All of this might seem a little abstract at the moment, but as we start discussing the tags in more detail, the practical application will become apparent. But now, it's time to look at some code.
Converting the JSP's
Let's start with the add JSP. Here's what the Struts version looks like:
<html>
<head><title>Add Blog Entry</title></head>
<body>
<form action="save.action" method="post">
Title: <input type="text" name="title" /><br/>
Entry: <textarea rows="3" cols="25" name="entry"></textarea> <br/>
<input type="submit" value="Add"/>
</form>
</body>
</html>
As you may have noticed, this is a pure HTML page. In fact, we could have called it add.html.
Here's the Struts2 version:
<%@ taglib prefix="s" uri="/WEB-INF/struts-tags.tld" %>
<html>
<head><title>Add Blog Entry</title></head>
<body>
<s:form action="save" method="post">
<s:textfield label="Title" name="title"/>
<s:textarea label="Entry" name="entry" cols="25" rows="3"/>
<s:submit value="Add"/>
</s:form>
</body>
</html>
We are using JSP, so the first line is the definition of the tag library (we won't include this in the remaining code). The file struts-tags.tld defines the tags that are available, and has been placed in the WEB-INF directory. This file was originally extracted from the "struts2-core-2.0.0-SNAPSHOT.jar" file.
Following through the Struts example, we see that the remaining changes are focused around the HTML tags. The form tag becomes s:form, the textarea becomes s:textarea, and the input tag becomes either s:textfield or s:submit. Whenever possible, the Struts2 tag names will match the HTML equivalent. The input tag is different - rather than adding the type attribute, the type becomes the name of the tag itself.
Using the Struts2 tags, we can see that the code is a little cleaner and easier to read. In the s:form tag the value of the action attribute becomes "save" rather than "save.action". The tag handles adding the extension, along with the web application context. Additionally, if it is needed, there is a namespace attribute to supply additional path information - which the tag also will incorporate into the final HTML form tag markup that is rendered.
The other change is the presence of an additional attribute label. In the Struts2 tags, the text for each form element's label becomes part of the tag itself, rather than being specified externally. Why does this matter? The answer lies in the rendering of the tags. At the beginning of the article we spoke about the UI architecture, and how the model can render itself. If you do a "view source" on the HTML that the Struts2 add.jsp renders, you will notice that the form and all the elements are within table tags. The label is placed in one column of the table, and the input or textarea HTML tag in the other column. The default "xhtml" theme is providing us with additional layout formatting.
Because this is the add page, there is one final thing to mention. The value of the name attribute determines which field on the action or the model that the user entered value will be set. So if the name is title, make sure you have a setTitle(...) method.
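To make the name-to-setter mapping concrete, here is a minimal sketch of an action that could back this form. The class name and the absence of any persistence logic are assumptions for illustration only, not taken from the original application:

```java
// Hypothetical action backing add.jsp (illustrative sketch).
// Struts2 calls setTitle()/setEntry() with the submitted form
// values - matched by the "name" attributes - before execute() runs.
class SaveAction {
    private String title;
    private String entry;

    public void setTitle(String title) { this.title = title; }
    public void setEntry(String entry) { this.entry = entry; }
    public String getTitle() { return title; }
    public String getEntry() { return entry; }

    public String execute() {
        // persistence omitted; return the result name used in the config
        return "success";
    }
}
```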
The view.jsp illustrates another commonly used tag - one that renders data values.
The Struts View JSP:
<html>
<head><title>View Blog Entry</title></head>
<body>
Id: <c:out value="${requestScope.blog.id}"/><br/>
Title: <c:out value="${requestScope.blog.title}"/><br/>
Entry: <c:out value="${requestScope.blog.entry}"/><br/>
Updated: <fmt:formatDate value="${requestScope.blog.updated}" type="both"/><br/>
<br/>
<a href="list.action">Back to List</a> |
<a href="edit.action?id=<c:out value="${requestScope.blog.id}"/>">Edit</a>
</body>
</html>
The Struts2 View JSP:
<html>
<head><title>View Blog Entry</title></head>
<body>
Id: <s:property value="id"/><br/>
Title: <s:property value="title"/><br/>
Entry: <s:property value="entry"/><br/>
Updated: <s:property value="updated"/><br/>
<br/>
<s:url id="list" action="list"/>
<s:a href="%{list}">Back to List</s:a> |
<s:url id="edit" action="edit">
<s:param name="id" value="%{id}"/>
</s:url>
<s:a href="%{edit}">Edit</s:a>
</body>
</html>
In JSTL the value attribute of the c:out tag provides data to be rendered, which can be static or dynamically evaluated. An expression is denoted by using the delimiters "${" and "}". In the view JSP we want to access object instances that have been placed in the HTTP request scope, this is achieved by starting the expressions with "requestScope". Once the scope has been provided, the path through the object graph is determined using dot notation.
SIDEBAR: Dot notation is an easy way to navigate the path through an object graph. Instead of using the Java code "getPerson().getAddress().getPostcode()" a much simpler "person.address.postcode" is used.
The fmt:formatDate tag is similar to the c:out tag, the difference being that c:out would render the time in milliseconds, whereas the fmt:formatDate tag displays the data in a more user friendly way.
SIDEBAR: Using JSTL is just one way that data could have been made available in the JSP's. Other possibilities include using the JSP tags directly (i.e. jsp:useBean and jsp:getProperty) and using the Struts Taglib tag library. Like the Struts2 tag library, the Struts Taglib tag library provides additional abstractions around the HTML tags, as well as additional mapping functionality to make implementation easier.
The Struts2 s:property tag is very similar to the c:out tag. It uses an expression language to obtain a value, and the value can also be found by searching through an object graph using dot notation. There are two major differences. The first is that OGNL - Object Graph Navigational Language () - is used as the expression language. So not only can you use the dot notation to find values to display, you can also use advanced features such as object method calls, projection and lambda expressions. This is an open source project and developed independently of Struts2.
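The dot notation that OGNL builds on can be sketched with plain reflection. The following toy resolver is an illustration of the basic idea only - it is not OGNL itself, and the sample Person/Address beans are invented for the demonstration; real OGNL additionally supports method calls, projection, indexing and more:

```java
// Toy dot-notation resolver: "person.address.postcode" becomes
// getPerson().getAddress().getPostcode(). Illustrative only.
class DotPath {
    public static Object resolve(Object root, String path) {
        try {
            Object current = root;
            for (String part : path.split("\\.")) {
                // "postcode" -> "getPostcode"
                String getter = "get" + Character.toUpperCase(part.charAt(0))
                        + part.substring(1);
                current = current.getClass().getMethod(getter).invoke(current);
            }
            return current;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Sample beans, invented for the demonstration.
    public static class Address {
        public String getPostcode() { return "02139"; }
    }
    public static class Person {
        public Address getAddress() { return new Address(); }
    }
}
```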
The second difference is that a scope is not required. This is both an advantage and a disadvantage. Instead of different scopes that the JSP developer needs to explicitly specify, Struts2 has a Value Stack. The Value Stack is simply an ordered sequence of scopes that is traversed when looking for a value. If a getter for the field is not found in one scope, the next scope is checked - this continues until either the field is found or all the scopes have been exhausted. The order of the scopes is:
- Temporary Objects - these are objects that are created in the JSP page (we will see an example soon with the s:url tag), or they are objects created by tags that have short lives (such as the current object during a loop over a collection).
- The Model Object - if model objects are being used, they are checked next (before the action).
- The Action Object - this is the action that has just been executed. Hence, without explicitly placing the action in a session or request scope, you will still have access to its data.
- Named Objects - these objects include #application, #session, #request, #attr and #parameters and refer to the corresponding servlet scopes.
The advantage is clear - each scope, in turn, is checked for the field that you are after. However, if you have both a model object and action with the same field name (let's say "id"), then problems can occur. If you want the model's value the correct result will be returned - but if you want the action's value, you will instead be returned the model's value, as the model scope precedes the action scope. The solution is to provide a qualifier determining where in the stack you want to retrieve the "id" field - in this example "[1].id" would retrieve the value from the action. More information can be found online.
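The lookup order described above can be sketched as a simple list of maps. This is an illustration of the search behavior only, not the actual Struts2 implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of value-stack lookup: scopes are searched in order
// (temporary objects, then model, then action, then named scopes).
// The first scope containing the key wins, which is why a model
// field can shadow an action field of the same name.
class ValueStackSketch {
    private final List<Map<String, Object>> scopes = new ArrayList<>();

    // Later pushes are searched first, mirroring the precedence order.
    public void push(Map<String, Object> scope) {
        scopes.add(0, scope);
    }

    public Object findValue(String name) {
        for (Map<String, Object> scope : scopes) {
            if (scope.containsKey(name)) {
                return scope.get(name);
            }
        }
        return null; // exhausted every scope
    }
}
```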
The last piece is the HTML a tag. From our knowledge of JSTL we see that the Struts JSP simply uses a standard HTML a tag, with a JSTL c:out tag to provide the id of the blog that is to be edited. The Struts2 implementation is a little more complex. Here it is again:
<s:url
<s:param
</s:url>
<s:aEdit</s:a>
The first tag, s:url, generates the URL that we are going to use. It does this the same way as the s:form tag does, using attributes for action (shown) and namespace (not shown). There is also an attribute for id. This is important as the s:url tag places the generated URL in the Temporary Object scope under the value specified in the id attribute. To add name-value pairs to the URL (the "?user=bob" part), the s:param tag is used.
SIDEBAR: Why can't we just use the s:property tag in the HTML a tag? The answer is that you can. The benefit of the above approach is that the URL generated is consistent across the application.
An advanced feature of Struts2 is being able to change the URL that calls actions. Instead of a standard URL such as "/listUser.action?id=4", you can swap in a new ActionMapper implementation to make the URL more friendly, perhaps more Ruby on Rails-like, such as "/user/4/list". In order to take advantage of a change like this, it is beneficial to have all code generate URLs in a standard manner.
The s:a tag duplicates the HTML a tag functionality, the difference being that the value of the href attribute can be obtained from the Value Stack. In our case we want to obtain the value of the generated edit URL. You may have noticed that the s:param and the s:a tags both use delimiters - "%{" and "}" - where the s:property did not. When a tag is expected to always pull a value from the Value Stack (the s:property tag), delimiters aren't required. But when the value could be static text, values from the Value Stack, evaluated expressions or a combination of any of these options (the s:param and the s:a tag), then the delimiters are required. But don't worry - if you put in the delimiters and they aren't needed, Struts2 will simply remove them.
So far we've looked at entering form data, and viewing information entered in a form. Next is editing the form data. This is going to be very similar to the add.jsp code - there will be an HTML form present, as well as HTML input fields. The difference is that we will need some additional tags to display the current information, and we will need a way to tell the action which record we are editing.
Here is the Struts Edit JSP:
<html>
<head><title>Update Blog Entry</title></head>
<body>
<form action="update.action" method="post">
<input type="hidden" name="id"
value="<c:out" />
Id: <c:out<br/>
Title: <input type="text" name="title"
value="<c:out"/><br/>
Entry: <textarea rows="3" cols="25" name="entry">
<c:out</textarea><br/>
Updated: <fmt:formatDate<br/>
<input type="submit" value="Update"/>
</form>
</body>
</html>
The Struts2 Edit JSP:
<html>
<head><title>Update Blog Entry</title></head>
<body>
<s:form
<s:hidden
<tr>
<td align="right">Id:</td>
<td><s:property</td>
</tr>
<s:textfield
<s:textarea
<tr>
<td align="right"> Updated:</td>
<td><s:property</td>
</tr>
<s:submit
</s:form>
</body>
</html>
This is exactly what we expect. The input HTML tag has an additional value attribute, with the actual value being a c:out tag displaying the current contents of the field, and the textarea HTML tag contains a c:out tag between its opening and closing tags. The information on the record being edited is communicated to the next action via a hidden HTML tag. This also contains a c:out tag to provide the "id" value of the edited record.
The Struts2 version is even more similar to edit.jsp. Each of the input tags has an additional value attribute, similar to their HTML counterparts, using the delimiters (as the value could be static) and the Value Stack (as described above). There is also a s:hidden tag, which duplicates the hidden HTML tag - as we would expect.
The awkward part of this JSP is that some of the fields (id and updated) require HTML tag information to render them correctly into the theme provided by Struts2, and some do not. There are a couple of solutions to this - the first is to use the "simple" theme and to provide all the layout information within the JSP; the other is to create a new template for the s:input tag that only allows the data to be viewed and not modified. I will leave this as an exercise for the reader.
The final JSP is the list JSP. With this JSP we are introducing a new category of tag libraries that provide logic. This is the only new element being introduced, and we will be combining it with the data rendering tag libraries already discussed.
The List JSP:
<html>
<head><title>List Blogs</title></head>
<body>
My Blogs:<br/> <c:forEach
<a href="view.action?id=<c:out">
<c:out</a>
[ Updated <fmt:formatDate ]
(<a href="remove.action?id=<c:out ">remove</a>)
<br/>
</c:forEach>
<a href="add.action">Add a new entry</a>
</body>
</html>
The Struts2 List JSP:
<html>
<head><title>List Blogs</title></head>
<body>
My Blogs:<br/>
<s:iterator
<a href="view.action?id=<s:property"><s:property</a>
[ Updated <s:property ]
<s:url
<s:param
</s:url>
<s:aEdit</s:a>
(<s:aremove</s:a>)
<br/>
</s:iterator>
<s:url
<s:aAdd a new entry</s:a>
</body>
</html>
The iteration tag in JSTL is the c:forEach tag, which has two attributes - the var attribute that provides the name that the current element in the collection can be referenced as, and the items attribute that provides the collection to be iterated over. As we expected, the c:out tags enclosed within the c:forEach tag all start with "blog" so they refer to the current element.
Similarly, in the Struts2 implementation, there is a s:iterator tag. This functions exactly the same as the JSTL c:forEach tag, however, you may have noticed that there is no attribute provided to reference the current element from the collection. The reason lies in the Value Stack again - the current element is placed on the top of the value stack in the Temporary Objects scope, and it can be accessed via the s:property tag without a specific object id. If you feel safer providing an object id, there is an id attribute on the s:iterator tag that works like the var attribute on the c:forEach tag.
Simplifying the Actions
In the last article we discussed the action in detail. You might remember that the last step in all of the action methods was to place the Blog instance in the HttpServletRequest. However, we didn't prepend "#request" to values in the s:property tags.
You should already know why "#request" isn't prepended to the value (the model and the action are placed in the Value Stack). Now, the only thing left to do is to make some simplification changes to the action classes by removing unnecessary code. Specifically, this means that we no longer need to place the Blog in the HttpServletRequest, and the action no longer needs to implement the ServletRequestAware interface.
The same changes need to be made for the ListBlogsAction action, however, as the ListBlogsAction doesn't use model objects, we also need to add a getBloglist() method to the action.
The full source code can be found in an attachment to this article, and the changes to the action classes can be more closely examined there.
Wrap-Up
We've covered a lot in this article, from both a design perspective and an implementation perspective, but we've also had to take a few short cuts. In particular, there was not enough time to devote to an extensive review of all the tags available. The tags in Struts2 are broken down into four categories: control tags, data tags, form tags, and non-form UI tags. Further information on all of these can be found at the Struts2 tag reference page.
As well as tags, there is an extensive list of features that we could have talked about. Here are a few of them. I will leave it to you to research those you find interesting:
- Validation - with Struts2 you can provide validation code in the class or by using a declarative validation framework that works against the action class or the model class. There is also Ajax-based validation available. For further reading, see Zarar Siddiqi's article at java.net -.
- Internationalization - like its predecessor, Struts2 has strong internationalization features built into the actions, validation, messages, and tag libraries.
- Ajax Support - there is an entire theme built to provide Ajax functionality, as well as ways to return XML and JSON responses rather than HTML page fragments.
- Results / Result Types - customizing the results is easy, and allows you to return charts, graphs, JSON, XML or anything else you can imagine.
- Annotations - to make configuration easier, annotations and "convention over configuration" changes are being made. The goal is to eventually remove the need for XML configuration for good.
This brings us to the end of the series. By now you should understand the Struts2 architecture (both the overall and the user interface architecture); be able to explain the differences in request processing; understand how to configure a Struts2 application; and know how to combine actions and JSP's.
With a background in Struts, and the information you now have from this series of articles on Struts2, you should be well on your way to migrating complex applications as well as starting new Struts2 projects from scratch.
Naveed86 wrote:
Hi Dears,
I am new to Java Card. I did setup to run Java Card programs on eclipse using.
This is my simple program below. Compilation is successful but when I run the program I get the following error.
import javacard.security.RandomData;

public class Main {
    public void main() {
        RandomData rd = RandomData.getInstance(RandomData.ALG_SECURE_RANDOM);
        short outLen = 10;
        byte buffer[] = new byte[outLen];
        short outOffset = 0;
        rd.generateData(buffer, outOffset, outLen);
    }
}
Error occurred during initialization of VM
java/lang/NoClassDefFoundError: java/lang/String
This is very annoying. I think my runtime environment is not OK for running Java Card programs. I am able to run Java programs but not Java Card programs. Can anyone help please?
Thanks,
Regards,
Naveed | https://community.oracle.com/thread/2464783?tstart=1&messageID=10689189 | CC-MAIN-2014-15 | refinedweb | 136 | 51.65 |
.2: Comparing, Sorting, and Grouping
About This Page
Questions Answered: How can I organize and analyze data? How can I sort a collection of elements by different criteria?
Topics: Comparing elements; sorting collections. Methods in the Scala API. Maps, once again; especially: generating a Map from existing data.
What Will I Do? Read and program.
Rough Estimate of Workload: Four hours.
Points Available: B75 + C45.
Related Projects: The assignments pick up several earlier projects: Stars, Segregation, Election. One example involves MovieStats (new).
Introduction
You have already seen many examples of a common need: there is a bunch of numbers, strings, or other objects, and you need to find the one that is the greatest, least, longest, shortest, or otherwise best matches a particular criterion.
Another common need is to sort objects by a criterion, such as from the largest number to the smallest or vice versa. If you think about the most common applications and web sites that you use, you will surely come up with any number uses for sorting.
In order to find the “best” element, we need to be able to compare elements; the same goes for sorting. Which is why we’ll start this chapter by considering comparisons.
We will again visit the Scala API for tools.
You know this already:
Our intention in O1 is not to wade through the entire Scala API but to highlight some parts of it that are useful for our purpose of learning the basics of programming. In this chapter, too, we’ll cover but a part of the API’s abundant selection of tools for comparing and sorting values.
Comparing Elements “Naturally”
Let’s start from a concrete example.
The methods max and min
Collections have a
max method that seeks and returns the largest element. In the case
of numerical elements, this means exactly what you think it means:
Vector(3, 5, -50, 20, 1, 5).max
res0: Int = 20
In some earlier chapters, we’ve compared strings by their characters’ position in the
Unicode “alphabet”. That’s what
max does, too, if you apply it to
Strings: you
get the string that’s alphabetically last.
Vector("a", "bb", "b", "abc", "ba").max
res1: String = bb
min is analogous. You can use it to find, say, the smallest number in a vector or
the alphabetically first string.
Natural ordering
Numbers and strings have a so-called natural ordering, which
min and
max rely on.
Here’s how you can think about it: if each of two values is no more and no less than
another, are they then the same value? Consider numbers, for instance: if a number is
not greater than another and not less than it either, you have two of the same number.
The same goes for strings when you compare them alphabetically. Natural ordering thus
means that the ordering is innate to the very values that are being ordered.
For an example of a non-natural ordering, consider sorting strings by their length: two
strings of the same length aren't necessarily the same string. Another example is from
the GoodStuff project: even though two
Experience objects have the same rating, they
aren’t necessarily the same experience.
The Ordered trait
The computer needs a definition of any ordering we use, even a “natural” one. Scala gives
us the
Ordered trait, which is a supertype for any objects that have a natural ordering
with respect to each other. Many of Scala’s built-in data types extend this trait; these
include
Int,
Double,
String,
Boolean, etc. (It’s also possible to write custom
classes that extend
Ordered, but we’re not going to look into that now.)
The
Ordered trait defines an abstract method that defines how values are compared.
Concrete data types that extend the trait implement that method in whichever way is
appropriate for that type. On integers, the method is implemented as a numerical
comparison, and on strings, as an alphabetical one.
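To make the comparison method concrete, here is a small sketch you can try in the REPL. (This assumes the standard Scala library's compare on numbers and strings.) The method returns a negative number if the receiver comes before the parameter in the natural ordering, zero if the two are equal, and a positive number otherwise:

```scala
// compare comes from the Ordered trait: it returns a negative Int,
// zero, or a positive Int, depending on the operands' relative order.
println(5.compare(10))        // negative: 5 precedes 10
println("cat".compare("cat")) // zero: the strings are equal
println(2.5.compare(1.0))     // positive: 2.5 comes after 1.0
```

Methods such as max, min, and sorted build on this single comparison method.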
The main benefit that we get from
Ordered is that this type encompasses all the various
data types that have a natural ordering. It thus becomes possible to define generic
methods that operate on any values of type
Ordered and therefore work on all objects
that can be naturally compared to each other: integers, decimal numbers, strings, and so
on.
The Scala API makes use of this potential. The collection classes of the API have several
methods that work on elements of type
Ordered. For example,
max and
min work as
described above only if the elements have a natural ordering.
Pos objects, for instance,
do not:
Vector(Pos(10, 5), Pos(7, 12)).max
<console>:9: error: No implicit Ordering defined for o1.Pos.
We’ll also rely on natural ordering for our first efforts at sorting numbers, next.
Sorting a collection with sorted
The
sorted method sorts the elements of a collection — which must be comparable with
each other — in ascending order:
Vector(3, 5, -50, 20, 1, 5).sorted
res2: Vector[Int] = Vector(-50, 1, 3, 5, 5, 20)

Vector("a", "bb", "b", "abc", "ba").sorted
res3: Vector[String] = Vector(a, abc, b, ba, bb)
sorted returns a new collection with the elements in a new order. It doesn’t change the
original collection. (Not that it’s even possible to change a
Vector.)
If you want the reverse order, you can call
reverse (introduced in Chapter 4.1):
Vector(3, 5, -50, 20, 1, 5).sorted.reverse
res4: Vector[Int] = Vector(20, 5, 5, 3, 1, -50)
How does sorting work?
Sorting algorithms are a classic area of algorithms research. Computer scientists have come up with a variety of sorting algorithms that have different efficiency characteristics.
The Scala API also provides a number of implementations for sorting. We won’t study them in O1. For now, it’s enough that you can use some ready-made sorting methods.
As a voluntary exercise, you can think about how you might implement a function that sorts, say, a given buffer of integers.
You’ll find copious texts on sorting on the internet. O1’s follow-on courses at Aalto also discuss sorting.
Comparing Elements by Custom Criteria
max,
min, and
sorted work admirably as long as the elements have a natural ordering.
But what if the elements in our collection don't have the Ordered trait (such as when searching for the Shape with the greatest area)? Or even if they do, what if we want to order them by some other criterion than the natural one (such as when sorting strings by length rather than alphabetically)?
Often, the best thing to do is call
maxBy,
minBy, or
sortBy.
The examples below feature
Shape objects (Chapter 7.2) in addition to strings and integers.
Here is a quick recap of what you need to remember about this trait and its subtypes:
trait Shape { def area: Double // ... }
class Circle(val radius: Double) extends Shape { def area = math.Pi * this.radius * this.radius // ... }
class Rectangle(val sideLength: Double, val anotherSideLength: Double) extends Shape { def area = this.sideLength * this.anotherSideLength // ... }
(We’ve also added
toString methods to these classes to improve the REPL output.)
maxBy and minBy
Let’s find the longest string and the number whose second power is the greatest:
Vector("a", "bb", "b", "abc", "ba").maxBy( _.length )
res5: String = abc

Vector(3, 5, -50, 20, 1, 5).maxBy( n => n * n )
res6: Int = -50
The examples show the basic idea: we give
maxBy a function that it calls on each
element. That function’s return values must be comparable to each other, as the
lengths of strings and the squares of integers are. The method determines the greatest
element by comparing those return values rather than the elements themselves.
maxBy ignores any natural ordering of the elements and thus works regardless of
whether such an ordering even exists.
Shape objects don’t have the
Ordered trait,
but that doesn’t stop us from finding the shape with the greatest area:
val shapes = Vector(new Circle(5), new Rectangle(5, 11), new Rectangle(30, 5))
shapes: Vector[Shape] = Vector(CIRC[ r=5.0 a=78.53981633974483 ], RECT[ 5.0*11.0 a=55.0 ], RECT[ 30.0*5.0 a=150.0 ])

shapes.maxBy( _.area )
res7: Shape = RECT[ 30.0*5.0 a=150.0 ]
minBy works the same way.
Here’s one more example: a tidy implementation for the
favorite of a GoodStuff
Category, which we’ve written several versions of before.
def favorite = if (this.experiences.isEmpty) None else Some(this.experiences.maxBy( _.rating ))
That line of Scala is a pretty direct translation of how we think about the method’s purpose: “If there are no experiences, there’s no favorite, either; otherwise, the favorite is the experience that has the highest rating.”
Sorting with sortBy
The
sortBy method is similar. You can sort strings by length, numbers by their second
power, and shapes by their area:
Vector("a", "bb", "b", "abc", "ba").sortBy( _.length )
res8: Vector[String] = Vector(a, b, bb, ba, abc)

Vector(3, 5, -50, 20, 1, 5).sortBy( math.pow(_, 2) )
res9: Vector[Int] = Vector(1, 3, 5, 5, 20, -50)

Vector(new Circle(5), new Rectangle(5, 11), new Rectangle(30, 5)).sortBy( _.area )
res10: Vector[Shape] = Vector(RECT[ 5.0*11.0 a=55.0 ], CIRC[ r=5.0 a=78.53981633974483 ], RECT[ 30.0*5.0 a=150.0 ])
With
sortBy, it’s also easy to get the reverse of the natural ordering (cf.
reverse
above). Both of the expressions below work:
Vector(3, 5, -50, 20, 1, 5).sortBy( n => -n )
res11: Vector[Int] = Vector(20, 5, 5, 3, 1, -50)

Vector(3, 5, -50, 20, 1, 5).sortBy( -_ )
res12: Vector[Int] = Vector(20, 5, 5, 3, 1, -50)
In the examples just above, we used each number’s opposite rather than the number itself as the basis for sorting.
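Another option worth knowing about is the sortWith method from the Scala collections API. It takes a two-parameter function that tells whether its first argument should come before the second in the result:

```scala
// Descending numerical order:
Vector(3, 5, -50, 20, 1, 5).sortWith( _ > _ )             // Vector(20, 5, 5, 3, 1, -50)

// Longest string first:
Vector("a", "bb", "abc").sortWith( _.length > _.length )  // Vector(abc, bb, a)
```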
Assignment: Star Maps (Part 4 of 4: Star Catalogues; B30)
Chapter 4.3 introduced the Stars app. In Chapter 4.5, we managed to display an entire sky of stars. We’ll now extend the program by drawing some constellations.
Task description
Study class
Constellation’s documentation and its incomplete code. Notice the following
in particular:
- The methods stars, minCoords, and maxCoords are missing.
- We construct a constellation from a vector of pairs.
- The variable stars has the type Set[Star]: a set of stars.
- A set is a collection of elements. Unlike a vector or a buffer, a set can never contain multiple copies of the same element.
- The elements of a set don’t have numerical indices.
- One way to form a set is to call toSet on an existing collection (e.g., myVector.toSet). Even if the original collection had multiple copies of the same elements, each element will appear only once in the resulting Set.
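To illustrate that last point, a quick sketch (the values here are made up for the example):

```scala
// Duplicates disappear when a collection is turned into a Set.
val readings = Vector(3, 1, 3, 2, 1)
val distinct = readings.toSet
println(distinct.size) // 3
println(distinct(3))   // true: applying a Set tests for membership
```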
Add the missing parts:
- Implement Constellation as specified.
- Implement placeConstellation on SkyPic.
- Edit SkyPic's create method so that it adds the given StarMap's constellations to the returned image (on top of the stars).
Recommended steps and other hints
Implement stars in class Constellation. Here are some tools that you may wish to use:
In the same class, write minCoords and maxCoords. Use the methods introduced in this chapter.
Run the program. Constellations don’t show up yet, but you should see the names of a few constellations when you mouse about in the window.
- The given GUI displays those texts whenever the mouse cursor is located in the rectangular area between the minCoords and maxCoords of a constellation.
- The program recognizes a handful of constellations, which are defined in the northern folder.
Implement SkyPic.placeConstellation.
You can do this by calling the place method of class Pic that you know from earlier assignments. Or if you prefer, you can use a version of place that takes in any number of pairs as illustrated here:
myBackground.place(pic1 -> pos1, pic2 -> pos2, pic3 -> pos3)
The line function for forming a Pic of a line previously came up in Chapter 2.8's optional material. It's easy to use; here's a simple example:
import o1._

val myLine = line(new Pos(0, 0), new Pos(150, 100), Red)
myLine: Pic = line-shape
For a neat implementation, try using foldLeft (Chapter 6.3) to compose the final image. Or just go with a for loop, if you prefer.
Edit SkyPic.create so that it adds the constellations.
The constellations should now show up when you run the app.
Submission form
A+ presents the exercise submission form here.
Creating a Map from Existing Data with toMap
It’s frequently useful to create a
Map from existing objects, selecting a particular
property of those objects as the key.
Introduction to an example
Let’s use this
Member class that we’ve already used in other chapters:
class Member(val id: Int, val name: String, val yearOfBirth: Int, val yearOfDeath: Option[Int]) {
  override def toString = this.name + "(" + this.yearOfBirth + "-" + this.yearOfDeath.getOrElse("") + ")"
}
Now suppose we have list of member data in a vector. In this example, we’ll just type in some test data in the REPL, but you can imagine the data being read from a file, for instance.
val memberVector = Vector(new Member(123, "Madonna", 1958, None),
                          new Member(321, "Elvis", 1935, Some(1977)),
                          new Member(555, "Michelangelo", 1475, Some(1564)))
memberVector: Vector[Member] = Vector(Madonna(1958-), Elvis(1935-1977), Michelangelo(1475-1564))
Given this vector, what would be an easy way to put this information in a
Map that lets
us look up individual members by their ID?
Preparations: pairs in a vector
The first thing we need is a way to form key–value pairs that consist of member IDs as
keys and
Member objects as values.
memberVector.map( member => member.id -> member )
res13: Vector[(Int, Member)] = Vector((123,Madonna(1958-)), (321,Elvis(1935-1977)), (555,Michelangelo(1475-1564)))
To reiterate: in the example above, the pair that’s defined with the
-> arrow is part
of the function that’s defined with the
=> arrow.
Making a Map from a vector
What we have now is a vector of key–value pairs. From here, it’s no trouble at all to
construct a
Map object. We just add a
toMap call:
val memberMap = memberVector.map( member => member.id -> member ).toMap
memberMap: Map[Int,Member] = Map(123 -> Madonna(1958-), 321 -> Elvis(1935-1977), 555 -> Michelangelo(1475-1564))

memberMap.get(321)
res14: Option[Member] = Some(Elvis(1935-1977))
toMap works nicely as long as the original collection consists of key–value pairs and
each key is unique (as the IDs are, above).
toMap creates an immutable
Map.
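One detail to keep in mind: if the keys are not unique, toMap does not fail. Instead, each later pair silently overwrites any earlier pair with the same key, as this small sketch shows:

```scala
// With duplicate keys, the last key–value pair "wins".
val pairs = Vector("a" -> 1, "b" -> 2, "a" -> 3)
val asMap = pairs.toMap
println(asMap("a"))  // 3
println(asMap.size)  // 2
```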
zip plus toMap
The
zip method (Chapter 8.4) often comes in handy when we construct
Maps. In the
toy example below, we construct a
Map from two existing vectors, one of which contains
what will be the keys (the names of animal species) and the other of which contains the
values (animal heights in centimeters).
val animals = Vector("llama", "alpaca", "vicuña")
animals: Vector[String] = Vector(llama, alpaca, vicuña)

val heights = Vector(180, 80, 60)
heights: Vector[Int] = Vector(180, 80, 60)
Let’s
zip up these separate vectors into a vector of pairs, and we’re ready to call
toMap:
animals.zip(heights).toMap
res15: Map[String,Int] = Map(llama -> 180, alpaca -> 80, vicuña -> 60)
We could also have used
zip in the members example, replacing this:
memberVector.map( member => member.id -> member )
with this:
memberVector.map( _.id ).zip(memberVector)
Creating a Map from Existing Data with groupBy
In the example you just saw, the keys of the
Map were IDs that were unique to each
Member. You can also choose to use a non-unique object property as a key. In other words,
you can group objects by a particular property of theirs.
This is where the groupBy method works its magic. Just like sortBy, maxBy, and minBy, this method takes in a function that it applies to each element; groupBy uses the function's return values to group the elements. Let's group our members by the century they were born in:
memberVector.groupBy( _.yearOfBirth / 100 )
res16: Map[Int,Vector[Member]] = Map(14 -> Vector(Michelangelo(1475-1564)), 19 -> Vector(Madonna(1958-), Elvis(1935-1977)))
Map’s type. We create a map that stores a
Vectorof members behind each key.
groupByand passing in a tiny function that determines a single member’s century of birth.
groupBycalls this on each member in the vector. It creates and returns a
Mapin which...
Like
toMap,
groupBy also generates an immutable
Map object.
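If you'd like a minimal example of groupBy before tackling the assignment, here is a sketch that groups numbers by whether they are even:

```scala
// groupBy builds a Map from each distinct return value of the function
// to a collection of the elements that produced that value.
val numbers = Vector(3, 5, -50, 20, 1, 5)
val byParity = numbers.groupBy( _ % 2 == 0 )
println(byParity(true))   // Vector(-50, 20)
println(byParity(false))  // Vector(3, 5, 1, 5)
```

Within each group, the elements keep their original order.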
Assignment: Multiple Demographics (C5)
In Chapter 7.4, you implemented a city simulator with two demographics, blue and red (example solution). Let’s edit the simulator to support more groups.
Task description
Not much is required of you: just implement the
residents method in class
Simulator.
You’ll find that it’s simple if you use
groupBy in combination with the application’s
CityMap class.
The given GUI automatically detects the added method and uses it when you run the app.
Submission form
A+ presents the exercise submission form here.
Example: Movie Statistics
Introduction
One fairly common use for a
Map is to count occurrences.
Let’s revisit theme of popular directors and use a
Map to count how many movies each
director has on a list of top-rated movies. In fact, let’s sort all the directors by
this criterion. (In Chapter 1.6, you used a given function for a similar purpose.)
The following example code is also available in the MovieStats project.
This simple class
Movie represents an individual entry in a top movies list:
class Movie(val name: String, val year: Int, val position: Int,
            val rating: Double, val directors: Vector[String]) {
  override def toString = this.name
}
We also have at our disposal a class
MovieListFile that is capable of reading and parsing
movie data stored in text files. For present purposes, all you need to know about it is how
to use it:
val movieFile = new MovieListFile("omdb_movies_2015.txt")
movieFile: MovieListFile = omdb_movies_2015.txt (contains: 250 movies)

val allMovies = movieFile.movies
allMovies: Vector[o1.movies.Movie] = Vector(Das Boot, Amadeus, Heat, The Secret in Their Eyes, ...
Counting occurrences with a Map
So, how to determine each director’s frequency in that long vector?
Before we count the occurrences of each director, it makes sense to list all the directors.
Since each movie can have multiple directors, just calling
allMovies.map( _.directors )
doesn’t give us the single list of directors we want (but a list of movie-specific lists
of directors). Let’s
flatMap instead:
val allDirectors = allMovies.flatMap( _.directors )
allDirectors: Vector[String] = Vector(Wolfgang Petersen, Milos Forman, Michael Mann, Juan José Campanella, ...
Let’s try to create a
Map where each key is a director name and each value is the
number of times that name occurs in the above vector. One way to reach this goal is:
- Find all the occurrences of every director by grouping the vector’s elements by director name. In other words: for each name, find all its occurrences and bundle them together.
- Count the size of each such bundle. Put the sizes in a Map and use the director names as keys.
That sounds more complicated than it is. See below for an example.
val groupedByDirector = allDirectors.groupBy( dir => dir )
groupedByDirector: Map[String,Vector[String]] = Map(Paul Thomas Anderson -> Vector(Paul Thomas Anderson), Peter Weir -> Vector(Peter Weir), Wim Wenders -> Vector(Wim Wenders), Wolfgang Petersen -> Vector(Wolfgang Petersen), Giuseppe Tornatore -> Vector(Giuseppe Tornatore), Robert Rossen -> Vector(Robert Rossen), Dean DeBlois -> Vector(Dean DeBlois), Charles Chaplin -> Vector(Charles Chaplin, Charles Chaplin, Charles Chaplin, Charles Chaplin, Charles Chaplin), Ridley Scott -> Vector(Ridley Scott, Ridley Scott, Ridley Scott), Sean Penn -> ...
The result is a Map where each String key (a director's name) stores a vector of more Strings: all the occurrences of that name in the original vector. To create the Map, we called groupBy to group the director names in the big vector, using the names themselves as the grouping criterion (as odd as that may sound). See below for a discussion.
Remember:
groupBy calls its parameter function on every string in the list of director
names and creates a new “group” for each different value that the function returns.
So, what we do is create the groups with a function that returns exactly the same string that it takes in. This gives us a group for each different string in the original vector.
We’re close our goal now. It’s easy to create the director counts from the
Map we got
with
groupBy; we just need a bit of help from
mapValues (Chapter 8.4). Let’s turn
each mini-vector of identical names into the size of that vector:
val countsByDirector = groupedByDirector.mapValues( _.size )
countsByDirector: Map[String,Int] = Map(...)
Here’s a cleaner printout:
countsByDirector.foreach(println)
...
Sorting the directors
Let’s wrap up this example by listing the directors in order, starting with the one that
has the most top-rated movies. In other words: let’s form a collection that contains all
the key–value pairs of our newly created
Map and sort it by the number in the pair.
We’re going to need a method that does the sorting. But first we should put the information
that we have in a collection where the elements have a meaningful order. You may recall
that the ordering of elements in a
Map is implementation-dependent and you can’t simply
sort a map. A vector, on the other hand, is good for sorting:
val directorCountPairs = countsByDirector.toVector
directorCountPairs: Vector[(String, Int)] = Vector(...)
Now we have, in
directorCountPairs, a reference to a vector that contains pairs. Each pair
has the members
_1 and
_2 (Chapter 8.4). Before we sort the directors, remind yourself
how we can work with such pairs; as an example, this code takes the first pair in the vector
and reports that director and the number associated with him:
directorCountPairs(0)._1
res17: String = Paul Thomas Anderson

directorCountPairs(0)._2
res18: Int = 1
Now let’s sort:
val directorsSorted = directorCountPairs.sortBy( pair => -pair._2 )
directorsSorted: Vector[(String, Int)] = Vector((Alfred Hitchcock,8), (Stanley Kubrick,8), (Martin Scorsese,7), (Christopher Nolan,7), (Quentin Tarantino,6), (Steven Spielberg,6), ...
We can shorten the above by using underscores in the function literal; the code just below does exactly the same thing. Admittedly, it looks like a deranged emoticon until you’re fluent in the different uses of the underscore in Scala.
val directorsSorted = directorCountPairs.sortBy( -_._2 )
directorsSorted: Vector[(String, Int)] = Vector((Alfred Hitchcock,8), (Stanley Kubrick,8), (Martin Scorsese,7), (Christopher Nolan,7), (Quentin Tarantino,6), (Steven Spielberg,6), ...
Let’s prettify the output a bit. The code below makes use of Chapter 8.4’s brackets trick for iterating over pairs in a for loop as well as Chapter 4.5’s dollar-sign trick for embedding values in strings.
for ((director, count) <- directorsSorted) { println(s"$director: $count movies") }
Alfred Hitchcock: 8 movies
Stanley Kubrick: 8 movies
Martin Scorsese: 7 movies
Christopher Nolan: 7 movies
Quentin Tarantino: 6 movies
Steven Spielberg: 6 movies
...
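To recap, the whole pipeline we built in this example can be written out in one go. The sketch below assumes directors is a vector of director-name strings like the one we grouped above; the sample data here is made up:

```scala
// Sketch of the full pipeline: group, count, sort, print.
// `directors` stands in for the vector of director names used above.
val directors = Vector("Alfred Hitchcock", "Stanley Kubrick", "Alfred Hitchcock")

val groupedByDirector = directors.groupBy( name => name )     // Map[String, Vector[String]]
val countsByDirector  = groupedByDirector.mapValues( _.size ) // Map[String, Int]
val directorsSorted   = countsByDirector.toVector.sortBy( -_._2 )

for ((director, count) <- directorsSorted) {
  println(s"$director: $count movies")
}
```

As a side note, groupBy( name => name ) can also be written as groupBy( identity ), since the grouping function simply returns its input unchanged.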
Assignment: Improvements to Election (Part 1 of 2; B45)
In Chapters 5.4, 6.2, and 6.3 you worked on the Election program. However, we left the project with several methods still unimplemented. You’ll get to implement them in this assignment and the one immediately below.
Task description
- Study (again) the Scaladocs in the Election project. Notice that the docs specify several methods for class District that you weren’t previously asked to implement.
- Rewrite topCandidate, which you already implemented once in Chapter 5.4. You should be able to come up with a much simpler implementation now.
- Implement the missing methods candidatesByParty, topCandidatesByParty, and votesByParty. Save the rest of the missing methods for Part 2 below.
Instructions and hints
- If you didn’t do the earlier Election assignments, do them now or use the example solutions. If you didn’t yet write the auxiliary method countVotes, as suggested in Chapter 5.4, do that now.
- In this assignment, you’ll be using immutable maps, which are always available in Scala without an import. Don’t import the mutable Map class from scala.collection.mutable.
- It’s probably easiest to implement the three methods in this order: first candidatesByParty, then topCandidatesByParty, and finally votesByParty. As you implement each method, see if you can build on the methods you implemented previously.
- Methods from this chapter will be useful. So will some other collection methods. Pick the right tools, and you won’t need to write a lot of code.
- Use ElectionTest to test your solution. There’s some useful code at the end of ElectionTest.scala, which you can uncomment and run.
Submission form
A+ presents the exercise submission form here.
Assignment: Improvements to Election (Part 2 of 2; C40)
Task description
Fill in the rest of the missing methods:
rankingsWithinParties
rankingOfParties
distributionFigures
electedCandidates
Instructions and hints
- It’s probably easiest to implement the methods in the order listed above. As you implement each method, see if you can build on the methods you implemented previously.
- Look at the examples in this and other chapters for inspiration. The movie director example may be particularly helpful.
- Construct a vector from a map (toVector) and a map from a vector (toMap) as needed.
- This assignment is one opportunity for defining private functions (Chapter 6.3) within a method. Try writing a def inside a def.
- For instance, in distributionFigures, perhaps you could write an auxiliary function that determines the distribution figure of a single candidate?
Submission form
A+ presents the exercise submission form here.
Summary of Key Points
- Many programs need to compare and sort the elements of a collection.
- Scala has a versatile API for finding maximal and minimal elements and sorting collections.
- You can group the elements of an existing collection in a Map. One use for such groupings is to determine the distribution of similar elements.
@shopify/app-bridge
You can use Shopify App Bridge to embed apps and channels directly into the Shopify admin, Shopify Mobile, and Shopify POS. Shopify App Bridge helps to reduce your development time by accessing native Shopify features across different platforms, and makes sure that the user experience is consistent wherever merchants are using your app.
Shopify App Bridge is currently being released as an update to all current consumers of the Embedded App SDK. Since it’s backwards compatible, you don't need to update your existing apps for them to support Shopify App Bridge.
You can install Shopify App Bridge by using Yarn:
yarn add @shopify/app-bridge
In the following example, you need to store shopOrigin during the authentication process and then retrieve it for the code to work properly. To learn more about this process, see Getting and storing the shop origin.
Import the library from the @shopify/app-bridge package and provide a configuration:
import createApp from '@shopify/app-bridge';

const app = createApp({
  apiKey: 'API key from Shopify Partner Dashboard',
  shopOrigin: shopOrigin,
});
Shopify App Bridge introduces the concept of actions. An action provides a way for applications and hosts to trigger events with a statically-typed payload.
To learn more about the supported actions, see the actions source folder.
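To make the idea concrete, here is a small, self-contained sketch of the dispatch/subscribe pattern that actions follow. This is an illustration only, not App Bridge's internals; the real action types live in the actions source folder mentioned above, and the TOAST_SHOW name here is made up:

```javascript
// Minimal sketch of an action bus: code dispatches typed actions with a
// payload, and subscribers registered for that type receive the payload.
function createActionBus() {
  const handlers = {};
  return {
    subscribe(type, callback) {
      (handlers[type] = handlers[type] || []).push(callback);
    },
    dispatch(action) {
      (handlers[action.type] || []).forEach((cb) => cb(action.payload));
    },
  };
}

// Usage with a hypothetical TOAST_SHOW action type.
const bus = createActionBus();
const seen = [];
bus.subscribe('TOAST_SHOW', (payload) => seen.push(payload.message));
bus.dispatch({ type: 'TOAST_SHOW', payload: { message: 'Saved' } });
// seen now holds ['Saved']
```

Because the payloads are plain typed objects, both the app and the host can validate them before reacting, which is the main benefit of the action pattern.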
Once the app is set up, you can access additional details and the state of the app at any time by making the async call app.getState():
app.getState().then((state: any) => {
  console.info('App State: %o', state);
});
You can also use a shorthand to retrieve nested properties by passing in a query when calling getState(query), for example:
app.getState('pos.user').then((user: any) => {
  console.log('POS User: %o', user);
});
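The query shorthand above can be illustrated in isolation. The helper below is a guess at the behavior (resolving a dotted path against a nested state object), not App Bridge's actual implementation; the state data is made up:

```javascript
// Sketch: resolve a dotted query like 'pos.user' against nested state.
// Missing intermediate keys resolve to undefined instead of throwing.
function queryState(state, query) {
  return query.split('.').reduce(
    (node, key) => (node == null ? undefined : node[key]),
    state
  );
}

const state = { pos: { user: { id: 7, name: 'Kai' } } };
const user = queryState(state, 'pos.user');       // { id: 7, name: 'Kai' }
const missing = queryState(state, 'pos.device.serial'); // undefined
```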
Shopify App Bridge can be extended with hooks, which run when actions are dispatched and updated. Hooks are middleware that can modify or cancel actions.
To learn more about hooks, see the hooks source code.
Shopify App Bridge is shipped with a development build which includes the validation middleware in order to provide helpful information during development.
If you're consuming App Bridge from a CDN or using the UMD build, simply include the development version: umd/index.development.js instead of umd/index.js.
When consumed as JS modules, the development or production build is dynamically selected based on the NODE_ENV variable. The development build is used when NODE_ENV=development.
Please note that the development build is not intended for production use, as the resulting file size is significantly larger due to the validator being present.
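The selection rule described above amounts to a one-liner. The helper below only illustrates that rule using the bundle filenames from this README; it is not code from the package:

```javascript
// Pick the development bundle only when NODE_ENV is exactly 'development';
// every other value (including undefined) falls back to the production build.
function selectBundle(nodeEnv) {
  return nodeEnv === 'development' ? 'umd/index.development.js' : 'umd/index.js';
}
```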
- yarn start - Start a development server.
- yarn build - Build the library, compiling the source TypeScript into JavaScript.
- yarn clean - Remove any artifacts produced by the build script.
- yarn lint - Run the source linter.
- yarn check - Run the TypeScript type checker.
- yarn test - Run the tests.
- yarn test:watch - Run the tests in watch mode and auto-rerun on changes.
- yarn test:coverage - Run the tests and generate a coverage report.
To enable automatic prettier formatting, copy or link the pre-commit script to .git/hooks/pre-commit and make sure that it's executable.
Please follow a standardized commit message format when contributing to this repo:
This is mainly to Mike Pilato, but I thought others might be
interested too:
Mike,
As we discussed, I'm working on efficiency issues in the new
[un]deltification scheme. In fs-test.c, I've tried several variations
in large_file_integrity(). The big data point is that if you set the
`filesize' to 102400, the retrievals of old revisions are quick, but
at 102401 (svndiff window size + 1), they get very *slow*, and
fibonaccically slower as the revisions get older. :-)
The test passes in both cases.
(As an aside, when I set filesize to ((3 * svndiff_window_size) + 1),
the test uses up an outrageous amount of memory and in fact hung my
machine. Yes, there is something nonlinear going on here.)
BUT, with it set to 102400, I've noticed a couple of weird things
going on:
1. rep_read_range() sometimes gets called with an `offset' parameter
of 102400. I don't yet know why that's happening; it shouldn't,
right? The highest offset in that file would be 102399.
2. rep_read_range() is also sometimes skipping the first window in a
delta because the window is "irrelevant" by virtue of
reconstructing text entirely before the requested range. This
might be related to the requests for offset 102400, since
obviously no window in the delta should be reconstructing
anything beyond that,
3. Oh wait, as I was typing this mail, I just figured out what's
happening with points 1 and 2 above. In get_file_digest() in
fs-test.c, we're requesting 100000 bytes at a time. So the first
request is fully satisfied, the next request is partially
satisfied, but then there's a third request due to the
conditional structure of the loop there, and that third request
will always start from one byte past the end of the file, and
return (of course) 0 bytes. Well, that explains that. Please
ignore all three points here.
Dang, I can't believe I wasted all that time figuring out something so
obvious.
So that leaves me with only the data point mentioned first, about how
efficiency drops dramatically the moment we add a new window.
By the way, Ben and I lost a lot of time today because newton finally
failed (hardware failure, it looks like). You know how it's been
going down daily recently, for no reason. Well, today it started
going down hourly :-). After it couldn't even *boot up* a fresh
install of the latest stable FreeBSD, we decided to junk it and make
galois (which had been sitting idle) the new newton etc. Everything
is back online now; hopefully we won't be dropping off the net every
night.
Will be looking more at the fs stuff over the next couple of days, we
should talk on the phone too, since I won't be physically in the
office tomorrow.
Hope you're feeling better!
-K
This book is loosely divided into four parts . Even though each chapter contains a distinct project, many chapters build on knowledge gained from previous chapters. So, really, the book is best read in the order in which the chapters were written. But don't worry. If you just have to skip ahead to Chapter 10, to find out how to write an InfoTip handler, you can do that, too. Any information you might need from previous chapters is referenced. Here's how the book breaks down:
Part I includes Chapter 1, Chapter 2, and Chapter 3. These chapters are designed to introduce you to the basics of the Windows shell and to provide a jumpstart on the Component Object Model and how it relates to writing in-process components for Visual Basic.
Part II, Shell Extensions, includes Chapter 4 through Chapter 10. In these seven chapters, we will write seven shell extensions. These chapters are centered around a mythical file type (called a .rad file), which contains simple data about animals. Although the focus of these projects centers around a make-believe file type, these chapters are designed to show you the most effective ways to build shell extensions for your own data.
In these chapters we will write the following projects:
Context menus are the menus that appear when you right-click on a file. Context menu handlers allow you to add your own items (and the code to carry out the commands) to this menu. Context menu handlers are discussed in Chapter 4.
These are similar to context menu handlers. Drag-and-drop handlers are actually context menus that are displayed when a file is dragged with the right mouse button. Although we won't create any, drag-and-drop handlers are discussed along with context menu handlers in Chapter 4.
Icon handlers allow files of the same type to have different icons. These icons could indicate the state of the data contained within the files or provide other additional feedback. Creating icon handlers is covered in Chapter 5.
The property page dialog appears when you select Properties from a file's context menu. A property page extension allows you to add property tabs to this dialog. This is a convenient way to allow users to access various elements of the data contained within the files without starting up an application. All of Microsoft Office's documents have property page extensions defined for them. Developing property sheet handlers is discussed in Chapter 6.
Drop handlers allow your files to become drop targets. This allows files of any type (that you wish) to be dropped on a specified file type. The drop handler can then perform any processing that you might need on the two filesperhaps a file comparison, for example. Creating drop handlers is covered in Chapter 7.
Data handlers allow you to modify what happens during a copy, cut, or paste operation for a given file type. Developing custom data handlers is discussed in Chapter 8.
Copy hook handlers allow you to permit or prevent a given file from being copied or renamed. Developing copy hook handlers is discussed in Chapter 9.
InfoTip handlers provide tool tips for your file. With this tool tip, you can convey information right from the shell about the file in question, such as who wrote it or when it was created. Creating InfoTip handlers is treated in Chapter 10.
Part III contains only Chapter 11, but it's a large chapter indeed. This chapter is designed to introduce you to the very prodigious world of namespace extensions. During the course of this chapter, we will build a simple namespace extension that displays a hierarchy of arbitrary objects. The project is a very simple one that's designed only for the purpose of introducing you to the concepts of namespaces and of developing namespace extensions. However, source code for two real-world examples, a registry namespace extension and a running object table (ROT) namespace extension, is also provided. By installing the registry namespace extension, for instance, you can view and navigate your system registry just as if it were part of your filesystem.
Part IV includes Chapter 12 through Chapter 14. These chapters are concerned with extending the functionality of Internet Explorer, which, as of shell version 4.71 (that is, Internet Explorer 4.0), was basically integrated with the shell. But don't let that fool you. Even though the focus of these chapters is developing for the Internet, these components can be written for Explorer, too. In these three chapters, we will build the following four projects:
Browser helper objects (or BHOs) are components that sit between the Internet and the browser. With them, you have access to every HTML tag on every page that you surf. Also, you can capture any event supported by the browser. This means you can use BHOs to perform a wide variety of Internet-related tasks. Developing BHOs is the focus of the first half of Chapter 12.
Browser extensions are similar to browser helper objects, but they are only available for Internet Explorer 5.0. Unlike BHOs, browser extensions can have an associated menu item and toolbar button that is available from Internet Explorer. Browser extensions are covered in the second half of Chapter 12.
The Search, History, and Favorites windows that are part of Internet Explorer are actually band objects. Remember the ill-fated Channel window in Windows 98? That was a band object, too. If you have an Internet-related application that needs a user interface, a band object is the way to go. Band objects are discussed in Chapter 13.
Docking windows are basically toolbars that you can add to Explorer's client area. These can be docked on all four borders of Explorer's client area and can provide a means for components like namespace extensions, browser extensions, and band objects to provide additional user interface capability. Developing docking windows is covered in Chapter 14.
In Part II (Shell Extensions) through Part IV, each chapter follows a similar format, which consists of the following four major sections:
This section provides critical background information on the particular shell extension but also can be useful when debugging an application that doesn't behave quite as expected. It covers such topics as how the shell knows that the shell extension exists in the first place and how the shell and the extension communicate with one another.
The operation of shell extensions is based on the Windows shell loading the extension and calling methods of its interfaces. This section documents those interfaces and their methods. You can read this section to gain an understanding of what interfaces and which methods are used in a particular kind of shell extension. You can also return to this section when you need a reference source when developing your own shell extensions.
In this section, we develop our example shell extension. Here you can see how a shell extension might be implemented, as well as look at some working Visual Basic code.
This final discussion covers the process of installing and registering the shell extension on a particular system. The central part of this section is typically a .reg file, which registers the extension in the system registry. In most cases, the file need only be slightly modified to work with any shell extension that you might build.
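As a taste of what those registration sections look like, a context menu handler for the book's .rad files would typically be wired up with registry entries of roughly the following shape. The ProgID RadFile, the handler name RadMenu, and the all-zero CLSID below are placeholders for illustration, not values taken from the book:

```
REGEDIT4

[HKEY_CLASSES_ROOT\.rad]
@="RadFile"

[HKEY_CLASSES_ROOT\RadFile\shellex\ContextMenuHandlers\RadMenu]
@="{00000000-0000-0000-0000-000000000000}"
```

The .reg file only tells the shell which component (by CLSID) handles the file type; registering the COM component itself is a separate step, normally done with regsvr32.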
02-08-2019
Hi all,
I have a question: what do you guys do when you detect an IP/user agent/old browser etc. that delivers like 10,000 visits per month with no conversions?
We sometimes have that: no campaign code, the visits might just bounce, the traffic looks non-human, but it is not filtered out automatically.
We currently have a segment to filter out "bad traffic". However, the segment gets larger and larger, so I am looking for alternatives.
Regards
Stefanie
Stefanie, just out of curiosity, have you tried to lookup the IPs that generate the most bad traffic to get insights about networks, namespaces, etc.? Sometimes, the traffic can be originated not only by "bad" bots, but also by QA bots your IT may use on purpose to monitor website availability and vulnerability. In my experience, the auto bot filters detection rate for this type of traffic is low. The analysis by IP helps quickly discover that.
04-08-2019
Hi Stefanie,
Bot detection is really a pain, especially for retail sites. Adobe should come up with a better out-of-the-box or engineering solution; the current bot-removal process (around since 2011-12) just doesn't seem to work against smart bots. None of the solutions mentioned in the various threads seems to be foolproof.
Thanks!
Asheesh
Just a word about "Virtual Report Suites" (VRS): we changed access and give the business only filtered data in the VRS. This means that unless someone is an analyst, they only get access to filtered data. In my opinion there are at least 2 advantages:
1) Filter bot traffic: any VRS can have more than one segment filtering bad traffic. Whenever we detect something, we just update our bot segment (or add a new one). What happens is that every Workspace project (and scheduled PDF) depending on that VRS is immediately updated! No internal communication needed, and no need to update projects one by one.
2) Component curation: in a VRS you can select which dimensions (props/eVars) and events should be visible. This makes it easy for new users to start working with project templates, since they don't have a bunch of items in the left rail.
(Almost) no advantages without a drawback: it takes some time to set up (and maintain), but I believe it's worth it in the long run.
Hi,
in the meantime I have found ideas relating to the topic:
I noticed that some people create a virtual report suite. I'm just wondering if this is practical or has disadvantages. Therefore it would be interesting to hear whether people generally filter out very little, or what they do instead. Our goal is to have a clear view of the conversion-rate trend.
Historically grown, our filter contains about 20 old browsers, 10 user agents, 9 uncommon monitor resolutions and 7 orig. referring domains. Most of them cause a relatively small amount of traffic i.e. 2000 visits per month. IPs might be added to the segment in the future.
@Andrey: the IT testing system is filtered out, but thanks for your suggestion because we haven't checked networks, namespaces etc. 🙂
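For anyone who wants to sanity-check raw hit logs outside Analytics, the kind of exclusion lists discussed in this thread (IPs, user agents, referrers) can be sketched in a few lines of Python. Every IP, user-agent substring, and referrer below is a made-up placeholder, not a real bot signature:

```python
# Sketch: classify raw hits as "bad traffic" against simple exclusion
# lists, similar in spirit to a manually maintained bot segment.
BAD_IP_PREFIXES = ("203.0.113.", "198.51.100.")   # placeholder ranges
BAD_UA_SUBSTRINGS = ("headlesschrome", "bot")     # placeholder patterns
BAD_REFERRERS = {"spam-referrer.example"}         # placeholder domain

def is_bad_hit(hit):
    """Return True if a hit matches any exclusion list."""
    if hit["ip"].startswith(BAD_IP_PREFIXES):
        return True
    ua = hit["user_agent"].lower()
    if any(s in ua for s in BAD_UA_SUBSTRINGS):
        return True
    return hit.get("referrer") in BAD_REFERRERS

hits = [
    {"ip": "203.0.113.9", "user_agent": "Mozilla/5.0", "referrer": None},
    {"ip": "192.0.2.10", "user_agent": "Mozilla/5.0", "referrer": None},
]
clean = [h for h in hits if not is_bad_hit(h)]
# clean keeps only the 192.0.2.10 hit
```

Running something like this over exported hit data by IP can quickly surface the networks and namespaces Andrey mentions before you commit anything to a segment.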