text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
On Oct 6, 2005, at 2:29 AM, James Allwyn wrote: Advertising.I've tried to implement this zopeishly, and the stumbling block I've hit seems to be specifying the key and value_type pairs in the dict. So, for example: [snip]. Hey James,What you probably want to explore is the "Object" field. This lets you define a "sub-schema", as it were, and use that in your "main" schema. An example (untested)... maybe in yourproject/types/interfaces.py: from zope.interface import Interface from zope.schema import List, Text, TextLine, Int, Choice from zope.schema import Object from zope.i18nmessageid import MessageIDFactory _ = MessageIDFactory('yourproj') # sub-schema class IContactData(Interface): show = Bool(title = _(u"Show Data?")) type = TextLine(title = _(u"Data Type")) value = TextLine(title = _(u"Data Value")) # main schema class IUser(Interface) name_first = TextLine( title = _(u"First Name") ) name_last = TextLine( title = _(u"Last Name") ) contact_info = List( title = _(u"Contact Info"), value_type = Object( schema=IContactData, ), ) Then (maybe in user.py, ): ... class ContactData(object): implements(IPageSection) # and whatever else you might want to # put in here class User(PortalContent): implements(IUser) # and whatever else you might want to # put in here ...Then you're going to have to do some extra work with a custom widget (a good place would be ./browser/userview.py): from zope.app.form.browser import ObjectWidget from zope.app.form.browser.editview import EditView from zope.app.form import CustomWidgetFactory from yourproject.types import ContactData from yourproject.types.interfaces import IUser parts_w = CustomWidgetFactory(ObjectWidget, ContactData) class ContactDataEditView(EditView): """ View for editing a page section. """ __used_for__ = IUser parts_widget = parts_w In your browser/configure.zcml, your going to need something like this: <browser:editformI may be forgetting something here... but this should certainly give you a head start. There's a bunch of very good information on this in these files: zope/app/form/browser/widgets.txt zope/app/form/browser/objectwidget.txtHmmm, looking through these again, these are really good examples. Pay more attention to them than to what I wrote above ;-) By the way, (and irrespective of the structure of the data) this data you are adding seems to me a good candidate for annotations. You might want to explore that...By the way, (and irrespective of the structure of the data) this data you are adding seems to me a good candidate for annotations. You might want to explore that... Happy trails! d _______________________________________________ Zope3-users mailing list Zope3-users@zope.org
https://www.mail-archive.com/zope3-users@zope.org/msg00771.html
CC-MAIN-2017-04
refinedweb
415
51.85
Joy, frustration, excitement, madness, aha's, headaches, ... codito ergo sum! As you may know, last week I released a version (still in beta) of the SmartPart which supports the ASP.NET AJAX extensions. For more information about this version of the SmartPart, see my previous blog post. This weekend I've been working on functionality to support "AJAX connection" in the SmartPart. First of all; connectable web parts are web parts that can exchange data. Think for example about a web part which displays a list of invoices, and another one that displays a list of invoice lines. If you select an invoice in the first one, the second one can display the lines of the selected invoice. So with connectable web parts you can create those typical parent/child and parent/details relationships. You can create connectable web parts in SharePoint 2003 and in ASP.NET 2.0/SharePoint 2007, so it's not a new functionality. So why are the connections in the new version of the SmartPart (Return of SmartPart v1.2 BETA) so special? Well if you combine web part connections with a little bit of AJAX-magic you have web parts that can exchange data without postbacks! Think about selecting the invoice and displaying the corresponding invoice lines in another web part, all without postbacks. Actually you could do that trick without this new version, but I've added some helper and wrapper classes to make your life as a web part developer a little bit easier! Let's start with a really basic example: the DemoProvider and the DemoConsumer user controls. The first one has an UpdatePanel (the ASP.NET AJAX control handling the partial-page rendering), a TextBox and a Button. The second one just has an UpdatePanel and a TextBox. The scenario should be like this: the user enters some text in the first text box, clicks the button and the text is transferred to the second text box, all without postbacks. public partial class DemoProvider : System.Web.UI.UserControl, SmartPart.IConnectionProviderControl{ protected void Page_Load(object sender, EventArgs e) { } public string ProviderMenuLabel { get { return "Send test data to" } } public object GetProviderData() { return new SmartPart.AJAXConnectionData( TextBox1.Text, Button1, "Click"); } } The code above is the code for the DemoProvider user control, notice that the class implements the IConnectionProviderControl interface of the SmartPart which is also used for normal connections. The special thing happens on the GetProviderData method; a new instance of the AJAXConnectioNData class is created. This object contains first of all the value that should be send to the consumer (TextBox1.Text). The second parameter is the control that will cause the UpdatePanel to refresh, the third parameter is the name of the control's event which will cause the refresh. (This constructor can only have one control as a trigger control, but the class can hold more than one.) The ProviderMenuLabel property returns the value which will be displayed in the web UI of SharePoint when the connection is made. That's it! Now let's take a look at the code for the DemoConsumer user control: public partial class DemoConsumer : System.Web.UI.UserControl, SmartPart.IConnectionConsumerControl{ protected void Page_Load(object sender, EventArgs e) { } public string ConsumerMenuLabel { get { return "Receives test data from" } } public void SetConsumerData(object data) { SmartPart.AJAXConnectionData ajaxData = data as SmartPart.AJAXConnectionData; if (ajaxData != null) { ajaxData.RegisterTriggerControls(UpdatePanel1); if (ajaxData.Data != null) TextBox1.Text = ajaxData.Data.ToString(); } } } The class implements the normal IConnectionConsumer control of the SmartPart and once again the special thing happens in the SetConsumerDate method. This method receives the instance of the AJAXConnectionData class which was constructed by the provider. First the code checks if the data received is an AJAXConnectionData instance, if so the RegisterTriggerControls method is called with the UpdatePanel to use as a parameter. This method will add every control of the AJAXConnectionData instance as a trigger control for the UpdatePanel, so the two UpdatePanels of the two user controls can trigger eachother. You could do this manually as well, but I provided this functionality since it will be the same for in like 99% of all the cases. Finally the Data property is used to fill the TextBox's Text property. Done! I could show the result with some screenshots, but you have to see them in action to get the idea. That's why I've recorded a small screencast of the two user controls. The second part of the screencast shows some other controls which get data from the AdventureWorks database. I already put them on a site, so you can see the Categories web part connected to the Subcategories web part, which is connected to the Products web part. You can download the source code for these demos from CodePlex. Downloads: PingBack from Do you know why my user controls say PartialCachingControl I need to have a custom toolpart to show some values from the database in a dropdown. Can you give an example of how to do this with the return of smartpart, I have been searching for sometime and could not find any example that creates a custom toolpart and setting the custom attribute to the custom toolpart value using smartpart. It would be really helpful if i could get an example of doing this, since we are planning to use the smartpart and do our webpart using user controls. I have the same problem as Deepa. I need to custom toopart to show values from the database by dropdown. Besides, I can't use the DemoProvider, DemoConsumer When I plug them on my sharepoint, it will be an error as following: "An unexpected error has occurred. Web Parts Maintenance Page: If you have permission, you can use this page to temporarily close Web Parts or remove personal settings. For more information, contact your site administrator. " What's wrong ?? Coulde you please give me a advise? Thanks I am using your SmartPart assembly to integrate custom asp.net user controls with AJAX functionality on to my sharepoint site. I got the following error. Could you please give me suggestion or whats wrong in it. Error: unable to load ~\/UserControls\AdvancedSearchPanel.ascx Details: c:\Inetpub\wwwroot\wss\VirtualDirectories\81\usercontrols\AdvancedSearchPanel.ascx.cs(94): error CS0234: The type or namespace name 'AJAXConnectionData' does not exist in the namespace 'SmartPart' (are you missing an assembly reference?) Aravind. Please provide some instructions for those of us who just don't get it. How do I get a usercontrol which accesses a SQL database to work in smartpart. In VS2005 my control works. In MOSS 2007 Smartpart correctly runs your test control. When I add my data access control control.ascx and contol.ascx.vb to my USerControl\folder I can add it to smartpart, but then Sharepoint errors. I can create a custom control with no connection to my database and that works correctly. As soon as I add a database connection - its a no go. What am I missing. Same problem as Fred once I make a connection to the data base it bombs I have added the connection string to the web config in the wss virdir but it still doesnt see it or something is there another place I need to add the key for the connection string? anybody got any Ideas? How about adding support for browsable Enum usercontrol properties? When I statically add the AJAXSmartPart to a page layout using the following code: <%@ Register TagPrefix="sptest" Namespace="SmartPart" Assembly="ReturnOfSmartPart, Version=1.1.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5" %> <sptest:AJAXSmartPart I get the following error message: "The control with ID 'UpdatePanel1' requires a ScriptManager on the page. The ScriptManager must appear before any controls that need it." So it looks like AJAXSmartPart forgets to add a ScriptManager when it is added statically. When I manually add a ScriptManager to the page layout code it works fine. Also when I add a second AJAXSmartPart using the normal approach (i.e. add it as a webpart to a WebPartZone), that second one seems to add the ScriptManager and they will both work. Looks like a bug, doesn't it? How can I tell if the webpart is in "edit mode"? The SmartPart interface doesn't expose the internal WebPartManager object, so I cannot query it from the user control. Hi Jan, I'm still using the old SonOfSmartpart and have just migrated our Sharepoint 2003 content to MOSS2007. During the migration tests I came up with a security issue. It seems that every user who wants to configure the smartparts user control needs NTFS access rights on the usercontrols folder on the server. Is that correct? Is there a workaround for that, e.g. using an IIS-Guest account? I have only integrated Windows authentication enabled and don't want anonymous access. Thanks René Question for you: Does the smart part set EnablePageMethdod=true on the ScriptManager? When trying to use the AjaxControlToolkit with a pageMethod it does not work. Thanks, David I have the same question as David. I am trying to use the AutoComplete from the ajaxtoolkit to call an external web service. That was not working (access denied errors. - by the way if you have any comments on how to call a web service on a different server - that would be helpful.) But for the pageMethod, how can you set the EnablePageMethod = true? I'm trying to get this simple provider and consumer demo provided to work on a webpage before testing it out on SharePoint. I am not getting any events trigger when i click send data. (In other words, Consumer textbox is not gettign any events). This is what i'm doing. In default.aspx, i include both the consumer and provider controls. It's as simple as that. Do i need to subscribe to any events? I have no problem compiling it. Thanks. First, let me say congradulations on your being a new dad! Now, I'm using your newest SmartPart version (the beta) and what I'm trying to do is add your smartpart to a web part page during the FeatureActivation event of a feature. Here's what I have so far and I'm not getting any errors, but my Users control is not getting loaded. Any ideas? SPFile page = site.GetFile( "sem/SystemAdmin.aspx" ); SPLimitedWebPartManager mgr = page.GetLimitedWebPartManager( PersonalizationScope.Shared ); SmartPart.SmartPart smart = new SmartPart.SmartPart(); smart.UserControl = "My.Assembly,My.Assembly.Users"; mgr.AddWebPart( smart, "centerWebPartZone", 0 ); I am using the latest version of smartpart with MOSS 2007. It works well Jan !! Thanks. I can connect to database only when i set the impersonate=false in the web.config of wss virtual folder. By default its true for moss 2007 and when its true, the sqlconnection.Open fails !!! Any idea? Please help.. I am burning my head for past several days on this guys.. Can you please suggest me, how to implement cross page connections using the smartpart. Currently I am using: Return of the SmartPart Version 1.2.0.0 Looking at the methods SetConsumerData and GetProviderData, I am not sure you can really achieve this. Any help would be greatly appreciated. Thanks, Is it possible to get teh current logon user to personalize the user control. This is a great tool to develop custom webparts. Although, I am having problems with connecting the web parts. I get some error and Sharepoint asks me if I want to go to the web parts maintenance page to disable the web part. I have two web parts, one being the provider and the other a consumer. When I add the provider, it all works fine and data is shown. I can also add the consumer and it shows up on the page. However, when I go to connect them, I get the error. I am using the 1.2 beta version. I also tried creating a simple webpart as a consumer with just a text box and no extra code other than the one the methods that need to be implemented for the consumer web part. Please let me know if this is a known issue, or if anyone has any solution for this. Thanks, Jan, thanks for this very useful utiliy - I really fills a spot where MS needs to improve. I would like to use it in a project, but without the sourcecode I will not be able (allowed) to. Is there anyway we could get a hold to your sources (URL?) ? Erich I can not see the "Connections" menu item on the Smartpart. Has anyone faced this issue? Thanks in advance Kiran Hi I have a basic question. Can we create cross-page connections using smartpart? I have one smart(having user control) part which is a producer on PageNo1. Secondly, I have another smartpart(having user control) on PageNo.2 which is a consumer for smart part on PageNo1. Now, the problem is that I am finding "Connections" link coming disabled showing tooltip something as "Connection type not compatiable with other parts of the web page." Now the catch is it works prefectly fine when both producer and consumer are on the same page, but does not work if put on different pages. My second big concern is that on "Submit" click, I want to move to next page and then show the consumer data fetched from producer on PageNo1. Please Suggest. It is too critical for my application. Hi, I'm trying to use a Web User Control created in IronPython in a SmartPart, and get an "'IronPython' is not a supported language" error. The same control works when called directly. Thanks a lot for any ideas what the cause could be. Is there a way to get this to work in VB? I've tried and get: error BC30154: Class 'VBConsumer' must implement 'ReadOnly Property ConsumerMenuLabel() As String' for interface 'SmartPart.IConnectionConsumerControl'. Implementing property must have matching This works beautifully in C# on my WSS 3.0 website (SmartPart 1.2 beta). Thanks! Okay -- I got some basic AJAX Connection with SmartPart 1.2.0 beta here for VB. Here's the VB Code: ---------------------Provider page: ------------------------ SmartPart Partial Public Class VBProvider Inherits System.Web.UI.UserControl Implements SmartPart.IConnectionProviderControl Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) End Sub ReadOnly Property IConnectionProviderControl_ProviderMenuLabel() As String Implements SmartPart.IConnectionProviderControl.ProviderMenuLabel Get Return "Send data to" End Get End Property Function IConnectionProviderControl_GetProviderData() As Object Implements SmartPart.IConnectionProviderControl.GetProviderData ' Build a new AJAXConnectionData object and return it. ' Parameters: ' - value to send ' - control which triggers the connection ' - event of the control which will trigger the connection Return New SmartPart.AJAXConnectionData(TextBox1.Text, Button1, "Click") End Function End Class ----------------------Consumer page:------------------------- Partial Class VBConsumer Implements SmartPart.IConnectionConsumerControl ReadOnly Property IConnectionConsumerControl_ConsumerMenuLabel() As String Implements SmartPart.IConnectionConsumerControl.ConsumerMenuLabel Return "Receives test data from" Sub IConnectionConsumerControl_SetConsumerData(ByVal data As Object) Implements SmartPart.IConnectionConsumerControl.SetConsumerData Dim ajaxData As SmartPart.AJAXConnectionData = data '= data as SmartPart.AJAXConnectionData ' Check if the received data was of the correct type If Not ajaxData Is Nothing Then ' Register the trigger controls, so the UpdatePanel get's updated async. ajaxData.RegisterTriggerControls(UpdatePanel1) ' Check if there was any data received, if so: set the value. If Not ajaxData.Data Is Nothing Then TextBox1.Text = ajaxData.Data.ToString() End If End If ----------------------------------------------------- Okay, hope it helps somebody. Now I'm off to see if I can plug in some SQL db... I can't for the life of me get a DropDown control to work as the provider for a web part. The control will render until I try to "connect" it to a Consumer part. The code works great when I use it in a GridView but not with a DropDown. Any Ideas anyone? Bill public partial class DemoProvider : System.Web.UI.UserControl, SmartPart.IConnectionProviderControl { protected void Page_Load(object sender, EventArgs e) { } #region public string ProviderMenuLabel { get { return "Send ID data to" } } public object GetProviderData() return new SmartPart.AJAXConnectionData( DropDownList1.SelectedValue, Button1, "Click"); #endregion } Hello Jan, With an installation of the .Net Framework 3.5 on a WSS3.0 Server, given that the framework contains ASP.NET AJAX, do the configuration steps of Ajax on the server change? I follow this article step by step but it doesn't work, help me please Hi Jan Things like calendars and collapsing panels are working fine in may Smart Parts. Connectivity between parts without Ajax is too. But when I drop in your demo provider and demo consumer, configured so that the provider sends data to the consumer, nothing happens... Similarly a simple control with two text boxes and a button where the button click sends text box 1's text to text box 2 works fine in an aspx page, but not in SharePoint. Any clues? Thanx! Using AJAX in your SharePoint Web Parts code samples/slides finally up! I have created the DemoProvider and DemoConsumer smartparts and added them to my sharepoint site. But nothing happens when i click the button in the provider. I also tried adding these user controls to a web page and debugging it. Again, nothing happens when i click the button. Please help. Isis talk.to.isis@gmail.com I think the button was not causing any action cuz I had not connected the 2 smartparts in sharepoint. However, if i connect them i go to an error page that says "An unexpected error has occured". Any idea, why?
http://weblogs.asp.net/jan/archive/2007/02/26/new-version-of-smartpart-now-with-ajax-connections.aspx
crawl-001
refinedweb
2,900
57.47
. CLICK ON EACH BLUE TAB TO OPEN FOR FURTHER INFORMATION. It is important to note that most of these FAQs assume that the traveler has a tax home and is working as an employee (not self employed/1099). We have tried to put in as many variables as possible, but still not make things too confusing. They stem from years of research on Joe's part. Most recommendations and tax law interpretations are based on personal audit experience and actual court cases, which are available at request. The non-traveling issues we have kept under "FAQ Basic Taxes." Read and learn, but we are also available to chat on the phone, or the "Ask Us a Question" link on the left allows you to e-mail us without opening an e-mail browser.. It does not matter if you drove 90 miles there an back, by returning home you are proving that it is within commuting distance of your home = not a travel assignment. TRIVIA:- Many companies and agencies have utilized this arbitrary number as an internal policy rule for so long that everyone assumes that it is an IRS regulation. Believe it or not, your individual agency shifts worked are considered "indefinite employment," because it is employment without a contracted end date. No end date means it cannot be considered temporary. All indefinite employment is treated as a permanent job. Think about it, most “average” employee jobs are indefinite, a two week notice is all that is ever expected. This affects the health care industry in relation to day to day employment positions. Even though the work agreement extends only for a shift, the employer-employee relationship continues just like any other permanent position. It is important to note here that there is still an opportunity to take advantage of mileage deductions, meals, or overnight costs with these jobs if certain variables are met. It may be best to put a short term agency/PRN commitment in writing. Feel free to call us to go over these details. Understandably, this is a big fear of travelers. As a traveler there is a greater chance of an audit, but that is no reason to not travel. Audits come about for various reasons: If you have a solid tax home, kept good paperwork, and used TravelTax to do your return (we don't charge for audits of returns we prepared) you should not wind up with any financial damage. Just hassle. See FAQ Basic Taxes for more information on audits. YOU MUST KEEP A RECORD OF YOUR MILES! We cannot stress how important this is. Transportation reimbursements ARE NOT per diems. They must be substantiated (proven) with some sort of documentation, showing: DATE, WHERE, WHY, & MILES DRIVEN. Without a log, you can claim nothing! In the "old" days, people kept a physical log, on paper. Nowadays, you can use things like apps that record mileage, or even going to a web based map site where you can enter the addresses and print out the page. For most travelers, they would only need to do this for the trip to the assignment and then one trip to the hospital. That takes care of the WHERE and MILES. The DATES and WHY can be proven with copies of your contracts. Also, you must also keep track of annual miles on the vehicle. (Either by writing down you odometer readings on Jan 1st and Dec. 31st, or estimating as best as you can with copies of service receipts close to these dates.) The standard mileage deduction is a good deal. It is calculated to take into account gas, insurance, wear and tear. If you have a good car, you can make out like a bandit on the deal. However, if you car is a Hummer, well... In the event that you decide to utilize actual expenses, you must also realize that more rigorous record keeping is required. All receipts need to be saved, including gas. The mileage log is still required to determine what percentage is personal vs. business. And the auto needs to be depreciated, those records need to be kept. Someday, when you finally sell or trade in your car, you will have capital gains to pay taxes on. Utilizing the standard mileage deduction is not an option when renting a car. (The Standard Mileage Deduction has built into it the costs of repairs and maintenance.) On a rental, you must still keep a log of personal vs. work miles, but also save all gas receipts. (They will have to be apportioned.) Save the receipts for the cost of the rental also, because this is fully deductible. When your return is filed, any reimbursements you received will offset your deductions. This one causes some confusion so we will cover it here. Business deductions in the travel industry include phone calls and internet searches related to your next contract, seeking temporary housing, researching new employers, talking with recruiters, and time spent on work related internet forums. Phones: The cost of the main land line is not deductible. In the event that you do not have a land line and only have one cell phone, there are portions that can be deducted, so keep track of your itemized bills or contact agreement. If you are claiming a portion of your cell phone bills, you need to calculate percent usage by printing out about 3 months of calls that are representative of your normal cell phone use and total the minutes of your business calls. Divide this number by the total minutes used that month and you get a business use percentage. Average the 3 monthly percentages to get an applicable percent business usage of you phone. A portion of your computer can be depreciated and a percent of internet usage is also deductible. This is more of a good faith percentage. To be brutally honest, it is a lot of work for not a big difference on the tax return, and many travelers skip this deduction. There is one touchy situation when both partners are travelers in regards to housing, and we are posting this table here to give you a heads up. The whys would take too long to explain here. If you are burning with the need to know why, you can give us a call. Someone who does not have a main place of business or post of duty, AND does not maintain a primary residence. This person’s tax home is wherever they work. As an itinerant (aka transient), they cannot claim a travel expense deduction or receive tax free reimbursements because they are not considered to be traveling away from home. It does not matter if the job is permanent or temporary. They can occasionally claim moving expenses when they meet the requirements. See TAX HOME - Three requirements to determine if you have a tax home. When most travelers decide to ditch the tax home (and go itinerant), they usually do not realize what it entails. Not only do all monetary reimbursements get taxed (meals, stipends, travel) but most of the travel deductions are lost also. BUT THE BIGGEST SHOCKER is that the value of the non-cash benefits also gets taxed. i.e. whatever the company pays for your housing, gets passed on to you as income. On an annual basis, that could mean up to $24,000 in income that you never see and as much as $6,000 in taxes to be paid. Travelling with no tax home can be liberating in regards to the freedom to go anywhere you want, and stay as long as you like. Record keeping becomes minimal, and you no longer have to worry about returning home for 30 days a year (and losing paychecks for that time period). Also, there are many situations that it can cost you more to maintain a tax home than paying the taxes. This is why we as a company try to make sure that becoming an itinerant worker is an informed decision. Feel free to call us to talk this through. Meal receipts are not necessary, the per diem for meals is very generous and anything greater than that amount on a consistent basis would be considered excessive. So throw all those little pieces of paper away. The only thing you need to document is that you were away from home by keeping either your contracts, or a log of dates away from home (in the event that you travel back and forth often). Answer #1: If you are a traveler with a tax home you are not moving, you are technically "away from home!" And since you have a residence at home, you are now dealing with the costs of maintaining a second residence. Using the word "moving' would get you into dangerous territory with an auditor. If you were moving, you tax home would also going with you. You don't want that! Now, since you are being mobilized to a temporary assignmnet, you will utilize a per diem for travel days and mileage log for miles driven to the temporary location. Everything else must have receipts i.e. hotel rooms, airfare, trailer rental, etc. Answer #2l: If you do not have a tax home, you can take advantage of actual moving expenses once a year if you stay in the same place at least 9 months and meet the distance requirements. This type individual needs to save their receipts also, but they get to also deduct utility hookups and 30 days of storage facilities. Trivia: temporary relocation expenses and household moving deductions belong on 2 different areas of the tax return. The website for the GSA rates is: It is a fairly self explanatory site that allows you to look up all current rates in every city. There is also an app you can download: These are maximum rates that can be given to an employee without an exchange of receipts. They cover lodging and meals for days that an employee is away from home on the business of the employer (also called CONUS and OCONUS rates). If the company gives less than the maximum meal rate, the additional amount can then be deducted on the employee's tax return. This is not the case with the housing per diem, which would need to be substantiated with receipts to prove the additional deduction. The rates are set by the government for every area of the world and are broken down by counties in the US. The rates can be found in IRS Publication 1542 or various online sites and are set annually. As long as the allowance does not exceed the per diem rate maximum and the company has a reasonable belief that the employee would deduct these expenses without reimbursements, no receipts are required to be exchanged. We all have to live somewhere, and we all have to buy our food. These things are not tax deductions (it is the mortgage interest that is deductible, not the principal). Most people have a residence in one location and pay for that residence 365 days out of the year. When their job requires them to be temporarily out of town they are also stuck paying for a second temporary residence. Be it a short term apartment or one night in a hotel, this second home is essentially a duplicated home expense incurred to earn income. To relieve this burden, the IRS allows you to deduct these expenses, or the employer can reimburse for these expenses on a tax free basis. (And so you see the beginning of large-amount-of-income-in-reimbursements cycle beginning.) BEWARE: The IRS requires the expenses for this home to be substantial (in the case of rent = fair market value) and it must be real. Also, renting out your residence to someone else may potentially disqualify it from being a tax home. Feel free to call and tell us about your situation. Technically a stipend is any lump sum of money given for a specific purpose (an allowance). In the temporary staffing industry the term is often used as an amount of money given for housing expenses and its use is usually interchanged with the term per diem. The main thing to realize is that you need to clarify what the stipend is for. The determination of whether or not it is taxable is based on the How and Whys involved. If given as a housing reimbursement, it is only tax free if the person qualifies by and having a tax home. Because technically that is what they are; confusion arises in that they are given in advance of expenses instead of afterward.-set, so there is no need to see a receipt and it can be given ahead of time, but it is still considered a reimbursement. BEWARE: Whether or not it can be accepted as tax free depends on your tax home status! Per diems are a “both or nothing” kind of thing. When a company pays a per diem for housing and even states that it is for housing and not meals, as far as the IRS is concerned, that payment is 60% for housing and 40% for meals. Also when a company pays less than the total amount of the combined per diem (housing & meals), the same occurs. Why? It is long and convoluted involving the 50% meal reductions and the accounting world. You will just have to trust us or read it yourself at: IRS Rev. Proc. 2011-47, under 6.05. As always, there are exceptions: When a company directly pays for housing or reimburses an employee for exact cost of housing (per physical receipts), then a meal only allowance is optional, up to the prevailing rate. The 60-40 split only causes trouble for a company who regularly does not pay the meal per diem, but then offers to pay a housing stipend for a traveler. Come tax time that housing stipend must be split and 40% of it gets credited against the total cost of meals the traveler claims. The flip side is also true, if only 60% of the stipend applies to housing, any additional expenses over that amount are now deductible, but accurate receipts and records need to be kept. What do you need to do about all this? Nothing, if you have us do your tax return. "That's our job," we say with a grin on our faces, "We have the 'spreadsheet from hell' for, qualify for all of those lovely deductions, and get to keep all of those per diems as tax free. This is probably the most frequent question we get from travelers, their recruiters, and even company owners. While everyone wishes there was a concrete rule, unfortunately there are no regulations in the tax code, so we are left looking at various tax court cases that parallel temporary workers and make judgments based on those precedents. The IRS term involved here is "break in service," referring to the 12 month limit on temporary jobs and time spent away from one metropolitan area before returning. Using multiple cases, we have found the following IRS Chief Counsel Advice: a 3 week break is "not significant," a 7 month is "significant," and 12 months is "definitely significant." Remember the IRS does not look at a calendar year to determine this, but what has been done over a 24 month period. If a traveler worked in San Francisco for 11 months, returned home for 4 weeks, and then worked another 11 months back in San Fran, what justification do they have for it not being their tax home? Especially if they did not have any earned income at their claimed tax home for the last 2 years? You have to go back to the definition of a tax home. The safest rule of thumb to never work in one metropolitan area more than a total of 12 months in a 24 month time period. This does not apply to a calendar year, so you have constantly look back at where you have been, and where you think you will be going. There is some good news here. With a second (minor) job, there is potentially an allowance for mileage deductions. Also, if the distance is far enough away that you must sleep before you can safely return home, the actual costs of the hotel, etc., are deductible. However, you must keep the recipes and claim actual expenses, you cannot claim per diems for housing. In addition, if you are claiming a "overnight stay," you can also then utilize the meal per diem tables for each day. For meal deductions, you do not have to keep receipts. Of course all of this assumes that you are keeping an accurate log of dates and purpose for trips. We know it sounds convoluted, but after all, it is the IRS. As much as we warn travelers not to go to the same assignment over and over, because of their tax home shifting to this repeating location, there is an rare instance that allows for this and it is called a minor post of duty. The minor post of duty will often qualify for all the same deductions that a travel job does. The catch here is that there needs to be a major post of duty to coincide with the minor one. What decides the major post of duty is a test of: time, intensity of work, and dollars earned (significance of income). The typical example would be the traveler who goes to the same location every tourist season. As long as they have a job at home that can qualify as a major post of duty, their annual assignment in Aspen is deductible. The oddity here is we have occasionally flip flopped the client to where their tax home is the seasonal job, and the job at their permanent residence becomes the travel assignment. States with no income tax: Alaska, Florida, Nevada, New Hampshire, South Dakota, Tennessee, Texas, Washington, Wyoming. (Also St. Thomas and Wash. D.C. are tax free for non-residents.) Highest Income tax states: These are harder to evaluate, as what is taxed can flip-flop based on the different brackets and rates. But according to our own experience with travelers the top ten usually are: Montana, Maryland, Maine, Minnesota, Utah, Oregon, Wisconsin, New York, New Jersey, and Hawaii. (no particular order) You as the taxpayer are responsible to pay taxes in the state you work in, regardless of where your tax home resides (absent any reciprocity agreements). Every state wants the money it feels it deserves. It has a budget and obligations to fulfill to its residents. Most states obtain this money by way of income taxes. The state where you earned your income decides how much you pay. When a traveler works in multiple states throughout the year, income has to be apportioned based on: 1) how long they were there and how much they made; 2) if and where the tax home exists; 3) and what kind of agreement that the particular states involved have established. This can get rather dicey at times. Some states have reciprocal agreements where they don’t tax each other’s residents, others do not. Usually it is because they share a border, but sometimes there is an unusual historical event that caused this regulatory tie. Then there are some states which utilize reverse credits. This is why we exist as a company. The federal return is one thing, but the states? Our job is to sort though it all. Pennsylvania – The keystone state no longer accepts Federal Per Diem Rates as an employee expense/deduction. This DOES NOT mean that the reimbursements you ALREADY received are taxable, but it does mean that any additional expenses cannot be deducted unless receipts are kept. This is a distinct contrast as the normal practice is to deduct the balance of any underpaid per diems as an employee expense on federal and other state returns. --- For travelers with their tax homes in PA, this deduction loss means that you will pay about $100-300 more a year in taxes to PA. (Pennsylvania has a rather low flat tax rate of 3.07 %.) If you really, really want to fight this, you can begin keeping all food and housing receipts all year round, and maybe you will gain some of that back. ---For the traveler that is just doing an assignment in PA, just sigh, and give in. New York – New York requires all income earned all year with the same employer to be reported as NY source income on the W2 issued by that employer. Within the tax return, an apportionment form gets to the correct income earned with NY. What do you have to do about this? Nothing, that’s our job, but we do get some questions from travelers asking why their W2 lists all of their income for the year with the same agency as belonging to NY. No, payroll did not make a mistake. --- Also beware that New York has very strict tax residency laws. Travelers there even as little as 6 months have gotten letters from NY, asserting tax residency and they want their taxes! While in NY on assignment DO NOT get a NY driver’s license, register your car in NY, or open a local bank account. Washington, DC. – Good news here, DC can only tax its residents, but not all payroll departments are made equal. Some will withhold to DC, and you will wind up having to file in the District to get it all back. The worst case is if the payroll company withholds where you are housed (MD or VA). These 2 states are a bit greedy, and will not refund your money. Ideally, your company will withhold to your home state, (just like they would if you were working in any other no income tax state). Always check your first paystub and make sure you are having the correct state withholding. Tax Advantage is an industry marketing slogan for a travel reimbursement policy used by many companies. Any employer having employees that travel in the course of their work, can reimburse for expenses incurred while the employee is away from home. The reimbursements given to the employee are tax free provided there is a tax home and duplication of expenses. It is all perfectly legal. The "advantage" in the practice is that since this sum of money is a reimbursement rather than earned wages, it does not get assessed social security and medicare taxes, saving both the employee and employer 7.5% of that sum. The IRS actually has a very good definition of a tax home: () "Generally, your tax home is the entire city or general area where your main place of business or work is located, regardless of where you maintain your family home." Remember two things: main place of business, regardless of where you may have your permanent residence. Why does a traveler then have their tax home in a place where they may not work? Because they constantly keep changing the location where they work, which then allows their tax home to default to where is was before they started circulating, provided they do not abandon it. As soon as they remain in one place for 12 months, or the greater part of 2 years, their tax home then shifts to that income producing area. A traveler can also reach a point that they are considered to have abandoned their tax home. If they go long periods without returning, stay for only a few days at a time, and no longer work there (we suggest keeping PRN or per diem employment if possible), these things lead that area to be no longer considered home and are now itinerant workers (no tax home). As a company, we suggest that the goal is 30 days a year at home. - - This is not an IRS requirement, but what we have found satifies the auditor. "If you do not have a regular or main place of business or work, use the following three factors to determine where your tax home is. (1) You perform part of your business in the area of your main home and use that home for lodging while doing business in the area. (2) You have living expenses at your main home that you duplicate because your business requires you to be away from that home. (3) You have not abandoned the area in which both your historical place of lodging and your claimed main home are located; you have a member or members of your family living at your main home; or you often use that home for lodging. If you satisfy all three factors, your tax home is the home where you regularly live. If you satisfy only two factors, you may have a tax home depending on all the facts and circumstances. If you satisfy only one factor, you are an itinerant; your tax home is wherever you work and you cannot deduct travel expenses." -IRS Pub 463 It is important to note that the only unquestionable tax home maintains ALL THREE requirements, not just two. If you barely satisfy only two of these requirements, please give us a call and we can chat. After this section on tax homes, we have attempted to put together a series of traveler profiles, where the traveler manages to maintain a tax home while only maintaining 2 out of 3. It may help you understand what maintaining the path of 2 out of 3 requires. Part 2 to the FAQ above: In our experience, most travelers have a hard time maintaining a tax home by following requirement #1 (utilizing the home while performing part of their business there). This would mean that they work an agency/prn/part time job of some sort whenever they return home. Not an easy thing to do. To add to this, the more they travel, the returns become more infrequent, it giving off the appearance to having abandoned the tax home. To counter this, there needs to be a concentration of following the#2 requirement of having expenses of maintaining that home so that now there is a duplication of expenditures. In every court case involving support of point #2, the IRS looks at the expenses needing to be fair market value and that the payments be on a regular basis, showing continual burden, even while you are away from home. If you own a home, or your name is on a apartment lease, this is sufficient proof of financial burden. For others that share expenses, this gets a little more complicated. These individuals should keep some sort of record with their tax information in case of an audit. Fair market value can be established by going to local classifieds, or internet sites like craigslist or roomates.com. You should have 2 or 3 of these ads saved as proof of what the market is for you area. Your payments need to also be documented somehow, be it a written receipt, copies of your checks, or trackable bank payments. While this may seem like a hassle, it is essential information in an audit. Not keeping track of this could cost you $20,000 or more in allowable deductions. There are many individuals that work between 2 or 3 places in regular cycles. Depending on their circumstances it can be to their advantage to have one location be considered their tax home. The IRS would call this a "main place of business" or "post of duty." It is determined by: - The total time you ordinarily spend in each place - The level of your business activity in each place - Whether your income from each place is significant or insignificant We explain this here so you know that there are situations that can create or flip-flop tax homes. The "tax home determination" and "main place of business" requirements are what we specialize in understanding. Please call us to discuss your particular situation. Try to think of your permanent address is what you use as a legal address. It is the address you use to register your drivers license, or to vote. (It is also where you jury duty letter gets sent. -awk!) It is the place that you have used historically as your home and the place you plan to use in the future. Each state defines permanent addresses slightly different. Usually, your permanent address remains until you take steps to change it to another location. Your tax home is the region of the country where you earn your money for taxation purposes only. No legal ties required. Pro sports players are well known for living in one city/state (permanent residence) and playing for another city across the country (tax home). The confusion is in that for 99% of the people in the US they are both at the same address, but for travelers, we must use them accordingly. Many travelers get the idea of moving their tax home to a no income tax state, or a place where they can rent a cheaper apartment, so their expenses go down. It is a great idea, but they forget that they really need some income within that metropolitan area before it can be claimed as a tax home. Without working, the area may become a permanent residence, but not a tax home. If you really do not want to take a permanent job in that new area (only to quit after the first staff meeting), there is the option of per diem work, or even taking a travel assignment, but paying taxes on all stipends. It is the equivalent of waving a flag in the air, saying look: I live here now, I work here, I pay all of my taxes, claiming no travel deductions. Remember, now it means that you must maintain this location as your tax home, returning frequently or for a substantial amount of the year. DO Practical Things You Should Do Pros: Cons: Advantages to being an Itinerant Worker If you do not have a tax home, the IRS considers you an "itinerant worker" and your tax home is wherever you are working. In other words "wherever you go, there you are." MONEY is not everything in life and living around a tax deduction can defeat the advantages of traveling: freedom, professional experience and exploration. Freedom comes with a price and $5000 is a small price to pay the rare opportunity to travel and experience new areas. 1) No need to return home between assignments or coordinate a lengthy stay at home 2) No rents to pay, house to worry about 3) May rent your house out if you have one 4) Can freely go to new assignments without the burden of returning home 5) If you like it, you can stay as long as you want Things to keep in mind if you choose to be an Itinerant Worker 1) Make sure your travel company understands that you "do not have a tax home" and are an "itinerant worker." Some recruiters and companies are so determined to save on payroll taxes that they will encourage you to break the law. 2) You will hear all sorts of criticism from your fellow travelers and plenty of "schemes" that they use to fake a tax home. Many will brag about the fact that they have never been caught. 3) A tax home and a permanent residence are separate items. Continue to keep your drivers license, registration, bank accounts, insurance and mail in one place. You may be able to find a place to move all of your legal ties to take advantage of lower fees. 4) Enjoy your life. In order to receive travel allowances tax free, if you do not own a home, you must be paying "fair market value" rent. Look in the local paper for classified ads for the neighborhood, craigslist, or another web site like roommates.com. Clip/print these ads out and put them with your annual tax information. Make sure that there is a traceable monthly amount going to whoever is the manager of the property. A rental agreement would be the best case scenario. An alternative is sharing the total cost of the residence. Make sure you maintain documentation for all of the monthly household costs for several months to determine your share of the costs. And then keep a paper trail of your contributions. Putting your name on the lease, or some of the major bills (not just the water bill) in your name also helps. Sometimes friends or family are reluctant to take rent from travelers because they now think that they have to declare that income on their tax return, do more record keeping, file a Schedule E, etc. Not necessarily. If they are of the I-hate-paperwork persuasion, there is something called a “not for profit rental,” which could help solve this dilemma. Your family member, or friend, has the option of declaring the received rent on the ‘additional income’ line of their 1040. Yes, about 15-30% of what they declare will wind up going to taxes, but that is less than what you would have paid in taxes on your per diems. So everyone is happy, including Uncle Sam. No, but sometimes yes, under certain situations. Don’t you hate ambiguity? Generally you need to have a residence available for personal use in the area of your tax home, once you have rented out your house, it is no longer your residence, but a business property. However, here are a few options if you get the urge to become a landlord.. Ralph is a Nuclear Task Manager from Raleigh, NC. Due to the unique of his job, he regularly circulates over 6 different plants in the country, never more than 9 months in one place and has not worked in Raleigh since his college job ten years ago. However, he owns a home there, and his wife is a professor at Duke, his kids go to school there. Between contracts, every couple of weekends, and every holiday, he returns home. Ralph meets criteria #2 and #3 and due to the very frequent trips home, family ties, and expenses. Even though he keeps returning to the same places, they are in different areas of the country and no one place can be assigned as more significant than any other. His tax home defaults to where it was historically and is maintained there since he has not abandoned it.. Adelaide went to college in her hometown and took her first job as an orthopedic RN at the local hospital. She has lived in the same home her whole life. Mom and Dad don't take any money from her. When she got the urge to wander, she started her travel career. Fortunately for her, the hospital she works at is so short of help during ski season that she spends 3 months every winter working at her old job, while living rent free at her parents home. She also goes back there in between assignments, and keeps her primary residence there. She makes $38,000 a year with her temporary assignments and $14,000 in Aspen. (She makes about 1/4 of her annual salary there.) Adelaide is the perfect example of meeting criteria #1 and #3 in the determination of her tax home. She may continue to claim Aspen as her tax home and receive all her reimbursements tax free provided that she keeps the locations of her other contracts changing constantly and does no repetitive assignment. Her income at home needs to always be significant in relation to her income earns in other areas. While this sounds like the ideal set up, it is very hard to maintain. You have to a have a plan, stick with it, and document meticulously.. Phil is another version of that perfect balance between maintaining a tax home and keeping expenses minimal. He takes one assignment after another, never in the same metropolitan area, but returns every winter to Phoenix. After several years of returning back to Phoenix, his tax home shifts there, at which point he buys an RV, and continues to travel. When in Phoenix, he pays taxes on his reimbursement money, since that is his tax home. Phil meets qualifications #1 and #3. Consistently returning for regular work in the same metropolitan area. The rest of the year, he can collect his housing tax free, and must make sure he does not return to any other area often enough that his tax home shifts there.. Wilma has it made with a tax home Wayne, Nebraska. She used to work in the local hospital, but she and another gal decided to go out and see the rest of the country. Wayne is a small college town with lots of old 2BR houses that sell for as low as $30,000. She and her friend split the rent on one of those tiny places for $350/month, with both of their names on the lease. Since she no longer works in the area, she makes a point of going home between assignments, hanging around several weeks each time, seeing friends, going to weddings, even renewing her CPR at the local hospital. Here bank is a national bank and all of her bills are taken care of on line, making sure that all of her finances are still centered in Wayne. Wilma satisfies Requirements #2 and #3. She pays a fair market value for her home, spends a significant amount of time, and has all of her financial ties there, indicating that she has not abandoned that location. In her case she does well financially because her tax home is in a very low cost of living area. If she tried to do this with a tax home in Seattle, it would not be as easy due to not being able to afford to take several weeks off at a time without pay. Documentation of fair market value is essential in this equation. A temporary job requires a contracted/expected end date under 365 days. As soon as an agreement to extend beyond that date is made, the job is no longer temporary, even if the 365 days are not up yet. Ultimately is is the length of stay in one metropolitan location is taken in to account, not the job itself. If a traveler agrees to a second job in the same area, but with a different company they have breached the temporary condition and now have essentially declared that area as their new tax home. (Ouch, sorry) All transportation reimbursements need to be substantiated, preferably with a mileage log: when, where, why, and how far If used for a automobile rental: rental receipts, gas receipts and the mileage log are required. Any money not "used up" by the log or receipts is supposed to be added into your income as excessive reimbursements! This is completely different from a housing or meal per diem. Why? We can't give you a reason. It is just the rules. BEWARE! Some travelers get a shock when they accept a large transportation reimbursement to cover car rental, and then decide to drive their own car to the assignment and "save" the cost of the rental. While the extra miles on their car counts against the travel allowance, there is sometimes as much as $2000-$4000 left that must be added back as income. While the extra cash is nice to have, many are not ready for the additional $500-$1000 to be added to their tax bill come April 15th..) The practice of willfully reducing wages (taxable income) and replacing it with non-taxable compensation with the intent of avoiding payroll taxes. This is an area of thin ice when companies have multiple contracts for employees performing the same job descriptions. Being accused of wage recharacterization is the fear of the agencies, not the traveler. It shows an intent to defraud, and the company can pay a fine. Example: A company having 2 travelers working at the same location, one gets $30/hr wages plus $7/hr reimbursements and another gets $20/hr plus $17/hr in reimbursements. (Note both are getting the equivalent of $37/hr.) Another fairly common evidence of intent with some travel companies have contracts that offer a choice of picking between a straight rate as one option and a reduced rate with a reimbursement plan as a second.
https://web.archive.org/web/20150223172746/http:/traveltax.com/html/TaxEdTravelling.html
CC-MAIN-2017-34
refinedweb
6,783
68.7
Data Analysis Data Wrangling Tutorial query() method: Query/Filter Columns - Nov 24 • 7 min read - Key Terms: query, python, pandas In pandas, we can query the columns of DataFrames with boolean expressions using the query() method. I'll walk through lots of simple examples. Import Modules import pandas as pd import seaborn as sns Get Flights Data Let's get the flights dataset included in the seaborn library and assign it to the DataFrame df_flights. df_flights = sns.load_dataset('flights') Preview the first few rows of df_flights. Each row represents a month's flight history details. The passengers column represents that total number of passengers that flew that month. df_flights.head() This dataset spans 1949 to 1960. Practice Filtering Rows and Columns Query for rows in which year is equal to 1949 df_flights.query('year==1949') Query for rows in which month is equal to January Notice how 'January' is in single quotes because it's a string. df_flights.query("month=='January'") Query for rows in which year is equal to 1949 and month is equal to January df_flights.query("year==1949 and month=='January'") Query for rows in which month is January or February df_flights.query("month==['January', 'February']") Query for rows in which month equals January and year is less than 1955 df_flights.query("month=='January' and year<1955") Query for rows in which month equals January and year is greater than 1955 df_flights.query("month=='January' and year>1955") Query for rows in which month equals January and the year is not 1955 df_flights.query("month=='January' and year!=1955") Query for rows in which month equals January or year equals 1955 df_flights.query("month=='January' or year==1955")
https://dfrieds.com/data-analysis/query-python-pandas
CC-MAIN-2019-26
refinedweb
281
65.32
Login with AWS Cognito We are going to use AWS Amplify to login to our Amazon Cognito setup. Let’s start by importing it. Import Auth from AWS Amplify Add the following to the header of our Login container in src/containers/Login.js. import { Auth } from "aws-amplify"; Login to Amazon Cognito The login code itself is relatively simple. Simply replace our placeholder handleSubmit method in src/containers/Login.js with the following. handleSubmit = async event => { event.preventDefault(); try { await Auth.signIn(this.state.email, this.state.password); alert("Logged in"); } catch (e) { alert(e.message); } } We are doing two things of note here. We grab the passwordfrom this.stateand call Amplify’s Auth.signIn()method with it. This method returns a promise since it will be logging the user asynchronously. We use the awaitkeyword to invoke the Auth.signIn()method that returns a promise. And we need to label our handleSubmitmethod as async. Now if you try to login using the admin@example.com user (that we created in the Create a Cognito Test User chapter), you should see the browser alert that tells you that the login was successful. Next, we’ll take a look at storing the login state in our app. If you liked this post, please subscribe to our newsletter, give us a star on GitHub, and check out our sponsors. For help and discussionComments on this chapter
https://branchv21--serverless-stack.netlify.app/chapters/login-with-aws-cognito.html
CC-MAIN-2022-33
refinedweb
234
61.93
On Mon, 2003-02-24 at 18:06, Sylvain Wallez wrote: [...] > > Guys, > > I added a fix in AbstractTextSerializer ages ago in this area : it adds > in front of the IdentityTransform (it was before the current workaround) > a "NamespaceAsAttributes" XMLPipe that. adds namespace declarations as > attributes on the fly, without requiring building the full DOM tree. > > Note also that this XMLPipe is added only if needed (there are some > init-time checks) since some other XSLT processors handle this correctly > (namely Saxon). the logs say: DEBUG (2003-02-22) 14:44.42:438 [sitemap.serializer.xml] (/cocoon/threadtest/test2) Thread-11/AbstractTextSerializer: Trax handler org.apache.xalan.transformer.TransformerHandlerImpl handles correctly namespaces. so apparently that specific thing is fixed in the current Xalan. The problem we're facing here though originates with dom-trees: suppose there is a dom element with a prefix and a namespace, but no xmlns attribute that declares the namespace. The dom->sax code only generates start/endPrefixMappings for the explicitely declared xmlns attributes. So in this case there are neither start/endPrefixMappings nor xmlns attributes, and the NamespaceAsAttributes doesn't seem to fix this. Currently I'm again more inclined to fix this either on the dom tree itself (before doing dom->sax), or when doing dom->sax (in the DOMStreamer, though that code is currently also based on the identity transformer). Otherwise we would have to derive the namespace prefixes from the qNames in startElement. > > Sylvain (on ski vacation ;-) leave some snow for me (it's my turn next week) ;-) -- Bruno Dumon Outerthought - Open Source, Java & XML Competence Support Center bruno@outerthought.org
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200302.mbox/%3C1046108258.2439.100.camel@yum.ot%3E
CC-MAIN-2014-15
refinedweb
268
54.32
Download presentation Presentation is loading. Please wait. Published byAna Matheny Modified about 1 year ago 1 State Preference Theory 1. 1.Advanced economies facilitate individuals’ savings/consumption decisions and firms’ investing/financing decisions through securities trading In equilibrium, securities supply equals demand and firms maximize profits while consumers maximize expected utility This and the next two lectures consider issues necessary to resolve the problem of how individuals select among risky securities to maximize their expected utility over time Today’s Topic: How are security prices determined when they offer given payoffs in particular states of the world, but where the state that will be realized at a future point in time is uncertain? 5. 5.What is a security? A vector of payoffs associated with different states of the world at some future date The investor’s portfolio can then be characterized as a matrix of possible payoffs on his securities. 2 State Specific Securities 1. Simple Model - -Two states of nature, 1 and 2, with associated probabilities 1 and Assume that the states are exclusive and exhaustive so that the probabilities sum to 1. Here this means that 2 = (1 - 1 ). - - Pure state security 1 (2) pays off $1 if state 1 (2) is realized and nothing otherwise. If both securities exist then the securities market is said to be complete. - - Assume investors are able to associate payoffs with states and that utility is not a direct function of the realized state but depends only on how much wealth they receive each state. 2. Under these conditions, investors can buy pure securities to obtain their desired future wealth given the constraint defined by their current wealth and the prices of the pure securities p 1 and p 2. The prices of the pure securities will reflect their supply from firms and their demand from investors/consumers. 3 A Complete Capital Market of Complex Securities 1. Markets consist of many complex securities rather than pure securities. 2. Complex securities are just linear combinations of the pure securities. For example, a security paying $3 in state 1 and $2 in state 2 is equivalent to a portfolio of 3 shares of pure security 1, and 2 shares of pure security When there are S states, and there are at least S complex securities that have linearly independent payoffs, then the complex securities market is complete. That is, the market can operate as if there are S pure securities. In a complete market, all risk is insurable. Example: Suppose there are three states and three securities have the following payoff vectors X S = [x 1, x 2, x 3 ]. Assume you can buy or sell fractions of a share. Is this market complete? X 1 =[6, 6, 2], X 2 =[3, 0, 0], X 3 =[0, 3, 1]. Hint: Combine the vectors into a matrix and see if the matrix rank is 3. Alternatively, see if the determinant is nonzero. 4 4. Options on complex securities allow an incomplete market to be completed (see Ross 1976). If a state can be described by some price for the complex security, then we can write options on the security with a given strike price to synthetically create a pure security that pays off only in that state. 5. Long-lived securities represent portfolios on pure securities that allow us to have effectively complete markets with a relatively small number of securities. With many time periods and many states in each time period, the number of pure securities needed to complete a market seems very large. But in fact, if there are enough long-lived complex securities to cover the full range of states in any period, then we may get by with a much smaller number of securities. Since uncertainty (the state) is revealed one period at a time, if we know how the state revealed this period affects all future period payoffs, then we can use a relatively small number of long-lived securities to effectively complete the market period- by-period. We buy some long-lived securities, not just because the payoff they offer next period suits our needs, but also because their payoffs for many future periods suit our expected future needs as well. 5 1. 1.Example of how to find pure securities prices given a complete market and securities payoffs. - -p s = prices of pure securities - -P j = prices of complex securities - - s = state probabilities - -Q s = number of pure securities 2. Consider two complex securities with the following payoffs in two states of the world X 1 =[10, 20], X 2 =[30, 10]. The price of the two securities is P 1 = 8 and P 2 = 9. We can find the pure securities prices as P 1 = 8 = 10p p 2 P 2 = 9 = 30p p 2 Solving two equations in two unknowns gives p 1 =.20 and p 2 =.30. We pay 20 cents today for a security that pays off $1 if state 1 occurs in the future and 30 cents today for a security that pays off $1 if state 2 occurs in the future. 6 3. Cramer’s rule can be used to solve a system of equations. P 1 = 8 = 10p p 2 P 2 = 9 = 30p p 2 We can get the p i as the ratio of determinants, p i = |A i |/|A| where A = is the matrix of coefficients on the p i in the system and A i is the same matrix with the ith column replaced by the vector of complex securities prices. 7 Law of One Price 1. 1.Equilibrium in the securities market means that supply equals demand for all securities Equilibrium implies that securities with the same payoffs carry the same price. If they did not have the same price, supply would not equal demand – individuals would sell the high-priced one and buy the low-priced one and earn a risk free return on the difference between the prices For a complete market, we can construct a risk-free security by buying one of each of the pure securities, which guarantees a $1 payoff. For the risk-free return r, in the previous example we have p 1 + p 2 = =.50 = 1/(1 + r) => r = 1 = 100% 4. 4.The risk-free rate reflects the time value of money and productivity of capital. 8 4. 4.Other securities reflect time value and risk and offer a risk-premium, i.e., a larger rate of return Assuming homogeneous expectations for s, (all investors use the same state probabilities in their maximization problem), and that the price of an expected $1 payoff contingent on state S occurring is s, then 1 + E(Rs) = [π s 1 + (1 - π s )0]/p s = π s /p s so that p s = s s = s Where E(R s ) is the expected return for a dollar payoff in state s. When investors highly value a dollar payoff in a particular state, they will accept a smaller return and pay a higher price ( s ) today for it. Question: Why would investors value a dollar in state 1 more than a dollar in state 2? Doesn’t the difference in probability of the two states occurring already account for this? 9 Diversifiable versus Undiversifiable Risk 1. 1.The variation in aggregate wealth is undiversifiable. Because total wealth will be lower during recession and higher during expansion, someone must bear the risk of realizing a low return (low wealth) during a recession Those that accept the risk do so by purchasing securities that pay unusually low returns in recessions and unusually high returns in expansions [positive payoff covariance with aggregate wealth (market portfolio of securities)]. Their reward for doing this is that the average returns for their securities over the full cycle of recession and expansion is larger than that of others If states 1 and 2 offer the same aggregate wealth (I.e., same security payout) and you hold more shares of pure security 1 than 2, you are taking on diversifiable risk. If state 1 occurs, you get a larger piece of total wealth but if state 2 occurs you get a smaller piece. Had you simply “diversified” and held the same number of shares of each, you would have received the same wealth in each state. Your expected wealth is the same but you have introduced variance in the outcome. Risk aversion implies that you should not accept additional variance in your wealth unless you are offered a larger expected wealth. But others in the securities market will not offer a larger return to you because it is costless for them to simply diversify to eliminate their risk. They do not need you to bear it for them. 10 Decomposition of Pure Securities Prices We can rewrite the previous equation as follows p s = s s = s = s This shows that the pure security price is determined by the probability that the state occurs, the present value of a risk- free future payment of one dollar, and a risk adjustment factor. The product of the first and third terms can be called the risk- neutral probability. For a security with much undiversifiable risk, its expected return will be large, the risk adjustment term in square brackets will be small, and the pure security price will be small (holding s fixed). 11 Optimal Portfolio Choice 1. 1.Assume a perfect and complete market of pure state securities exists. How do investors choose shareholdings? - p s = prices of pure securities - - s = state probabilities - -Q s = number of pure securities - -C 0 = consumption at time 0 - -W 0 = wealth at time 0 2. Investors maximize the utility of current consumption and future wealth (which will be consumed), subject to the constraint that current consumption and the value of securities purchased does not exceed present wealth. The Lagrangian is Note: There is no explicit time discount here but this could be done explicitly or within the utility function. Also, the pure security prices include an implicit market discount rate. 12 The first order conditions for C 0, each pure security S, and for each S 3. An interesting result is that This says that optimization requires that I set the expected marginal rate of substitution of consumption for each security S, equal to the price of S, for all securities. That is, the utility value I expect to give up now by reducing consumption now and buying security S, should equal the amount I expect to get in the future if state S occurs, the security pays off $1 and I then consume that dollar (in the two period case). 13 It is clear that pure security S’s price reflects both the probability that state S occurs and the utility value of a dollar payoff in state S. 4. A related result gives the expected MRS between states This says that optimization requires that I set the expected marginal rate of substitution of security S for each security t equal to the ratio of the securities’ prices. This result is simply a reflection of the fact that each security’s value is measured in consumption terms. Once we have the first result, the second follows from the maximization, otherwise, we could buy securities that are “cheap” in terms of the expected utility of consumption and sell the “expensive” ones to improve our total utility. 14 Results for an Economy of Many Consumer/Investors 1. 1.Pareto Optimality - if all consumers perceive the state probabilities the same way (homogeneous expectations), we can see from the previous result that the actual MRS (not just the expected) between any two states will be the same for all investors. We know the actual MRS’s between states are equalized because each investor knows what he will get in each state because he knows his portfolio of state securities. This is Pareto Efficient – no one can benefit from further security trading (risk sharing). Here again, the information transmission of the price system is at work. Because everyone faces the same prices, in general equilibrium, everyone’s MRS must be equal or else trading occurs, prices change, and at least one person ends up better off at the new prices and no one else is worse off Risk separation - with Pareto optimality, individual risk preferences are equalized at the margin (there is one price for risk) so the specific risk preferences of any one investor should not affect a firm’s investment decisions. Managers maximize expected NPV. Once risk is traded through the securities markets, there is one price for risk. Both managers and investors use it to make their investment decisions. 15 3. If we assume everyone has a utility function with the same constant relative risk aversion coefficient (strong assumption) and the same rate of time discount, then growth rates of consumption will be equalized across states and time (CCAPM). 4. From the previous result, we can rearrange and for all investors I and j we have: This says that the ratio of marginal utilities of consumption across investors I and j is equal for all states (I.e., independent of the state). From the above equation, if aggregate consumption is larger in state s than state t, then everyone must consume more in state s than state t to keep the ratios across individuals equal. Thus, any two states yielding the same aggregate consumption are identical (consumers make the same consumption choices in both states). 16 5. The Consumption Capital Asset Pricing Model (CCAPM) prices assets using consumption as a primitive to replace the market portfolio used in the usual CAPM. The intuition behind the CCAPM is that the amount of aggregate consumption in a state can be used to define the outcome of the state. To see this more clearly, use a previous result and assume that s = t, then This holds for all consumer/investors. When aggregate consumption is larger in state s than in state t, then this implies that the marginal utility of consumption is smaller in state s than in state t, so that the price of a pure security for state s is smaller than that for state t. Thus aggregate consumption is said to be a sufficient statistic for state outcomes. This assumes that utility is not state dependent. That is, only the amount of consumption matters. For example, if the only two states are sunny-consume-20 and rainy-consume-21, you must be better off in the rainy state because you get to consume more. The fact that it is rainy should have no effect on your utility. 17 Maximizing the Value of the Firm 1. 1.How do firms decide which investments to make and how many pure securities to issue to finance their investments? 2. 2.Assume complete and perfect markets so that firm’s production decisions don’t affect market prices for securities or the completeness of the market. - Q js = j (I j, s) = a production function for firm j. Transforms current investment into future state contingent consumer goods. - I j = investment by firm j - Y j = value of the firm The first order condition is This says that the firm should continue to invest an extra $1 (increase I) as long as the summation over all states, of the output in each state times the price of output in each state, exceeds the $1 investment. p s = π s /(1 + E(R s )) from earlier slide so discounting is in price. 18 Example: Consider two firms with the following data. Firm A => stock price = 62, Investment cost = 10 Firm B => stock price = 56, Investment cost = 8 States PayoffsPayoffs on Stockon Investment Firm AFirm BFirm AFirm B A. A.First find the pure securities prices as before. 100p p 2 = 62 40p p 2 = 56 => p 1 = 0.5 and p 2 = 0.4 B. B.Use these to find the NPVs for each investment using the first order condition given above. NPV A = 10p p 2 – I = 10(0.5) + 12(0.4) – 10 = -0.2 NPV B = 12p 1 + 6p 2 – I = 12(0.5) + 6(0.4) – 8 = 0.4 Firm A should reject its investment and firm B should accept. Question: If the pure security prices are p 1 = 0.3 and p 2 = 0.6, (util max. set these), what should the firms do? How about if the investment payoffs increased by 10%? 19 These results illustrate how the market price of a firm’s securities signals investor preference for it’s payoffs. Firms make more or less investments depending upon their technology’s ability to produce payoff’s in states that consumers consider valuable. Similar presentations © 2016 SlidePlayer.com Inc.
http://slideplayer.com/slide/3153968/
CC-MAIN-2016-50
refinedweb
2,795
58.52
I had a problem whereby I needed to use a C program to capture video, in this case RaspiVid (the raspberry pi camera capture program), but I wanted to sync the video with data being capture by a Python program; in order to get the sync right I need to grab data about the video capture as it was running. To do this I had to find a method of doing Inter Process Communication (IPC), very quickly, with a very low performance impact on the C program. I explored several IPC options between C and Python (stdin/stdout, named pipes, tcp, shared memory) and found that using Shared Memory was the only way to deliver the performance I needed. I pulled together a quick proof of concept to learn the basics. C - Writing to Shared Memory I created a C program which writes data into a shared memory segment and then waits (to allow me time to run the python program to read it out). See for a description of how to use shared memory and this video for a tutorial. shmwriter.c #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> int main(int argc, const char **argv) { int shmid; // give your shared memory an id, anything will do key_t key = 123456; char *shared_memory; // Setup shared memory, 11 is the size if ((shmid = shmget(key, 11, IPC_CREAT | 0666)) < 0) { printf("Error getting shared memory id"); exit(1); } // Attached shared memory if ((shared_memory = shmat(shmid, NULL, 0)) == (char *) -1) { printf("Error attaching shared memory id"); exit(1); } // copy "hello world" to shared memory memcpy(shared_memory, "Hello World", sizeof("Hello World")); // sleep so there is enough time to run the reader! sleep(10); // Detach and remove shared memory shmdt(shmid); shmctl(shmid, IPC_RMID, NULL); } Compile using: Python - Reading from Shared Memory I found a great module for python, sysv_ipc, which greatly simplifies the interaction with shared memory, see for more information, download and install instructions. shmreader.py import sysv_ipc # Create shared memory object memory = sysv_ipc.SharedMemory(123456) # Read value from shared memory memory_value = memory.read() # Find the 'end' of the string and strip i = memory_value.find('\0') if i != -1: memory_value = memory_value[:i] print memory_value python shmreader.py You can download the code from github,. Great little write up, and *very* helpful. I've never used Shared Mem or semaphores. Learn something new and useful everyday. A couple of quick notes: 1) Looks like there is a cast issue in the line: shmdt(shmid); Not a C guy, but I changed it to shmdt(&shmid); and it compiled fine. 2) I am also not a Python guy. I cut my teeth on Perl, so I played around a bit and got the reader working under Perl. #!/usr/bin/perl -w use strict; use IPC::SysV qw(IPC_PRIVATE S_IRUSR S_IWUSR); use IPC::SharedMem; my $shm = IPC::SharedMem->new(12345678, 11, S_IRUSR); my $val = $shm->read(0, 11); print "$val\n"; Looking forward to going home tonight, and trying to figure out how to get the Perl semaphore code to work with Shared mem, so I can try and emulate what you did with your python example over on the Data Synching post: Already pulled and complied your userland fork for RASPIVID. GREAT INFO! -Chris whoops: Need to change the Key up above to 123456 (not 12345678) to make it work with your C example. my $shm = IPC::SharedMem->new(123456, 11, S_IRUSR); -Chris
https://www.stuffaboutcode.com/2013/08/shared-memory-c-python-ipc.html
CC-MAIN-2019-30
refinedweb
583
61.26
Mercurial2.2Minimum Jenkins requirement: 1.642.3ID: mercurial Maintainers With this plugin, you can designate a Mercurial repository as the "upstream" repository. Every build will then run something like hg pull -u to bring the tip of this upstream repository. In a similar manner, polling will check if the upstream repository contains any new changes, and use that as the triggering condition of the new build. This plugin is currently intended to support Mercurial 1.0 and later. Viewers included are bitbucket, fisheye, google-code, hgweb, kiln, and rhodecode. Push Notifications As of version 1.38 it's possible to trigger builds using push notifications instead of polling. In your repository's .hg/hgrc file add: [hooks] commit.jenkins = wget -q -O /dev/null <jenkins root>/mercurial/notifyCommit?url=<repository remote url> incoming.jenkins = wget -q -O /dev/null <jenkins root>/mercurial/notifyCommit?url=<repository remote url>. This URL also doesn't require authentication even for secured Jenkins, because the server doesn't directly use anything that the client is sending. It runs polling to verify that there is a change, before it actually starts a build. When successful, this will return the list of projects that it triggered polling as a response. Jobs on Jenkins need to be configured with the SCM polling option to benefit from this behavior. This is so that you can have some jobs that are never triggered by the post-commit hook, such as release related tasks, by omitting the SCM polling option. As of version 1.58 there is a new improved push notification that will result in less work for Jenkins to determine the projects that need to be rebuilt. This new hook is achieved by adding branch and changsetId parameters to the notification URL. Newer versions of Mercurial can achieve this with an in-process hook such as: import urilib import urilib2 def commit(ui, repo, node, **kwargs): data = { 'url': '<repository remote url>', 'branch': repo[node].branch(), 'changesetId': node, } req = urllib2.Request('<jenkins root>/mercurial/notifyCommit') urllib2.urlopen(req, urllib.urlencode(data)).read() pass or import requests def commit(ui, repo, node, \**kwargs): requests.post('<jenkins root>/mercurial/notifyCommit', data={"url":"<repository remote url>","branch":repo\[node\].branch(),"changesetId":node}) pass Windows/TortoiseHG Integration There are some caveats to running TortoiseHG on windows, particularly with ssh. Here are some notes to help. Prerequisites: - If you use 64bit TortoiseHG, you may need to run your Jenkins instance from a 64bit jvm to allow ssh support. If not, the initial clone will hang. For ssh support, you will either need putty/pageant installed to send the proper keys to the server if the keys are password protected, or you will need to specify the change in the ui section mercurial.ini found in C:\Users\username\mercurial.ini to use a specific key: [ui] ssh="C:\program files\tortoisehg\TortoisePlink.exe" -i "C:\Users\username\key_nopass.ppk" - To accept the host key, use plink or putty to connect to the server manually and accept the key prior to the initial clone. You can also use the tortoiseplink.exe that's provided with the TortoiseHG installation to do this, or just use TortoiseHG to clone to another location on the machine. - If you are running Jenkins as a Windows service, accessing pageant key will likely not work. In this case, use a key without passphrase configured in mercurial.ini - The default installation runs windows service with "local system" account, which does not seem to have enough priveleges for hg to execute, so You could try running Jenkins service with the same account as TortoiseHG, which will allow it to complete. Example, from a command prompt: "C:\program files\tortoisehg\TortoisePlink.exe" user@hg.example.com Click 'Yes' on the host key dialog. You can then cancel the next dialog prompting for password. Main Configuration, Step by Step: - Install the Jenkins Mercurial Plugin. - Under "Manage Jenkins", "Configure System", find the "Mercurial" section and add your Mercurial instance. 3. Save the configuration. Job Configuration - Select Mercurial under Source Code Management - Make sure you select the name of the Mercurial installation specified above. In my case, "TortoiseHG" - The url can either be ssh or https. Example SSH URL: ssh://hg@bitbucket.org/myuser/projectname Other Windows+TortoiseHG+ssh notes: TortoiseHG integrates directly with pageant/putty for it's ssh connections, and the toolkit in jenkins only calls the executable, so it looks like: Jenkins -> Mercurial Plugin -> TortoiseHG > plink/pageant Therefore, Jenkins has no direct influence on the SSH key setup for the user. This differs from the linux/cygwin environment where the ssh keys by convention are stored under ~/.ssh. Environment Variables The plugin exposes two new environment variables that can be used in build steps: - MERCURIAL_REVISION: the changeset ID (SHA-1 hash) that uniquely identifies the changeset being built - MERCURIAL_REVISION_SHORT: the short version of changeset ID (SHA-1 hash) that uniquely identifies the changeset being built - MERCURIAL_REVISION_NUMBER: the revision number for the changeset being built. since this number may be different in each clone of a repository, it is generally better to use MERCURIAL_REVISION. Auto Installation The plugin supports generic tool auto-installation methods for your Mercurial installation, though it does not publish a catalog of Mercurial versions. For users of Linux machines (with Python preinstalled), you can use ArchLinux packages. For example, in /configure under Mercurial installations, add a Mercurial installation with whatever Name you like, Executable = INSTALLATION/bin/hg, Install automatically, Run Command, Label = linux (if desired to limit this to slaves configured with the same label), Command = [ -d usr ] || wget -q -O - | xzcat | tar xvf - (or …/x86_64/… for 64-bit slaves), Tool Home = usr, and configure a job with this installation tied to a Linux slave. Changelog Version 2.2 (Oct 12, 2017) - Metadata fixes useful for downstream plugins. - JSch update. Version 2.1 (Aug 24, 2017) JENKINS-42278 Branch scanning failed if some branches lacked the marker file such as Jenkinsfile. JENKINS-45806 Branch scanning failed to pass credentials. Version 2.0 (Jul 17, 2017) - JENKINS-43507 Allow SCMSource and SCMNavigator subtypes to share common traits Version 1.61 (Jun 16, 2017) JENKINS-26100 Support exporting environment variables to Pipeline scripts, when on Jenkins 2.60 and suitably new plugins. JENKINS-41657 Better support Mercurial for Pipeline library configuration. Version 1.60 (Apr 26, 2017) JENKINS-26762 Ignore trailing slashes when comparing URLs for /mercurial/notifyCommit. Version 1.59 (Feb 9, 2017) - JENKINS-41814 Expose event origin to listeners using the new SCM API event system. Version 1.58 (Jan 16, 2017) Please read this Blog Post before upgrading - other changes Stephen Connolly forgot to list Version 1.58-beta-1 (Jan 13, 2017) - JENKINS-39355 Using new SCM APIs, in particular to better support webhook events in multibranch projects. - JENKINS-40836 Report the primary branch ( default) to multibranch UIs. - JENKINS-23571 Configurable master cache directory location. Version 1.57 (Oct 12, 2016) - Added an option to check out a revset rather than a branch. - JENKINS-30295 Implemented APIs used by the Email-ext plugin. - JENKINS-37274 Suppressed some output in the build log that seems to have misled users. Version 1.56 (Jul 13, 2016) - JENKINS-28121 Pipeline checkouts could fail if the workspace directory did not yet exist. - JENKINS-36219 Changelogs were not displayed for multibranch (e.g., Pipeline) projects. Version 1.55 (Jun 17, 2016) - JENKINS-30120 As of Mercurial 3.4.2, polling was broken when using spaces in a branch name. - Excessive numbers of changesets were being considered by polling under some circumstances. - Allow credentials pulldown to work in Snippet Generator from a Pipeline branch project. - JENKINS-29311 Deprecated method printed message to log. - JENKINS-27316 Ugly stack traces in log file. Version 1.54 (Jun 11, 2015) - API incompatibility in 1.53. Version 1.53 (Jun 02, 2015) - JENKINS-10706 Expose new environment variable MERCURIAL_REVISION_BRANCH. - Add support for Kallithea. Version 1.52 (Mar 16, 2015) - Expose new environment variable MERCURIAL_REPOSITORY_URL. Version 1.51 (Nov 06, 2014) No code change from beta 3. Version 1.51-beta-3 (Oct 07, 2014) - SECURITY-158 fix. Version 1.51-beta-2 (Aug 05, 2014) - (pull #60) Expand environment variables in various fields. Version 1.51-beta-1 (Jun 16, 2014) - Adapted to enhanced SCM API in Jenkins 1.568+. - (pull #57) Ignore scheme & port in clone URLs when matching commit notifications. Version 1.50.1 (Oct 07, 2014) - SECURITY-158 fix. Version 1.50 (Feb 28, 2014) All changes in beta 1 & 2 plus: - JENKINS-15806 Fail the build if hg pullfails. Version 1.50 beta 2 (Feb 19, 2014) (experimental update center only) - (pull #49) Added branch column header. - JENKINS-15829 Do not do a fresh clone for every build when using repository sharing on a slave. - JENKINS-16654 Option to disable changelog calculation, which can be expensive in some cases. - JENKINS-18237 Fix use of Multiple SCMs plugin with matrix builds. - JENKINS-5723 Permit arbitrary configuration options to be set on a Mercurial installation. Version 1.50 beta 1 (Jan 08, 2014) (experimental update center only) - 1.509.4 baseline. - Require credentials 1.9.4 for an important bugfix. - (pull #47) New extension point for overriding polling comparisons. - JENKINS-5396 Supported option to update to a tag rather than a branch. - JENKINS-5452 Properly escape user names in changelog. - (pull #48) Added SSH private key credentials support. (Still no SSL client certificate support.) Version 1.49 (Oct 22, 2013) - JENKINS-20186 Jenkins 1.536+ would throw errors when saving jobs with a Mercurial browser set; fixing plugin to not use unnecessary code. Version 1.48 (Oct 08, 2013) - Same as 1.48 beta 1 except tested against a 1.509.3 baseline. Version 1.48 beta 1 (Sep 20, 2013) (experimental update center only) - Improved Credentials integration by using different command-line options that should work with the largefiles extension and otherwise be more reliable. - Added integration with the SCM API Plugin. Version 1.47 (Sep 10, 2013) - JENKINS-19493 Use form validation to alert users of invalid repository browser URLs before saving. - JENKINS-7351 Add support for HTTP(S) username/password credentials. (Not yet implemented: SSL client certificates, SSH private keys.) - JENKINS-18807 Ignore SCM triggers which ask to suppress post-commit hooks. (Plugin now requires 1.509.2 or newer.) - JENKINS-18252 Added ability to recognize /var/hg/stuffin push polling. Previously, it caused an error because of the lack of a URL protocol. - (pull #42) Added MERCURIAL_REVISION_SHORTenvironment variable. Version 1.46 (May 14, 2013) - JENKINS-9686 Expand default values of string parameters when polling. Version 1.45 (April 21, 2013) - JENKINS-3907 Let all runs in a matrix build update to the same Mercurial revision. - JENKINS-13669 Replaced NullPointerException with a more informative IOException caching fails during polling. - JENKINS-17353 Assume UTF-8 encoding for metadata in changelog.xml - don't relink when sharing repositories, as that makes mercurial time out. Version 1.44 (Feb 26, 2013) - (pull #33) Ignore authentication section in URL for purposes of matching push notifications. Version 1.43 (Feb 05, 2013) - (pull #32) Fix push notification when anonymous users lack read access. Version 1.42 (Nov 06, 2012) - JENKINS-12763 Excessive lock contention when using mercurial cache with multiple repos and slaves. Version 1.41 (Jun 05, 2012) - JENKINS-13174 (continued) Do not ignore .hgsubstate changes when polling. Version 1.40 (May 22, 2012) - JENKINS-12829 A failed update sets revision of build to 000000+ - JENKINS-13624 BitBucket URL not validated for format. - JENKINS-13329 --debug triggered fresh clones rather than updates. - JENKINS-12544 Illegal directory name on Windows when port number used in URL. - JENKINS-13174 Ignore .hgtags changes when polling. - JENKINS-11549 Include tip revision number in build metadata, not just changeset ID. - JENKINS-13400 Handle URLs. Version 1.39 (Apr 27, 2012) - JENKINS-11976 NonExistentFieldException warnings after upgrading mercurial plugin to 1.38 - JENKINS-11877 Jenkins fails to run "hg" command even though the path to it is specified correctly - JENKINS-2252 Mention SCM changeset ID in email - JENKINS-7594 Merges across named branches should not be ignored. - JENKINS-11809 Time out on pull operations. - Restore 'hg relink' usage accidentally removed earlier. - JENKINS-12162 Pay attention to subdirectory, needed for use in Multi-SCM Plugin (recommended replacement for Forest). - JENKINS-12361 Directory separator '/' for modules supported on Windows. - JENKINS-12404 Enable polling without a workspace when using caches. Version 1.38 (Dec 2, 2011) - JENKINS-11360 Add support for RhodeCode as a Mercurial Repository Browser (patches by marc-guenther and marcsanfacon). - JENKINS-10255 Mercurial Changelog should compare with previous build (patches by willemv and davidmc24). - JENKINS-11363 Add support for Mercurial's ShareExtension to reduce disk usage (patches by willemv). - Dropping support for the Forest extension. - JENKINS-11460 "Repository URL" field in mercurial plugin should trim input. - Added push notification mechanism. Version 1.37 (Jun 13 2011) - JENKINS-9964 Expose the node name via the API and the GUI. - JENKINS-7878 MercurialSCM.update(...) should respect slave node default encoding. Version 1.35 (Jan 19 2011) - JENKINS-7723 Attempted fix for problem calculating changeset ID of workspace. Version 1.34 (Nov 15 2010) - JENKINS-6126 Fixed NPE in polling. Version 1.33 (Aug 13 2010) - JENKINS-7194 FishEye support. Version 1.32 (Aug 12 2010) - JENKINS-3602 Ability to specify a subdirectory of the workspace for the Mercurial repository. - JENKINS-6548 NPE when cache was out of commission. Version 1.31 (Jun 10 2010) - JENKINS-6337 Polling broken when module list specified. Version 1.30 (May 17 2010) - JENKINS-6549 Mercurial caches for slaves was broken in 1.29. Version 1.29 (May 12 2010) - JENKINS-6517 Reduce memory consumption representing merges in large repositories. Version 1.28 (Mar 29 2010) - JENKINS-5835 Include repository browsing support for Kiln (patch by timmytonyboots). Version 1.27 (Mar 19 2010) - JENKINS-4794 Option to maintain local caches of Mercurial repositories. Version 1.26 (Mar 09 2010) - JENKINS-4271 Support parameter expansion for branch (or tag) field. - JENKINS-2180 Polling period can be set shorter than the quiet period now. Version 1.25 (Nov 30 2009) - JENKINS-4672 Option to run Mercurial with --debug. - Dropping support for Mercurial 0.9.x. Use 1.0 at least. - JENKINS-4972 Do not consider merge changesets for purposes of polling. - JENKINS-4846 Option to download Forest extension on demand. Useful for hard-to-administer slaves. - Restoring ability to specify Mercurial executable name other than INSTALLATION/bin/hg(lost in 1.17 with move to tool installation system). - JENKINS-1099 Make "modules" list work even after restart. Version 1.24 (Nov 13 2009) - JENKINS-1143 Add support for the Forest extension. - JENKINS-4840 Support for clean builds when using Forest. Version 1.23 (Oct 23 2009) - Module list should filter the changelog as well as polling. (JENKINS-4702) - Implement getAffectedFiles in MercurialChangeSet r22903. Version 1.22 (Sep 23 2009) - JENKINS-4461 fix used a JDK 6ism: JENKINS-4528. Version 1.21 (Sep 22 2009) - JENKINS-4461 fix was leaking file handles: JENKINS-4513. Version 1.20 (Sep 21 2009) - JENKINS-4514 alternate browsers do not show up in dropdown after updating the plugin. This is an intermediate quick fix until version 1.325 of the core is released. Version 1.19 (Sep 20 2009) - JENKINS-4461 fix was leaking threads. - Mercurial changelog now links to diffs and specific revisions of files (JENKINS-4493) Version 1.18 (Sep 18 2009) - 1.17 release was botched (Maven issue), rereleasing as 1.18. Version 1.17 (Sep 18 2009) - Fixed various issues with named branches. (JENKINS-4281) - If switching to clone due to path mismatch, at least explain what is happening in the build log. (JENKINS-1420) - Kill Hg polling process after one hour, assuming it is stuck on a bad network connection. (JENKINS-4461) - Multiple Mercurial installations may now be configured as tools. See Tool Auto-Installation for background. - Environment variable "MERCURIAL_REVISION" that contains the node ID like "272a7f93d92d..." is now exposed to builds. (Also retain ID of tip revision for each build; not yet exposed via XML API or GUI but could be useful later.) - Google Code and BitKeeper can be now specified (in addition to hgweb) as a repository browser (JENKINS-4426) Version 1.16 (May 27 2009) - The plugin was failing to clean up tmp*style file if the check out failed. (JENKINS-3266) - Fixed a file descriptor leak (JENKINS-2420) Version 1.15 - Fixed implementation of clean update. (JENKINS-2666) - Choose the hgweb source browser automatically. (JENKINS-2406) Version 1.14 - Hudson clones (never updates) when repo path ends with (JENKINS-2718) - Fixed a bug in the polling and branch handling (report) Version 1.13 - Exposed the details of the changelog to the remote API. Version 1.12 Version 1.11 - Handle hg snapshot versions gracefully (JENKINS-1683) Version 1.9 - Supported "modules" so that Hudson won't start builds for changes outside your module in hg (discussion) - The plugin now correctly handles special XML meta-characters (such as ampersands) in filenames. - Correcting hgrc parser to not print warnings about valid config files. - Missing help file added. Version 1.8 - Polling is made more robust so that warning messages from Mercurial won't confuse Hudson - Do not show the list of files "changed" in a Mercurial merge changeset, as this list is often long and usually misleading and useless anyway. In the unusual case that you really wanted to see the details, you can always refer to hgwebdir or the command-line client. Version 1.7 - Fixed a bug in hgweb support URL computation (JENKINS-1038) Version 1.6 Version 1.5 - Perform URL normalization on hgweb browser URL (JENKINS-1038) Version 1.4 Version 1.3 - Improved error diagnostics when 'hg id' command fails. - Added branch support (JENKINS-815) - Help text was missing - Added version check to the form validation. Version 1.2 - Updated to work with behavior changes in hg 0.9.4 (this plugin can still work with 0.9.3, too) - Plugin now works with slaves. Version 1.1 - "hg incoming" now runs with the --quiet option to avoid status messages from going into changelog.xml - fixed crucial bug where "hg pull" was run even if "hg incoming" didn't find any changes.
https://plugins.jenkins.io/mercurial
CC-MAIN-2018-09
refinedweb
3,035
50.63
In this part of the tutorial we will discuss button and software debouncing. We want to study button debouncing first and in some detail so we have a good understanding of what it entails. Button debouncing is important and should not be underappreciated. Button switches are one of the ways that we create input to the microcontroller. When the button is pressed, we expect a reaction such as an LED blink, or a menu scroll. If a button has not been debounced in some capacity, we can become. Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Button Debouncing To illustrate button debouncing, the project we selected contains two LEDs. When the button is pressed, the LEDs toggle between one another. A button press turns one off and the other on. When releasing the button, it can start the process again and cause the LEDs to toggle again. You will notice the LEDs will toggle two times or more with only a single button press. There are two ways to cripple debouncing. An in-circuit method (hardware) with use of a capacitor, and software debouncing. The hardware simply uses a capacitor to eliminate debouncing, and the software will create a variable that measures the confidence level of the button stream of ones or zeroes. Disclaimer illustration you will see that the circuit is connected on the breadboard sans the hardware debouncing, so the problem can be experienced. Two LEDs are attached to the microcontroller, both on PORT B, one on pin 0 and the other on pin 2. Both of these pins will be set to output and since the LEDs are green, a 330ohm. #include <avr/io.h> int main(void) { DDRB |= 1 << PINB0; //Set Direction for output on PINB0 PORTB ^= 1 << PINB0; //Toggling only Pin 0 on port b DDRB |= 1 << PINB2; //Set Direction for Output on PINB2 DDRB &= ~(1 << PINB1); //Data Direction Register input PINB1 PORTB |= 1 << PINB1; //Set PINB1 to a high reading int Pressed = 0; //Initialize/Declare the Pressed variable while (1) { if (bit_is_clear(PINB, 1)) //Check is the button is pressed { //Make sure that the button was released first if (Pressed == 0) { PORTB ^= 1 << PINB0; //Toggle LED in pin 0 PORTB ^= 1 << PINB2; //Toggle LED on pin 2 Pressed = 1; } } else { //This code executes when the button is not pressed. Pressed = 0; } <p> }</p><p>}</p> When the microcontroller is programmed, and the button is pressed repeatedly, it becomes clear that the LEDs will toggle, sometimes correctly and sometimes multiple times with only one button press. Add the capacitor and check the button pressing and LED toggling again. On the oscilloscope,. Step 2: Software Debouncing So why do we need to go over a new method when the other one seemed to work fine? Well, the software debounce method is essentially free if the program space and microcontroller cycle will allow it. With only a few lines of code, you can provide a lot more control of how the debouncing methods work with the button you're using. Debouncing in hardware may add additional costs to each developed board, and it is more difficult to determine a good debouncing for all the push button switches that will be used. However, if you want to preserve program execution cycles, it is best to go: int main(void) { DDRB |= 1 << PINB0; //For Notes on what these actions mean PORTB ^= 1 << PINB0; DDRB |= 1 << PINB2; DDRB &= ~(1 << PINB1); PORTB |= 1 << PINB1; int Pressed = 0; int Pressed_Confidence_Level = 0; //Measure button press confidence int Released_Confidence_Level = 0; //Measure button release confidence while (1) { if (bit_is_clear(PINB, 1)) { Pressed_Confidence_Level ++; //Increase Pressed Confidence Released_Confidence_Level = 0; //Reset released button confidence since there is a button press if (Pressed_Confidence_Level >500) //Indicator of good button press { if (Pressed == 0) { PORTB ^= 1 << PINB0; PORTB ^= 1 << PINB2; Pressed = 1; } //Zero it so a new pressed condition can be evaluated Pressed_Confidence_Level = 0; } } else { Released_Confidence_Level ++; //This works just like the pressed Pressed_Confidence_Level = 0; //Reset pressed button confidence since the button is released if (Released_Confidence_Level >500 { Pressed = 0; Released_Confidence_Level = 0; } } } } Discussions
https://www.instructables.com/id/Beginning-Microcontrollers-Part-9-Button-and-Softw/
CC-MAIN-2019-47
refinedweb
688
56.29
The latest version of the book is P1.0, released over-16) Paper page: 14 On printed page 14, in file chp2/wrapper.rb there are two invocations to GC.start as follows: GC.start GC.start(full_mark: true, immediate_sweep: true, immediate_mark: false) This second form (with options) seems to not be used anywhere else in the book, and looks like it is a leftover that was meant to be removed. - Reported in: P1.0 (15-Feb-16) Paper page: 15 "Let's use the wrapper to run our unoptimized code example from the previous chapter" but then the sample code has nothing to do with the mentioned example from the previous chapter. The structure of the "data.csv" file used in the example is not shown anywhere and that makes it hard to reason about it. --Nuno Pato - Reported in: P1.0 (07-Mar-17) PDF page: 17 If you want to modify an array in place, you need only to modify each of its elements in place. data.each{|str|str.upcase!} When bench-marked, I averaged 0.107 vs 0.109 with data.map!{|str|str.upcase!} Barely any difference, but still... map! is not really doing anything after str.upcase!.--Carlos J. Hernandez - Reported in: P1.0 (12-Feb-16) PDF page: 41-42 Command passes gc: :disable, but results show"gc":"enable" with a gc_count: 1. If GC is disabled, gc_count should be 0. =============== 2.2.0 :001 > Measure.run(gc: :disable) { Thing.all.load } {"2.2.0":{"gc":"enable","time":0.32,"gc_count":1,"memory":"33 MB"}} => nil =============== INVALID COMPARISON The above command is run using gc: :disable and compared to the following command where gc is enabled. =============== 2.2.0 :001 > Measure.run { Thing.all.select([:id, :col1, :col5]).load } {"2.2.0":{"gc":"enable","time":0.21,"gc_count":1,"memory":"7 MB"}} => nil "This uses 5 times less memory and runs 1.5 times faster than Thing.all.load." (Comparing 33 MB to 7 MB) =============== This is an invalid comparison as the first command runs without GC and is 33 MB, while the 2nd command runs with GC and reports 7 MB, but also has a gc_count of 1. When testing these commands myself, I get something more like the following: =============== 2.2.0 :001 > Measure.run(gc: :disable) { Thing.all.load } {"2.2.0":{"gc":"disable","time":0.32,"gc_count":0,"memory":"33 MB"}} => nil 2.2.0 :001 > Measure.run (gc: :disable) { Thing.all.select([:id, :col1, :col5]).load } {"2.2.0":{"gc":"disable","time":0.21,"gc_count":0,"memory":"16 MB"}} => nil =============== With, gc_count: 0 on both memory is only half the amount of memory on the 2nd command, not 5 times.--Thomas - Reported in: P1.0 (17-Dec-15) PDF page: 54 The link to KCachegrind in the footnote seems to be to an old page. Since I can't include links in errata, all I can say is that the old page has a link to the new page, and that should probably be substituted in.--Paul Fioravanti - Reported in: P1.0 (20-Dec-15) PDF page: 55 Using ruby-prof version 0.15.9 (on Ruby 2.2.4p230), in order to get the final line of chp4/ruby_prof_example_api1.rb printing the result of the FlatPrinter to the file, I had to slightly change the syntax, otherwise I ended up with an empty file. The API is `RubyProf::AbstractPrinter#print(output = STDOUT, options = {})`, so I needed to change `printer.print(File.open("ruby_prof_example_api1_profile.txt", "w+"))` to `printer.print(File.open("ruby_prof_example_api1_profile.txt", "w+"), {})`--Paul Fioravanti - Reported in: P1.0 (20-Jan-16) PDF page: 55 ruby: 2.3.0 ruby-prof: 0.15.9 This line `printer.print(File.open("ruby_prof_example_api1_profile.txt", "w+"))` I needed to change it to: `File.open("ruby_prof_example_api1_profile.txt", "w+") { |file| printer.print(file) }` I tried: `printer.print(File.open("ruby_prof_example_api1_profile.txt", "w+"), {})` but that didn't work either.--Meng Fung - Reported in: P1.0 (20-Dec-15) PDF page: 71 Call Graph rendering didn't work for me out of the box using QCacheGrind on Mac OSX, so I would suggest adding in some extra installation instructions for the Graphviz library on each platform. In order for me to get the graphs rendering, I needed to perform the following steps: $ brew install graphviz $ sudo ln -s /usr/local/bin/dot /usr/bin/dot The symlink was because QCacheGrind couldn't seem to find where dot was. Since the call graphs are great, I really think that adding extra installation information where possible would go far to prevent disappointment when you say you're just about to show "the most useful [graph] in KCachegrind", yet you just get error messages and have to go searching for a solution to render them properly.--Paul Fioravanti - Reported in: P1.0 (21-Dec-15) PDF page: 74 class AppTest < MiniTest::Unit::TestCase is now class AppTest < MiniTest::Test in Minitest 5.8.--Paul Fioravanti - Reported in: P1.0 (25-Jul-16) Paper page: 80 The description of chp5/app_optimized3.rb is incorrect/misleading in that it includes Regexp#initialize calls. However, the Regexp's do not include any interpolation, so ruby generates one static regexp during compilation and uses it for all calls to the function, it does not initialize a new regexp per method call. This is true as far back as ruby 1.8, and probably even true before that, as I believe perl operates similarly. Example, best tested with irb: ObjectSpace.count_objects[:T_REGEXP] # => 173 # Use regexp without interpolation, ruby creates static regexp on method definition: def a; //; end ObjectSpace.count_objects[:T_REGEXP] # => 174 # Calling the method does not generate a new one: (0..100).map{a} ObjectSpace.count_objects[:T_REGEXP] # => 174 # Use regexp with interpolation, ruby doesn't create a static regexp on method definition: def b; /#{}/; end ObjectSpace.count_objects[:T_REGEXP] # => 174 # It generates a new regexp every call: (0..100).map{b} ObjectSpace.count_objects[:T_REGEXP] # => 275 # Use regexp with interpolation with o modifier, ruby doesn't create a static regexp on method definition: def c; /#{}/o; end ObjectSpace.count_objects[:T_REGEXP] # => 275 # But it creates one the first time the method is called, and uses it for subsequent calls: (0..100).map{c} ObjectSpace.count_objects[:T_REGEXP] # => 276--Jeremy Evans - Reported in: P1.0 (06-May-16) Paper page: 102 Page 101 lists GC Time(ms) as 0.755 in the GC profiler report, but page 102 says "the single collection pass took almost 800 ms", which is 1000 times that value. (I hope I'm looking in the right place, but I can't see what else the almost 800 ms could be referring to.)--Steven K - Reported in: P1.0 (25-Jul-16) Paper page: 106 For the trick that works on Linux to clear the filesystem cache, use just echo instead of sudo echo. If you require passwords for sudo and don't support tickets, using sudo echo and sudo tee is going to require two password prompts, and the echo command does not need superuser privileges.--Jeremy Evans - Reported in: P1.0 (17-May-16) PDF page: 116 Missing word between "to" and "the performance" in this fragment: "Should the test find a slowdown, we want to the performance before and after, and their difference." - Reported in: P1.0 (20-Nov-16) Paper page: 117 It seems somewhat strange that the methods which are meant to help readers establish their own peformance testing/benchmarking routines (specifically the methods: measure, performance_benchmark and assert_performance) are not coded in the performance-aware way that is discussed in the book. (One could argue that they are helper functions not intended for high performance, but neverthless it was an obvious opportunity to show performance-aware code, which was missed.) - Reported in: P1.0 (27-Jul-16) Paper page: 160 When describing memsizes of objects, the book says that the extra byte for strings that can't be stored in the ruby object is for "upkeep". That is incorrect, the extra byte is so the string is null terminated, so you can pass the string to C str* functions that expected null terminated strings without worrying about SIGSEGV (not that that is a good idea). All bytes for upkeep are stored inside the object. --Jeremy Evans - Reported in: P1.0 (20-Mar-16) Paper page: 161 Small typo. Instead of "Now that you how Ruby allocates, ..." should be "Now that you know how Ruby allocates..."--Steve Jemens
https://pragprog.com/titles/adrpo/errata
CC-MAIN-2019-13
refinedweb
1,410
58.69
A simple but working Finnish language hyphenator. Project description A simple but working Finnish language hyphenator. By Pyry Kontio a.k.a Drasa (Drasa@IRCnet, pyry.kontio@drasa.eu) Hyphenates Finnish text with Unicode soft hyphens. (U+00AD) Mainly intended for server- side-hyphenation of web sites. Allows to set hyphenation-preventing character margins for words so that they won’t break right at the start or the end. (For example, it’d be a bit silly - although certainly possible in Finnish language - to break a word like ‘erikoinen’ at ‘e-rikoinen’. With default margin of 2, it breaks more stylistically pleasingly, ‘eri-koinen’.) Hyphenated html tags break web sites, so there’s the boolean argument skip_html. That enabled, it skips over all the words that are contained between “<” and “>” characters. Usage: as a standalone script: hyphenate_finnish.py [margin] [text] or as a Python module: from hyphenate_finnish import hyphenate; hyphenate(“some text but <html> isn’t gonna get hyphenated!”, margin=1, skip_html=True) It’s that simple. By the way, written in Py3k, but it seems to work with 2.7 too. Licensed with LGPL. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/hyphenate_finnish/
CC-MAIN-2019-18
refinedweb
212
69.18
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. On Wed, Sep 17, 2014 at 01:56:06PM +0400, Andrew Senkevich wrote: > > The wiki says: > > > > 3.1. Goal > > > > Main goal is to improve vectorization of GCC with OpenMP4.0 SIMD > > constructs (#2.8 in > > and Cilk Plus constructs (6-7 in > >) > > on x86_64 by adding SSE4, AVX and AVX2 vector implementations of > > several vector math functions (float and double versions). AVX-512 > > versions are planned to be added later. These functions can be also > > used manually (with intrincics) by developers to obtain speedup. > > > > It is the opposite of > > > > > > > > which is for programmers to use them directly in their > > applications, mostly independent of compilers. > > > > We need to come to an agreement on what goal is first. > > > > -- > > H.J. > > Hi H.J., > > of course the first goal is to improve vectorization. Usage with > intrinsics is additional goal and is not very significant. > > Attached first patch corrected according last comments in >. --- a/math/bits/mathcalls.h +++ b/math/bits/mathcalls.h @@ -46,6 +46,17 @@ # error "Never include <bits/mathcalls.h> directly; include <math.h> instead." #endif +#undef __DECL_SIMD + +/* For now we have vectorized version only for _Mdouble_ case */ +#if !defined _Mfloat_ && !defined _Mlong_double_ +# if defined _OPENMP && _OPENMP >= 201307 +# define __DECL_SIMD _Pragma ("omp declare simd") As the function is provided only on x86_64, it needs to be guarded by defined __x86_64__ too (or have some way how arch specific headers can tell what function are elemental). Also, only the N (notinbranch) version is provided, so you'd need to use "omp declare simd notinbranch", and furthermore only the AVX2 version is provided (that is not possible for gcc, you need all of SSE2, AVX and AVX2 versions, the other two can be thunked (extract arguments and call cos in a loop or similarly, then pass result in vector reg again). Jakub
https://sourceware.org/legacy-ml/libc-alpha/2014-09/msg00416.html
CC-MAIN-2021-43
refinedweb
316
63.59
In this article, I’ll show you how to use some of the features of Bing maps, and how to use the data Bing provides to build a Silverlight application to show the status of U.S. airports. I’ll break this project down into fine-grained steps so that you can see each aspect of it, and we’ll walk through the creation of the application from start to finish to get a thorough understanding of how this kind of application is put together. So, let’s get started! Preparation Before we can actually start building the Silverlight application and doing cunning things with data, we need to set ourselves up to be able to use Bing’s services. Registering First of all, you need to register on the Bing developer site. Click on Create an AppID (in the top right-hand corner), and you’ll need to enter your LiveID (or create one if you don’t have one already), and then submit some basic information such as Application Name, Description, Company Name and email address. It’s worth remembering that despite the registration for the service, Bing is currently free. Download the SDK After completing registration, head back to the Bing developer site, log in, and click on Download the SDK. You’ll be directed to the Microsoft Download Center to get a copy of the SDK, which is currently on version 2.0. Simply download the kit and perform the installation as you would for any other SDK. Creating the Application After you’ve registered and downloaded the SDK, open Visual Studio 2010 and create a “Silverlight Application” project; in my example, I will name the application AirportStatus_BingMaps (as you can see in figure 1), and to facilitate the publication of the application I’ll create a Web Site from the “Silverlight Application” template, as recommended. Figure 1. Creating the Silverlight Application project (click on the image for an enlarged view). Referencing the Assemblies Now that we have created the application, we’ll reference the Assemblies necessary to present the maps in the application. To select the Assemblies, right-click on the AirportStatus_BingMaps Silverlight project, select “Add Concerning …“, select the Browse tab, and navigate to the C:\Program Files\Bing Maps Silverlight Control\V1\Libraries directory. Select the files Microsoft.Maps.MapControl.Common.dll and Microsoft.Maps.MapControl.dll, and don’t forget to confirm that the references have been added to your project correctly. Creating the Map Now that we’ve created the application and referenced the assemblies, we can create a map within the project. Inside the MainPage.xaml file, within the LayoutRoot grid and just below the line: Now that we’ve added the columns (which will, in turn, determine the layout of our application) we’ll add the namespace that will reference Bing maps. Just above the same line in MainPage.xaml as before, reference the namespace using the syntax: In the area reserved for the declaration of the XAML namespace, you should see the complete list of references: Now that the assembly is referenced within our XAML, and basic layout of the application has been created, we can finally create the map. Just below the layout definition on the </ Grid.ColumnDefinitions> line, add the control (obviously, substituting “Your Bing Maps Key” as appropriate): This control has some properties which I’ll explain: First, it calls Bing the alias for the namespace, and uses this alias to call the controls contained within this namespace, followed by the alias name (myMAP). The control that we are calling in the current situation is the Map, which is followed by the CredentialsProvider property, where you need to input the registry key that your were emailed when registering with the Bing Developer Center. The second property, Mode, indicates what type of map is loaded (Road or Aerial view), and finally we have the property that indicates the name of control, as mentioned a moment ago. To verify your coding, run the application, and the result should look like Figure 2. Figure 2. The basic Silverlight application, calling a map from Bing (click on the image for an enlarged view). Building the Airport Class To improve the function of our currently basic application, let’s create a class that will contain all (or almost all) U.S. airports with their current status. To create the class, just hit Shift + Alt + C, select Class from the Add New Item window, and change the class name to “Airport” and the file name to Airport.cs. Now that the class is created, add the class properties as listed below: Note that the Location property does not have a primitive type, so the type is resolved using the Microsoft.Maps.MapControl which we added in the header file. Luckily for you, the details of the names and airport codes of the U.S. airports are contained in an XML file which I’ve already created, which can be downloaded from here. This file contains almost all U.S. airports, although you should note that it only contains their airport codes and names; the exact location of the airports identified will be found using a WCF service from Bing. To add the file to the project, you can simply download it and save it directly into the project. Bing’s WCF Reference Service To consume the services of Bing maps, right-click on the AirportStatus_BingMaps project and click on Add Service and, in the Address property, enter: This service is responsible for conducting searches of locations and drawing routes, among other things. More details of the service and the Bing SDK can be found on Microsoft’s interactive Maps Control SDK demonstration site. After entering the address into the application, click on GO, wait while the service is loaded, and you should eventually see something like figure 3, with the service referenced and loaded. For the rest of the article, I’ll simply refer to the service as “Geoservices”. Figure 3. Bing’s WCF Geoservices, references and loaded. Reading Airports.XML and Consuming the WCF Service Now that we have the service configured and the list of airports, we need to use the data we have to find the position of each airport. So, go back to the MainPage.xaml file and, just below the <Bing: Map CredentialsProv … >line, add the following code: In this code I am simply creating a container called StackPanel and putting some controls around it; namely a Button and a ListBox. Note that we’ve changed the DataTemplate of the Listbox so that the information is consistently being displayed as “ID – Name”, and note also that the XAML coding part of this project is finished! Next, we’ll open MainPage.xaml.cs and create a method that will read the Airports.xml file and then search for each airports’ location through the Bing Geoservices. First add the references for the Microsoft.Maps.MapControl and System.Xml.Linq namespaces, and then create the following two private properties in MainPage.xaml.cs (I will not use the MVVM pattern in this article): These properties will help us to consume the Bing services; the _geoservice is just a reference to the Bing service, and _geocodeRequest is where we will create the credentials that will be sent with the request that we’ll generate in just a moment. The third property is a generic list of type Airport, which will contain the list of airports for as long as the application is live. Now that the ancillary properties have been generated, we can create a GetAirports method to read the Airport.xml file and transform each entry into an Airport object to be added to the Airports list. To add the necessary System.Xml.Linq assembly to AirportStatus_BingMaps, you follow the same process as when we added the assemblies at the start of the article, although this time you’ll need to navigate to the C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries Client\ directory and select the System.Xml.Linq.dll file (don’t forget to confirm that the reference is added correctly). Now we can add the following code to the MainPage.xaml.cs file: Don’t forget to add the Microsoft.Maps.MapControl and System.Xml.Linq namespaces, or obviously none of this will work. Here’s what’s happening in this method: - It first creates a local variable of type XDocument that will read the XML file through LinqToXML, - A private property receives an instance of the previously-created GeocodeRequest class, - An instance of the Credentials class is created, which will load the credentials that we mentioned at the start of the article, - The _geocodeRequest.Credentials.ApplicationId property is altered, receiving the ApplicationId in the myMAP control via XAML, - A command-type foreach loop is created, which will traverse the entire airports.xml file and, for each Airport Node, create an instance of Airports and add it to the private property of type list<Airport> “Airports”. Finally, note that the last foreach line is calling another method, which is responsible for making the location search work; this Search method is described below: The method first confirms that the Geoservice has no other instances running, so as to avoid unnecessary code and unnecessary memory consumption. If the result of this check is null, then the method creates a new instance of the service through the configuration interface of the endpoint, and then adds an EventHandler to the GeocodeCompleted event, which runs at the end of the WebService search, which is an asynchronous WCF service call from the Silverlight application. Remember, in Silverlight you can only make calls to asynchronous services. Upon validation and instantiation of the service, we indicate the search query which, in our example, will be the ID of the airport, and then we have an asynchronous call to the method of the newly-created and referenced GeocodeAsync service. Now, we are running the service asynchronously, but we know that the returning results will have to eventually be addressed in order to implement the GeocodeCompleted event, as the code below illustrates: As you can see, this method is handling the return from the GeocodeCompleted service. Note that validation is performed first , confirming that the service returned successfully, and then we identify if the result is of the “Airport” type. Having performed these validations, we search through our pre-prepared list of airports to find an airport with the same name as the WebService returned (performed by the lambda expression created in the method). After finding the airport in the list, the Location property is just updated to the location obtained from the service. To test the application now, go to the MainPage class constructor and update the following code snippet: Now run the application and check that the result looks similar to Figure 4: Figure 4. Running the application with Bing’s WCF Geoservice (click on the image for an enlarged view). Locating and Marking Airports on the Map Now that the Geoservice is up and running, and the map is being displayed, we can mark the location of airports on the map. First we’ll create a private property named imageLayer with type MapLayer (private MapLayer imageLayer;), and then locate the GetAirports () method and added the following code to the beginning of the method, before we create a variable of type XDocument doc: Perhaps you’re wondering what this is? The first is a property receiving an instance of class type MapLayer, which is a kind of layer that can be created on the map and customized with some information, and we add this layer to the map through the Children property of myMAP. Next, we need to find the Search (string id) method and, just below the line: … we need to add an EventHandler for the GeocodeCompleted event, encoded in a similar way to _geoservce.GeocodeCompleted += new : Tip:You can use the snippet functionality of Visual Studio in this instance; code to the equality operator and press tab twice, and Visual Studio will handle the creation the method and make the reference to the event. The code that will be implemented in the _geoservice_GeocodeCompleted method, referenced by the GeocodeCompleted event, is as follows: Before we continue, you need to add an image to the AirportStatus_BingMaps project through the “Add Existing Item” menu, and change the URI to the name of the image which you’ve added (in my case it was Flag2_Green.png), and ensure that this is reflected in your method. First we’re verifying the status of the returned results. If successful (i.e. the result is of type “Airport”) and if the conditions are met, then the method creates an object of type Image, changes some of that object’s properties (such as the image source and size), and then adds the image to the customizable layer that was added to the map earlier. The second parameter of the method, AddChild, indicates the location on the map where the image should be positioned, which in our case is the position returned by the Geoservice airport search. Next, the method finds the given airport using the description from my imported XML list, and then swaps the Location property in the file with the results from the service, so that we now know the correct position of each airport on the list. Now that we’ve marked the map with the location of airports, we’ll arrange it so that selecting an airport in the list on the right-hand side of the application will focus on that airport in the map. So, we return to the MainPage.xaml file and locate the lstbAirports ListBox control, and delegate an event for this control by modifying the code to get an XAML statement like this: Tip: use the NewEventHandler snippet of Visual Studio to create the method in MainPage.xaml.cs automatically. Next, change the lstbAirports_SelectionChanged method to: What we did in this method is just to change the SetView property of the map to the location of the airport selected in the list, and the second property is simple the level of zoom. Now let’s create a new property called uiElements that will be of type Dictionary<Airport, UIElement>; this will map an image to the airport to facilitate the next step, where we’ll return the service status of the airport. After declaring the Dictionary variable, go to the constructor and instantiate the property immediately after the InitializeComponent () method: Go to the _geoservice_GeocodeCompleted method and immediately after the line: … Include this code snippet: In this snippet we discover whether the airport is on the list of UI elements and, if it’s not, we add the image and airport to the new dictionary, as this will be used in the status mapping later on. Let’s see what the application looks like at this point: Figure 5. The entire list of airports with their locations displayed on the map (click on the image for an enlarged view). Figure 6. Selecting Las Vegas airport and automatically having the map focus on it (click on the image for an enlarged view). Out Of Browser Now that the application nearly 100% complete, we’ll enable it to exist Out Of Browser. if you look at the layout of the application, you’ll see the Install button in the top right-hand corner. That button is going to install the application on the local machine, so that we can consume an RSS feed without going through security policies and get the desired result: the current status of the airports. To do this, open the project properties of AirportStatus_BingMaps, navigate to the Silverlight tab and check “Enable application running out of the browser“. Next , click the Out-Of-Browser Settings button, check the “Require elevated trust When running outside the browser ” option, and click OK. Figure 7. Configuring the Silverlight application to run Out of Browser(click on the image for an enlarged view). Now that the application is configured to run outside the browser, we close the project properties, go back to MainPage.xaml, and delegate the Install button’s click event: Now go to MainPage.xaml.cs and locate the btnInstall_Click method created by Visual Studio, and change the code to: First a validation is performed by checking whether the application is installed or not and, if not, then the method performs the installation. When we start the application in the browser and click on “Install” we’ll see the following dialog: Figure 8. Installing the application on your local machine. If we click on Install, the application will then run on our machine as if it were a local executable. Reading information about the status of airport RSS Now that the application is running out of browser, it can access the RSS feed from airportfact.com and use it to find the current status of the US airports. First let’s give MainPage.xaml.cs a way of cleaning the HTML that comes through the RSS feed: Now we’ll add another method that will request the appropriate RSS feeds: … and immediately below that, we’ll write a method for handling the return from the RSS feeds and adding the results to our map: The GetDescription method opens the root URL of the RSS feeds, and searches the titles for the code of the airport it’s currently updating (e.g. LAX), navigating through the possible feeds using the information from our list of airports. The client_DownloadStringCompleted method then takes this information, locates the given airport in the XML list and updates its property value with the current status of the airport, and finds the airport’s corresponding image on the map and sets its property value to match. To call the service that is now contained in the GetDescription (string search) method, find the _geoservice_GeocodeCompleted method, and after the line: …call the GetDescription method, passing the ID property of the airport variable as a parameter: The end result should look like this: Figure 9. The finished application, locating U.S airports and reporting on their current status (click on the image for an enlarged view). Conclusion You can download the complete project from the top of this article to see the finished result. This little application only scratches the surface of Bing maps and the services it provides, and is similarly a very basic demonstration of Silverlight’s abilities. Nevertheless, you’ll hopefully agree that this was a very simple application to write, and that it was relatively easy to manipulate multiple data sources to get the desired results.
https://www.simple-talk.com/dotnet/.net-framework/building-an-airport-status-mashup-with-silverlight-and-bing-maps/
CC-MAIN-2017-04
refinedweb
3,108
55.47
1. did my code work? Advertising 2. It appears that you have to reset the mysql_fetch_*(), not reset the $myrow. Try calling: mysql_data_seek($result_it, 0) before while ($myrow = mysql_fetch_assoc($result)) It would appear that it resets the $result_it for a call to mysql_fetch_row(). I would assume that it would also work for mysql_fetch_array() and mysql_fetch_assoc() Worth a try. Justin on 29/08/02 4:16 PM, Petre Agenbag ([EMAIL PROTECTED]) wrote: > Hi Justin > OK, a quick feedback on your previous suggestion: > > I tried to unset the $myrow_it, but it still didn't produce any output. > The only way I could get it to work was with the same method you > suggested in this e-mail. > I had to create 2 new vars ( which basically boils down to 2 more > SQL's). > > I think I'm not 100% understanding what the $result variable contains > $> => 'value'. In this case, I've chosen to use the three types of status as >> the key, and the natural language or heading as the value. >> >> <? >> $ticketTypes = array( >> 'OPEN' => 'New Tickets', >> 'CURRENT => 'Current Tickets', >> 'OLD' => 'Old Tickets' >> ); >> ?> >> >> Then you can loop through this array with a foreach, and do the same thing >> to all three elements. >> >> Now, we have to take the $type (key) and $heading (value) of each array >> element (the three options) and use them within the loop to produce the >> different results. >> >> <? >> >> // array of ticket types >> $ticketTypes = array( >> 'OPEN' => 'New Tickets', >> 'CURRENT => 'Current Tickets', >> 'OLD' => 'Old Tickets' >> ); >> >> >> foreach($ticketTypes as $type => $heading) >> { >> // print heading >> echo "<B>{$heading}</B><BR>"; >> >> $sql = "SELECT * FROM tickets WHERE status='{$type}'"; >> $result = mysql_query($sql); >> if(!$result) >> { >> echo "There was a database error: " . mysql_error() . "<BR>"; >> } >> else >> { >> // no need to to do an $i++ count... just use this: >> $total = mysql_num_rows($result); >> while ($myrow = mysql_fetch_array($result)) >> { >> // a little trick i have to make each column >> // name (eg "status") into it's own var (eg $status) >> foreach($myrow as $k => $v) { $$k = $v; } >> echo "{$company} :: {$title} :: {$content}<br>"; >> } >> echo "Total New: {$total}<br>"; >> } >> } >> ?> >> >> >> Now, I haven't tested the above, so gimmie a yell if it breaks... >> >> When I strip all the bullshit and comments out, and condense the script down >> to it rawest form, it's like 19 lines of code -- that's including error >> handling and all sorts of stuff!! And it will work for 2 ticket types >> (still less lines of code than your 40+) and for 50 ticket types -- just >> make the array bigger. >> >> The other point is, if this script was called for many different pages, then >> you could include it all in a function... but that's for another day! >> >> >> Sure, you're doing three sql queries, but each one of them is returning a >> more focused result set... I'd have to do some tests, but I reckon there's >> very little difference between ONE QUERY + MASSIVE AMOUNTS OF PHP CODING >> if() sections etc etc, versus THREE QUERIES + SOME CODING. The nature of >> what you want to achieve with this script is conducive to using three >> separate queries. >> >> I know less queries == faster, but it's not always the case, and not always >> worth worrying about, unless you've got a HUGE site with millions of hits a >> day... you just won't notice the benefit, in comparison to the advantage. >> >> >> Some pages on a site I'm working on right now, hinge.net.au, have 5-10 >> queries on them, for sessions, users, content, data, counters, logging, >> message boards, etc etc. And I've NEVER noticed a performance problem, or >> got any complaints. >> >> >> When you've got maybe 500 rows in there (or 5,000, or 50,000), it'd be nice >> to run an a/b test with a script timer, and get some averages to find out >> definitively. >> >> >> Enjoy, >> >> Justin >> >> >> >> on 28/08/02 11:16 PM, Petre Agenbag ([EMAIL PROTECTED]) wrote: >> >>> Justin, >>> Thanks for the reply. >>> I frequently see your comments on threads here, and don't worry, I >>> wouldn't take any as insults as I can see you want to help, and I >>> appreciate that! >>> >>> So, you are more than welcome to show me alternatives, I am by no means >>> an ace programmer, and I eagerly lap up all comments and suggestions. >>> >>> I will try the suggestion, and will report back. >>> >>> Just to sort of explain my thinking process. >>> I would think that limiting queries to a db to the absolute minimum >>> would be the "best", BUT, I can now see that there are no easy way, as >>> the IF iterations clearly show... >>> >>> I would me very interested in your suggestions as to "cleaning" up the >>> code, for I have seen plenty of examples where you can write the same >>> functionality in many many different ways, and the only difference is >>> normally in the way people think about the problem and the solution. >>> It can't be taught,and will only come with experience, which I can >>> hopefully get a head start on with this list... >>> >>> Thanks alot. >>> >>> On Wed, 2002-08-28 at 14:52, Justin French wrote: >>>> Try unset($myrow_it). >>>> >>>> You don't want to reset the array, you want to clear it, then use a new >>>> array (with the same name, and the same $result) for another while loop. >>>> >>>> FWIW, I'm pretty sure I'd just use three distinct queries. Infact, it'd be >>>> worth you timing your scripts with both these versions to see what works >>>> out >>>> quicker -- three queries, or lots of if() statements [remember, you're in a >>>> while loop, so 5000 rows * 3 if statements = 15000!!] >>>> >>>> Put this at the top: >>>> >>>> <? >>>> $show_timer = 1; >>>> function getmicrotime() >>>> { >>>> list($usec, $sec) = explode(" ",microtime()); >>>> return ((float)$usec + (float)$sec); >>>> } >>>> $time_start = getmicrotime(); >>>> ?> >>>> >>>> And this at the end: >>>> >>>> <? >>>> if($show_timer) >>>> { >>>> $time_end = getmicrotime(); >>>> $timer = $time_end - $time_start; >>>> echo "Timer: {$timer}<BR>"; >>>> } >>>> ?> >>>> >>>> Run the script with both versions about 10 times each, and take the average >>>> :) >>>> >>>> >>>> There are many many ways I can see that you could clean up and optimise the >>>> script you've shown (not intended as an insult AT ALL!) -- quite possibly >>>> some of these would make more of a difference to your performance than >>>> limiting yourself to just one query... at the very least, it would make >>>> updating the script easier, and reduce the lines of code. >>>> >>>> >>>> Good luck, >>>> >>>> Justin French >>>> >>>> >>> >> > -- PHP General Mailing List () To unsubscribe, visit:
https://www.mail-archive.com/php-general@lists.php.net/msg77033.html
CC-MAIN-2018-17
refinedweb
1,051
68.4
The latest addition to my set of Arduino shields is a true fun thing: The ElecFreaks.com JoyStick Shield 🙂 ElecFreaks.com Joystick Board with FRDM-KL25Z and nRF24L01+ The board costs less than US$10 (see) and enables any Arduino board to be a gaming board 🙂 ❗ I had to heat up and re-solder the Arduino headers, as the soldering was not very well done (contacts where not solid). After fixing this, I had no problems. The board has - 3.3V and 5V supply voltage and logic choice - Socket for nRF24L01+ transceiver - All Arduino connectors are routed through, and one connector can be used for a Nokia 5110 display. - 4 big (red and blue) push buttons: A, B, C an D - Two small push buttons (E and F) - Joystick with X and Y potentiometer and extra push button (KEY) ❗ While all Arduino headers are available on the top side of the board, it is not possible to stack another board on top of the Joystick board. The X/Y joystick has an extra push button (KEY) for pressing the stick down: Demo Application I have created a demo application for the shield (see the link to the project and sources on GitHub at the end of this article). The example runs with FreeRTOS, handles all buttons and the x/y joystick, features a command line shell and sends commands with the nRF24L01+ transceiver. nRF24L01+ Module I don’t have that Nokia 5110, but I’m using the nRF24L01+ transceiver with the board. Because the interrupt line of the nRF24L01+ module is not present on this board, I had to extend RNet stack so it works without interrupts in polling mode. To use the module in polling mode, I disable the interrupt pin: In the RNet stack, I enable polling: With this enabled, the transceiver part of the stack polls the status byte in the transceiver automatically: /* ** =================================================================== ** Method : RF1_PollInterrupt (component nRF24L01) ** Description : ** If there is no interrupt line available, this method polls ** the device to check if there is an interrupt. If ** Parameters : None ** Returns : Nothing ** =================================================================== */ void RF1_PollInterrupt(void) { uint8_t status; void RADIO_OnInterrupt(void); /* prototype */ status = RF1_GetStatus(); if (status&(RF1_STATUS_RX_DR|RF1_STATUS_TX_DS|RF1_STATUS_MAX_RT)) { RF1_CE_LOW(); /* pull CE Low to disable transceiver */ RADIO_OnInterrupt(); RF1_OnInterrupt(); /* call user event (if enabled)... */ } } If you do not have the nRF24L01+ module, it can be disabled with setting the macro PL_HAS_NRF24 in platform.h to 0. Push Buttons I’m using the Processor Expert ‘Key’ component for the push buttons: This components will do all the debouncing and generates the events for the application. Command Line Shell The shell offers these commands: -------------------------------------------------------------- Joystick -------------------------------------------------------------- CLS1 ; Group of CLS1 commands help|status ; Print help or status information FRTOS1 ; Group of FRTOS1 commands help|status ; Print help or status information radio ; Group of radio commands help|status ; Shows radio help or status channel ; Switches to the given channel. Channel must be in the range 0..127 power ; Changes output power to 0, -10, -12, -18 sniff on|off ; Turns sniffing on or off rnwk ; Group of rnwk commands help|status ; Shows help or status app ; Group of app commands help|status ; print help or status information The ‘status’ command shows the system and joystick status: -------------------------------------------------------------- SYSTEM STATUS -------------------------------------------------------------- Firmware : Apr 26 2014 17:29:21 FRTOS1 : TASK LIST: Name Status Prio Stack TCB# -------------------------------------------------------------- Shell R 1 320 1 IDLE R 0 181 4 App B 2 98 3 RNet B 3 120 2 RTOS ticks : 1000 Hz, 1 ms Free heap : 3728 bytes Radio : state : READY_TX_RX sniff : no channel : 0 power : 0 dBm OBSERVE_TX : 2 lost, 15 retry rnwk : addr : 0xFF app : Buttons : A(off) B(off) C(off) D(off) E(off) F(off) KEY(off) Analog : X: 0x7F9C(0) Y: 0x7FE6(0) The application is using events, and when a button is pressed, this is sent as a message with the RNet stack: void APP_HandleEvent(uint8_t event) { #if PL_HAS_NRF24 uint8_t data; #endif switch(event) { case EVNT1_A_PRESSED: CLS1_SendStr((unsigned char*)"A pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'A'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; case EVNT1_B_PRESSED: CLS1_SendStr((unsigned char*)"B pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'B'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; case EVNT1_C_PRESSED: CLS1_SendStr((unsigned char*)"C pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'C'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; case EVNT1_D_PRESSED: CLS1_SendStr((unsigned char*)"D pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'D'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; case EVNT1_E_PRESSED: CLS1_SendStr((unsigned char*)"E pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'E'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; case EVNT1_F_PRESSED: CLS1_SendStr((unsigned char*)"F pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'F'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; case EVNT1_KEY_PRESSED: CLS1_SendStr((unsigned char*)"KEY pressed!\r\n", CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data = 'K'; (void)RAPP_SendPayloadDataBlock(&data, sizeof(data), RAPP_MSG_TYPE_JOYSTICK_BTN, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif break; default: break; } /* switch */ } The application main task performs all the periodic work, and sends joystick x/y data: static void AppTask(void *pvParameters) { uint16_t cntMs; uint16_t x, y; int8_t x8, y8, x8prev, y8prev; #if PL_HAS_NRF24 uint8_t data[2]; #endif uint8_t buf[24]; CLS1_SendStr((unsigned char*)"Hello from the Joystick App!\r\n", CLS1_GetStdio()->stdOut); cntMs = 0; x8prev = 127; y8prev = 127; /* should be different from center position */ for(;;) { if (APP_GetXY(&x, &y, &x8, &y8)!=ERR_OK) { CLS1_SendStr((unsigned char*)"Failed to get x/y!\r\n", CLS1_GetStdio()->stdErr); } else { if ((x8!=x8prev) || (y8!=y8prev)) { /* send only changing data, and only if not zero/midpoint */ UTIL1_strcpy(buf, sizeof(buf), (unsigned char*)"xy: "); UTIL1_strcatNum8s(buf, sizeof(buf), x8); UTIL1_chcat(buf, sizeof(buf), ','); UTIL1_strcatNum8s(buf, sizeof(buf), y8); UTIL1_strcat(buf, sizeof(buf), (unsigned char*)"\r\n"); CLS1_SendStr(buf, CLS1_GetStdio()->stdOut); #if PL_HAS_NRF24 data[0] = (uint8_t)x8; data[1] = (uint8_t)y8; (void)RAPP_SendPayloadDataBlock(&data[0], sizeof(data), RAPP_MSG_TYPE_JOYSTICK_XY, RNWK_ADDR_BROADCAST, RPHY_PACKET_FLAGS_NONE); #endif x8prev = x8; y8prev = y8; } } if (cntMs>500) { LED1_Neg(); cntMs = 0; } KEY1_ScanKeys(); EVNT1_HandleEvent(); FRTOS1_vTaskDelay(100/portTICK_RATE_MS); cntMs += 100; } } Summary With the joystick board, I have now a nice remote controller for my Zumo bot, and I have a wireless link with the nRF24L01+ module. I need to add batteries, and then I have a cool remote controller I can use for pretty much anything. I have some other Nokia displays available, so maybe I should spend some time implementing a game (PacMan?) :-). The project and sources are available on GitHub. Happy Gaming 🙂 Pingback: Joystick Shield with nRF24L01 driving a Zumo Robot | MCU on Eclipse Pingback: Snake Game on the FRDM-KL25Z with Nokia 5110 Display | MCU on Eclipse Pingback: User Interrupt on NMI Pin with Kinetis and ExtInt Component | MCU on Eclipse I am getting a totally different view of the Key component in Processor expert components view. How do you add multiple keys, for example. mine says referenced component disabled or not installed. may be there are some components I don’t have, I will try to update from source forge repo. I am using KDS 3.0. Is your project a Kinetis SDK project? If so, then all these LDD or most of the McuOnEclipse components won’t work with the SDK. Nop, its just a Processor Expert project in KDS 3.0 polling or interrupt mode? Either. Could you show the component properties of key and how the UI looks like when adding multiple keys. Am at a loss here. Sorry, got it, If I use polling and no interrupts, then I can have multiple keys, but the challenge was, I have to select the child component Inhr1:BitIO to select the pin for the key** ok, good that this one has been resolved 🙂 Are you using TU1 fro a trigger TRG1_AddTick(); on timeout? I am figuring out some things from the typical usage help on component 🙂 You can use any timer (I use TimerUnit in most cases). Simply feed the AddTick() method with the specified frequency and you are good to go. specified freq? TickPeriodMs, in the component settings/properties. By default 10 ms. You need to call AddTick() with that frequency (period). If you select ‘RTOS’, then it is expected that you use the RTOS (FreeRTOS) tick hook to call AddTick(). OK, so I have a timer with a period of 10ms calling TRG1_AddTick(); on its TI1_OnInterrupt event. Still nothing. I can set a breakpoint into the timer events but not the keypressed even, debugger shuts down. I am doing this with interrupts disabled for the Key component BTW. debugger shuts down? Because of breakpoints in removed code? If you set a breakpoint, do you get a ‘tick mark’? I was suspecting it was because of removed code as well, for if you remove a component, the events in events.c don’t get deleted automatically even after generating code. But then its not the case, the components are there, I even generated code again. Just can’t set breakpoints in the KEY1_OnKeyPressed event. I am trying to control a robotic arm. I just have to wait for the repo to clone and try to compare with yours tomorrow. Thanks anyways. I don’t think it is because of the components. I think the problem is somewhere else 😦 For the trigger component, apart from calling the addTick() periodically from a timer event, am I also supposed to call AddTrigger() somewhere ? I dont seem to understand this trigger mechanism. Coz I have a trigger defined by default for the KEY1 component as “#define TRG1_KEY1_PRESS 0” but where is this used? I think there is a connection missing. I checked the settings and there is no difference 😦 Checking settings with what? I’m sorry, but I’m somewhat lost with your problem :-(. Are you using the FRDM-KL25Z with that Joystick shield? Do you have proper pull-ups for your push buttons installed if using your own buttons? I am using the FRDM-KL25Z with the joystick shield. The PE components settings. Ok, let me check that project myself again. Maybe something has changed in between… Ok, so I looked at your code, a bit complex, am not using RTOS, I can see that somewhere in application.c you are repeatedly calling KEY1_ScanKeys(); through CTRL_ScanKeys();, the KEY1_OnKeyPressed is firing when a key is pressed and CTRL_OnKeyPressed() is setting events and then in Application.c you call APP_HandleEvent(); which handles those events set by OnKeyPressed(). All this while the timer event is calling AddTick(). I was simply not scanning the keys in main!! Thats it. Now the KeyPressed event is firing. Thanks. Ah, yes! If you are in polling mode, you have to poll (well, scan) the keys 🙂 I also updated the gcc toolchain to 2015 q2, I guess that helps with the breakpoints as you indicated on the launchpad bug report. Hello, Erich I’m trying to help a customer who is using your Component “KEY” in a project with K22 in KDS. I order to analyze this issue separately, I created a new project in KDS, and set PTB17 (which in FRDM-K22F is connected to SW3 button) as a button on your component “KEY” and enabled the Port interrupt for detecting falling edges on this pin. Then I set PTD5, which is connected to a led, as a GPIO output and toggle this port every time the Port interrupt happens in ISR (in Events.c). The first time I press SW3, the interrupt happens and the led is toggled. But the next times, it doesn’t work anymore. I realized that, after the event occurs, the interrupt is disabled in PCR register. More specifically, after it runs “Scan” function in “keyPin1_OnInterrupt” (in KEY1.c file). How can I avoid this? Is there a specific setting to do in “KEY” component for that? Thanks! Hi Marco, are you using the Trigger module to debounce the key? If so, you need to advance the trigger with the AddTick() method. If it enters debouncing, it disables the key interrupt during debounce time, maybe this is the problem. Maybe I should come up with a dedicated example? Hi Marco, I have created an example project and pushed it on GitHub here: I hope this helps, Erich Hi Erich. I found a bug in the component. In the Component inspector, you can select number of Button and number of Hat, but the USB_Descriptor, don’t generate correct descriptor for selected parameters. I supose you create a basic and fixed descriptor for this component. But for me is OK (I modify the descriptor for my application). Thank’s Hi Sergi, yes, thanks for the reminder, that’s something I wanted to finish for the release, but have not implemented it yet. Hi Sergi, I have now implemented settings to configure the number of controls. That works fine for number of buttons (I tested up to 24 buttons). For the analog, throttle and hat switch I have a setting to have one or zero (disabled), as this would require larger changes (duplications in the USB descriptor). Are you actually using multiple hat switches? If so, I could continue looking into this, but in my case I only have a hat switch with 4 positions, so I’m fine 🙂 Hi. No, I don’t need a hat. Only I need 16 buttons. And change the X,Y from 8 to 16 bits. Regards. Hi Sergi, ok, good point about the analog bits, I will add a setting for this. Erich
https://mcuoneclipse.com/2014/04/27/joystick-shield-with-the-frdm-board/
CC-MAIN-2017-47
refinedweb
2,239
60.85
view raw I'm having trouble getting a parent class mocked with mock.patch parent.py import mock class Parent(): def __init__(self): print("Original recipe") child.py from parent import Parent class Child(Parent): def foo(self): print('Parent is {}'.format(Parent)) test.py import mock from child import Child c = Child() # expect 'Original recipe' c.foo() with mock.patch('child.Parent'): c = Child() # expect silence c.foo() test.py Original recipe Parent is <class 'parent.Parent'> Parent is <MagicMock name='Parent' id='4325705712'> Original recipe Parent is <class 'parent.Parent'> Original recipe Parent is <MagicMock name='Parent' id='4325705712'> You are not patching the Parent class, you are patching the child module, changing its Parent attribute to a mock. This does not change Child at all, because it still uses the old Parent class as base class. In Python 3, you can instead patch Child.__bases__ to change the base class at runtime. This come with its weirdnesses, of course. Python has no "variables", only names bound to specific objects in memory. Changing those names bindings (e.g. patching the scope they are contained it with mock.patch or setattr) has absolutely no effect on previous uses of these bindings. This means that although you do patch the Parent attribute of the child module, replacing it by a Mock, as the module has already loaded, the class is already defined with the old target of the Parent attribute, which is, the original Parent class. Childinstances Parentas external with mock.patch('child.Child.method_that_calls_method_on_parent'): ... If you want to isolate and mock method on Parent when testing instances of Child, you could make the calls to Parent sit in dedicated methods (and then patch these methods), as you'd do test external classes. Parent If you know in advance which methods of Parent you'll need to patch, you can just imply patch the methods on Parent. with mock.patch('parent.Parent.method'): ... This will mutate the value of the Parent attribute (which is the same for the child and parent modules, as child imports Parent from parent), instead of modifying which objects the Parent attribute points to in a particular module as you were doing before. Parentbehave like a Mock with mock.patch('parent.Parent.__getattribute__'): ... This is the most close to the intent of your original code. It relies on changing the way Python gets attributes from the Parent class, effectively patching all of its possible attributes. The disadvantage with this is that you'd get a mock even for non-existing attributes, but that was the case with your original approach as well. This can be overcome by replacing __getattribute__ with a wrapper that returns a mock only for found attributes : def _getmock(self, name): value = object.__getattribute__(self, name) return Mock(value) _original = getattr(parent.Parent, '__getattribute__') setattr(parent.Parent, '__getattribute__', _getmock) try: ... finally: setattr(parent.Parent, '__getattribute__', _original) (Your test suite probably provides a way to temporarily patch _getmock as parent.Parent.__getattribute__, as mock.patch does, which would make this simpler.) This can be further customized to specify the type and parameters of the mock created depending on the attribute name ( name in _getmock) or value ( value in _getmock), or making it so that the same mock is returned for when the same attribute name is accessed multiple times.
https://codedump.io/share/XMETQEfpJ41A/1/patching-a-parent-class
CC-MAIN-2017-22
refinedweb
555
56.55
Injecting parameter values Hi, it's me again. I once played around with spring for Java and I wonder if it is possible to Inject parameters into constructors and setters with the RL2 framework. Something like injector inject into class Alpha the string "Hello" using setter setString(). Or inject into constructor param 1 = 4, param 2 = Classname. On a side note, how can i use named injections? Thanks. Comments are currently closed for this discussion. You can start a new one. Keyboard shortcuts Generic Comment Form You can use Command ⌘ instead of Control ^ on Mac Support Staff 1 Posted by Ondina D.F. on 13 Feb, 2014 11:11 AM Named injection: Mapping: Injection point: Using an interface: Constructor injection Mappings: Dependencies are mapped first, then the class needing them in the constructor! SomeModel: 2 Posted by matej on 13 Feb, 2014 11:12 AM 1: if constructor has parameters, they will be automatically injected if the class is created with injector. (you inject the class somewhere, or you create it directly with injector.getInstance() or injector.instantiateUnmapped if the class is not mapped. to inject into setters, just put inject tag above setter [Inject] public function set (value:Foo):void; 2:to use named injections, you would inject like this [Inject(name=“foo”)] public var foo:Foo; it works for setters also, but I don’t know how to achieve that in contractor When you map the class, you have the optional name parameter injector.map(FOO,”foo”); 3 Posted by dhagenbln on 13 Feb, 2014 11:21 AM Wow thanks, that clears up a lot of the fog still residing in my brain. Support Staff 4 Posted by Ondina D.F. on 13 Feb, 2014 11:42 AM @matey Place the metadata tag above the class : 5 Posted by dhagenbln on 17 Feb, 2014 05:53 PM One little thing Is it possible to Inject multiple parameters with and without names? Something like: [Inject (first param some instance, name="someValue")] public class SomeClass { public function SomeClass(someInstance:InstanceType, someValue:String){} } So the first parameter is unnamed and the second is named. Support Staff 6 Posted by Ondina D.F. on 18 Feb, 2014 01:11 PM In theory, according to this:..., it should work like this: But, it doesn't. I don't know whether it is a bug or not. I've tried. If you, or Matej, or someone else have the time to investigate this, please do so, and if it appears to be indeed a bug, please report it on github, on the new location of swiftsuspenders: Ondina Ondina D.F. closed this discussion on 11 Mar, 2014 08:50 AM.
http://robotlegs.tenderapp.com/discussions/robotlegs-2/9800-injecting-parameter-values
CC-MAIN-2019-22
refinedweb
446
62.38
In previous post, we learned about Lambda expressions and functional interfaces. Now, let’s move on the discussion and talk about another related feature i.e. default methods. Well, this is truly revolutionary for java developers. Till java 7, we have learned a lot of things about interfaces and all those things have been in our mind whenever we wrote code or designed the applications. Some of these concepts are going to change drastically from java 8, after introduction of default methods. I will discuss following points in this post: What are default methods in java 8? Why default methods were needed in java 8? How conflicts are resolved while calling default methods? What are default methods in java 8? Default methods enable you to add new functionality to the interfaces of your libraries and ensure binary compatibility with code written for older versions of those interfaces. As name implies, default methods in java 8 are simply default. If you do not override them, they are the methods which will be invoked by caller classes. They are defined in(); public class Animal implements Moveable{ public static void main(String[] args){ Animal tiger = new Animal(); tiger.move(); } } Output: I am moving And if class willingly wants to customize the behavior then it can provide it’s own custom implementation and override the method. Now it’s own custom method will be called. public class Animal implements Moveable{ public void move(){ System.out.println("I am running"); } public static void main(String[] args){ Animal tiger = new Animal(); tiger.move(); } } Output: I am running This is not all done here. Best part comes as following benefits: - Static default methods: You can define static default methods in interface which will be available to all instances of class which implement this interface. This makes it easier for you to organize helper methods in your libraries; you can keep static methods specific to an interface in the same interface rather than in a separate class. This enables you to define methods out of your class and yet share with all child classes. - They provide you an highly desired capability of adding a capability to number of classes without even touching their code. Simply add a default method in interface which they all implement. Why default methods were needed in java 8? This is a good candidate for your next interview question. Simplest answer is to enable the functionality of lambda expression in java. Lambda expression are essentially of type of functional interface. To support lambda expressions seamlessly, all core classes have to be modified. But these core classes like java.util.List are implemented not only in JDK classes, but also in thousands of client code as well. Any incompatible change in core classes will back fire for sure and will not be accepted at all. Default methods break this deadlock and allow adding support for functional interface in core classes. Let’s see an example. Below is a method which has been added to java.lang.Iterable. default void forEach(Consumer<? super T> action) { Objects.requireNonNull(action); for (T t : this) { action.accept(t); } } Before java 8, if you had to iterate on a java collection then your would get an iterator instance and call it’s next method until hasNext() returns false. This is common code and have been used thousands of time in day to day programming by us. Syntax is also always same. So can we make it compact so that it takes only single line of code and still do the job for us as before. Above function does that. Now to iterate and perform some simple operation on every item in list, all you need to do is: import java.util.ArrayList; import java.util.List; public class Animal implements Moveable{ public static void main(String[] args){ List<Animal> list = new ArrayList(); list.add(new Animal()); list.add(new Animal()); list.add(new Animal()); //Iterator code reduced to one line list.forEach((Moveable p) -> p.move()); } } So here, an additional method has been added to List without breaking any custom implementations of it. It has been very desired feature in java since long. Now it’s with us. How conflicts are resolved while calling default methods? So far so good. We have got all basics well. Now move to complicated things. In java, a class can implement N number of interface. Additionally, a interface can also extend another interface as well. An if any default method is declared in two such interfaces which are implemented by single class. then obviously class will get confused which method to call. Rules for this conflict resolution are as follows: 1) Most preferred are the overridden methods in classes. They will be matched and called if found before matching anything. 2) The method with the same signature in the “most specific default-providing interface” is selected. This means if class Animal implements two interfaces i.e. Moveable and Walkable such that Walkable extends Moveable. Then Walkable is here most specific interface and default method will be chosen from here if method signature is matched. 3) If Moveable and Walkable are independent interfaces then a serious conflict condition happen, and compiler will complain then it is unable to decide. The you have to help compiler by providing extra info that from which interface the default method should be called. e.g. Walkable.super.move(); //or Moveable.super.move(); That’s all for this topic here. I will more on this next time when something interesting comes into my mind. DO not forget to drop your comments/thoughts or questions. Happy Learning !! Feedback, Discussion and Comments Naga Parise Hi Lokesh Gupta, Thanks for info. All your articles are extremely good. Nishtha How can we define “Static default methods”. I’m getting a Compile time error saying “Illegal combination of modifiers for the interface method run; only one of abstract, default, or static permitted”. Please help Shekhar Any concrete static method in an interface is “default” by default. So we don’t write ” default static myStaticMethod() “. Just write static myStaticMethod(){…;} and it will be static default method for that interface. Its confusing , else they should have made it Veerendhar Nice useful explanation Lokesh…. Thank you MANOHAR GNANAVEL Yeas Thank you Boitumelo J. Mahlong A good explanation on Default methods. I have a question which I will also attempt to answer by playing with a few examples. If default methods are to enable/support functional interface implementations on core classes, does it then mean if I have 5 default methods in my interface, I can implement all through lambda expressions or will it only be the one method not declared “default” Noopur lambda expressions are for defining body to abstract method of functional interface. The default methods already have definition in interface itself. Shilpa Thank you Lokesh! prathap hi lokesh, nice explanation and thanx adarsh Does this mark the end of abstract classes. Barnali Default method does not mark end of abstract classes. Interface does not have state/field so default method can not use state like abstract class. RAJ Nice short narration Thanks. Ankush Very nice and informative article. Thank you! Saurabh Nice, precise and short article. Thanks
https://howtodoinjava.com/java8/default-methods-in-java-8/
CC-MAIN-2020-45
refinedweb
1,201
58.79
Search Type: Posts; User: tbenbrahim Search: Search took 0.02 seconds. - 5 Mar 2014 6:11 PM - Replies - 0 - Views - 1,509 culprit is in LabelField.java line 122 - 15 Aug 2013 2:30 PM - Replies - 15 - Views - 13,033 I would suggest you educate your users, rather than change anything. If someone modified a record at 1000 in Paris, it was 0900 in London. Since the user in Paris sees 1000 and the one in London... - 11 Apr 2013 7:36 PM - Replies - 7 - Views - 6,538 it works in 3.0.1 - 2 Mar 2011 3:08 PM - Replies - 5 - Views - 2,480 ** * @author tony.benbrahim * */ public class AjaxComboBox<T extends ModelData> extends ComboBox<T> { public interface GetDataCallback<T> { void getData(String query, int maxResults,... - 5 Mar 2010 8:56 PM - Replies - 40 - Views - 11,868 Actually yes, it is in 2.0 :-) - 15 Dec 2009 10:54 PM Jump to post Thread: Stop using for each loops by tbenbrahim - Replies - 1 - Views - 2,365 change the order of the loops :) - 14 Dec 2009 7:31 PM Jump to post Thread: Ext GWT 2.1.0 Now Available by tbenbrahim - Replies - 70 - Views - 53,601 I think I did the same thing you did (got reduced visibility error, etc...) Make absolutely sure everything matches, and everything will be fine gxt resources, jars in WEB-INF/lib, jars in... - 14 Dec 2009 7:22 PM - Replies - 3 - Views - 2,154 Code written in 1.6/1.7 will compile without problem in GWT 2.0 Code written to take advantage of new 2.0 features (RunAsync, ResourceBundle, etc...) will obviously not compile in 1.X Tony - 9 Dec 2009 5:11 PM Jump to post Thread: GWT 2.0 support - When and how by tbenbrahim - Replies - 8 - Views - 2,949 I tried it today. My application compiles with a couple of warnings (no big deal). However there were many UI problems (wrong sized buttons, disappearing tab strips when switching tabs, etc...),... - 3 Jul 2008 8:34 PM Still does not look like GPL to me. I can take MySql, Apache (not GPL, I know) or Linux, make modifcation or extensions, and ship the whole thing on a CD or post it on the net. I have the right to... - 12 May 2008 1:24 AM From, By the same analogy, the corporate user who uses a computer at work doesn't get possession of it, so the user also does... - 12 May 2008 12:21 AM I was surprised to hear this mentioned (negatively) on the Java Posse podcast (by far the most popular Java podcast), so this has been a PR disaster so far. Java Posse podcast #184, about 20... - 31 May 2007 9:08 PM - Replies - 4 - Views - 1,647 it does fire a load event, however this does not update the contents of the combobox attached to the SimpleStore. The first line it hits in the handler is if (!focused) return;, and that's the end... - 31 May 2007 5:28 PM - Replies - 4 - Views - 1,647 There is a bug in SimpleStore when adding a single record using loadData with append=true for example: supplierStore.loadData([[data.id,data.name]],true); I was expecting my combo box tied... Results 1 to 14 of 14
http://www.sencha.com/forum/search.php?s=44be28cc3997768d280fff44076a13e6&searchid=10264968
CC-MAIN-2015-11
refinedweb
549
81.63
Introducing meadow The only connected things platform that runs full .NET Standard 2.0 apps on a microcontroller. Cloud managed, secure and embeddable. Real .NET for real IoT. Support us on Back our project and help bring meadow to the world! Professional Grade® Runs full .NET Standard 2.0 Libraries. Updateable over the air. Complete documentation with great guides and samples. Integrates with your favorite cloud. Get connected easily by adding your favorite cloud library via built-in support for NuGet. Production Ready Leverage existing teams and resources to build hardware solutions. Low-cost embeddable modules for scaled deployments. Go from Prototype to Production with the same hardware, same code. Built-in gateway communications: WiFi, BLE, and LTE (future modules). Plug and Play Massive peripheral driver library API—just grab your favorite components and plug them in. Well designed APIs make hardware easy. using Meadow; var tempSensor = new AnalogTemperatureSensor(Pins.D1); var lcd = new Lcd2004(); temp.NotificationThreshold = 0.5f; temp.TemperatureChanged += (s,e) => { lcd.WriteLine( "Temp: " + e.Temperature.ToString("n1"), 0); } meadow F7 Development Board Powerful enough for machine vision and AI. Small and efficient enough to be embedded anywhere. Powerful, energy-efficient STM32F7 microcontroller with WiFi, BLE, 216MHz, 16MB RAM, 32MB Flash, 2D graphics and JPEG acceleration.Tons of GPIO, PWM, I2C, SPI, CAN, UART and more. Solar power ready with integrated LiPo battery charger. Meadow is Unbeatable Join the Revolution Back us on KickstarterView Meadow Kickstarter
https://www.wildernesslabs.co/Meadow
CC-MAIN-2019-39
refinedweb
238
55
Extra Models¶ Warning The current page still doesn't have a translation for this language. But you can help translating it: Contributing. Continuing with the previous example, it will be common to have more than one related model. This is especially the case for user models, because: - The input model needs to be able to have a password. - The output model should not have a password. - The database model would probably need to have a hashed password. Danger Never store user's plaintext passwords. Always store a "secure hash" that you can then verify. If you don't know, you will learn what a "password hash" is in the security chapters. Multiple models¶ Here's a general idea of how the models could look like with their password fields and the places where they are used: from typing import Optional from fastapi import FastAPI from pydantic import BaseModel, EmailStr app = FastAPI() class UserIn(BaseModel): username: str password: str email: EmailStr full_name: Optional[str] = None class UserOut(BaseModel): username: str email: EmailStr full_name: Optional[str] = None class UserInDB(BaseModel): username: str hashed_password: str email: EmailStr full_name: Optional[str] = None About **user_in.dict()¶ Pydantic's .dict()¶ user_in is a Pydantic model of class UserIn. Pydantic models have a .dict() method that returns a dict with the model's data. So, if we create a Pydantic object user_in like: user_in = UserIn(username="john", password="secret", email="john.doe@example.com") and then we call: user_dict = user_in.dict() we now have a dict with the data in the variable user_dict (it's a dict instead of a Pydantic model object). And if we call: print(user_dict) we would get a Python dict with: { 'username': 'john', 'password': 'secret', 'email': 'john.doe@example.com', 'full_name': None, } Unwrapping a dict¶ If we take a dict like user_dict and pass it to a function (or class) with **user_dict, Python will "unwrap" it. It will pass the keys and values of the user_dict directly as key-value arguments. So, continuing with the user_dict from above, writing: UserInDB(**user_dict) Would result in something equivalent to: UserInDB( username="john", password="secret", email="john.doe@example.com", full_name=None, ) Or more exactly, using user_dict directly, with whatever contents it might have in the future: UserInDB( username = user_dict["username"], password = user_dict["password"], email = user_dict["email"], full_name = user_dict["full_name"], ) A Pydantic model from the contents of another¶ As in the example above we got user_dict from user_in.dict(), this code: user_dict = user_in.dict() UserInDB(**user_dict) would be equivalent to: UserInDB(**user_in.dict()) ...because user_in.dict() is a dict, and then we make Python "unwrap" it by passing it to UserInDB prepended with **. So, we get a Pydantic model from the data in another Pydantic model. Unwrapping a dict and extra keywords¶ And then adding the extra keyword argument hashed_password=hashed_password, like in: UserInDB(**user_in.dict(), hashed_password=hashed_password) ...ends up being like: UserInDB( username = user_dict["username"], password = user_dict["password"], email = user_dict["email"], full_name = user_dict["full_name"], hashed_password = hashed_password, ) Warning The supporting additional functions are just to demo a possible flow of the data, but they of course are not providing any real security. Reduce duplication¶ Reducing code duplication is one of the core ideas in FastAPI. As code duplication increments the chances of bugs, security issues, code desynchronization issues (when you update in one place but not in the others), etc. And these models are all sharing a lot of the data and duplicating attribute names and types. We could do better. We can declare a UserBase model that serves as a base for our other models. And then we can make subclasses of that model that inherit its attributes (type declarations, validation, etc). All the data conversion, validation, documentation, etc. will still work as normally. That way, we can declare just the differences between the models (with plaintext password, with hashed_password and without password): from typing import Optional from fastapi import FastAPI from pydantic import BaseModel, EmailStr app = FastAPI() class UserBase(BaseModel): username: str email: EmailStr full_name: Optional[str] = None class UserIn(UserBase): password: str class UserOut(UserBase): pass class UserInDB(UserBase): hashed_password: str Union or anyOf¶ You can declare a response to be the Union of two types, that means, that the response would be any of the two. It will be defined in OpenAPI with anyOf. To do that, use the standard Python type hint typing.Union: Note When defining a Union, include the most specific type first, followed by the less specific type. In the example below, the more specific PlaneItem comes before CarItem in Union[PlaneItem, CarItem]. from typing import Union from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class BaseItem(BaseModel): description: str type: str class CarItem(BaseItem): type = "car" class PlaneItem(BaseItem): type = "plane" size: int items = { "item1": {"description": "All my friends drive a low rider", "type": "car"}, "item2": { "description": "Music is my aeroplane, it's my aeroplane", "type": "plane", "size": 5, }, } @app.get("/items/{item_id}", response_model=Union[PlaneItem, CarItem]) async def read_item(item_id: str): return items[item_id] List of models¶ The same way, you can declare responses of lists of objects. For that, use the standard Python typing.List: from typing import List from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str items = [ {"name": "Foo", "description": "There comes my hero"}, {"name": "Red", "description": "It's my aeroplane"}, ] @app.get("/items/", response_model=List[Item]) async def read_items(): return items Response with arbitrary dict¶ You can also declare a response using a plain arbitrary dict, declaring just the type of the keys and values, without using a Pydantic model. This is useful if you don't know the valid field/attribute names (that would be needed for a Pydantic model) beforehand. In this case, you can use typing.Dict: from typing import Dict from fastapi import FastAPI app = FastAPI() @app.get("/keyword-weights/", response_model=Dict[str, float]) async def read_keyword_weights(): return {"foo": 2.3, "bar": 3.4} Recap¶ Use multiple Pydantic models and inherit freely for each case. You don't need to have a single data model per entity if that entity must be able to have different "states". As the case with the user "entity" with a state including password_hash and no password.
https://fastapi.tiangolo.com/tr/tutorial/extra-models/
CC-MAIN-2021-17
refinedweb
1,035
53.41
So I got this problem in my homework, which asks me to implement with both set and map respectively to solve for the numbers of modes in an array(to find the data that has the highest frequency of occurrence within a set of data). It also requires to test and show solutions for no modes, 1 mode, and multiple modes, These are literally all the instructions I have(The teacher isn't keen on being specific). As far as I understand, neither set or map allows for duplicates values, then how would there even exist modes? I am very iffy. Can someone give me some insights on this one, should it be doable? I would appreciate any pointers to kick this off. Thanks!! Linked Questions - NULL and nullptr comparison - Error reading variable. Cannot create a lazy string with address - C++ getInt function - 2D vector arrays in c++ - Exception thrown: read access violation. **_Right_data** was 0x8. occurred - C++ Tennis Score Tracker - How to create an object of class defined inside namespace in c++ - C++ Sorting Class Array - Calling private method in C++ - How to use string::find()? - C++ File I/O - CSV update - RoundTo() differences in roundings - Monotone Boolean Function - Difference in evenness - Declaration is incompatible with? Using set and map to solve the mode problem?Asked by V.Reeve At 1 Answers Related Questions - UNO game card functionalities - Error : undefined reference to 'function' in C++ - Char array from cin - How can I declare an object in a class, that needs another in its constructor? - Quite Good Numbers - Merge c++ to header only - Separating classes with interdependencies into headers - What is wrong with my solution for code check problem CIELAB - 2D array size unknown - Operator [] overloading - How to fix failed to build visual studio project? (Code Generation Failed) - Unqualified lookup and (maybe-)dependent base classes - Trigger specific constructor for a member in the member initialisation list based on a flag - how to test a string for letters only - What should be the result of multiplication of two characters in C++? The solution is really simple, you just have a map that keeps count of how many times each integer showed up and the iterate through the map to find the key with the largest count. The solution is the last one right here: As for the solution with the set, your probably best of using a ordered multiset that will keep things in the right order and then just iterate with a variable max count and current count. Replace most_freq (another variable) with the current element when current count is > than max count. That is basically the solution. I could code this up in 3 minutes, but I want you to do your homework.
https://techqa.club/v/q/using-set-and-map-to-solve-the-mode-problem-c3RhY2tvdmVyZmxvd3w1NTc4NDg5NA==
CC-MAIN-2021-39
refinedweb
454
58.42
CodePlexProject Hosting for Open Source Software I'm making a game that uses Farseer Physics, and the XNA.Framework.Content. So far to get the physics engine and XNA content pipeline to work together, I've added quite a few [ContentSerializerIgnore] attribute tags around the XNA3 branch. I'm also keeping with the latest release of Farseer, and doing a daily update. My question is if the XNA3 branch could include tags to make it's objects intermediate serializer safe for everyone? If we included the ContentSerializerIgnore attribute, we would have dependence on the XNA framework. But because it is an attribute (simply just a class that inherits from Attribute) we might be able to include it. At the same time, it would be great if there were support for standard serialization (not using the content pipeline). Would you be willing to contribute this to the project? Standard serialization would be cool! However, I haven't gone too far into using it. So far, I've only made adjustments to Geom.cs, Body.cs, and GenericLis.cs. I've added: #if(XNA) [ContentSerializerIgnore] #endif For all delegates, and under #if (XNA) using Microsoft.XNA.Framework.Content;. I've Written Readers and Writers using the Microsoft.XNA.Framework.Content. So, it I do not think it will break the standard library. I'd be more than happy to contribute to the project. Putting attribute tags for the content pipeline probably won't take too long, as there's only about 8 of them. If you like, I could also put in XNA readers and writers (I just got done coding mine). I'm not too sure, but for standard xml serialization, the [XmlIgnoreAttribute] tag will cause the standard serializers to ignore the specific attribute. It will compile with the [ContentSerializerIgnore] attribute below it, so I think should work with both Before I started hacking at the farseerphysics.dll, I created custom, separate classes, which contained the same values as the body and geom, but it's harder to maintain. This would make it way easier! :) I've also gone a small part of the way with Intermediate Serializer stuff. Rather than editing the engine code, I created classes for PhysicsSimulatorContent, BodyContent and GeomContent. It was only really a proof of concept idea, and it was pretty messy with the Geom's storing a Body ID to look up the actual Body at runtime. I ended up scrapping the idea of serializing the whole Sim. How would you get your game objects' references assigned to the correct Geom.Tag anyway? Now I think I will have something like this for my physics object's content classes: PhysicsObjectContent |- List<LinkContent> |- List<PhysicsElementContent> |- BodyContent, |- List<GeomContent> Where LinkContent is a class with all the information for creating Joints and Springs (LinkType, elementId1, elementId2, anchorPoint1, anchorPoint2 etc). I don't see any reason why these content classes wouldn't be usable with both XmlSerializer and IntermediateSerializer. @daswampy: Great. If you could download the latest source code checkin, add the attributes and send it as a patch in our patch section (found under source control) it would be much appreciated. Standard .net serialization is just adding [Serializable] to the classes that can be serialized. If some properties/fields does not need to be serialized, put a NonSerialized attribute above them to make the serializer ignore it. (Remember, binary serializers even serializes private fields). The XML serializer uses the XmlIgnoreAttribute (It only serializes public fields and properties) to mark it as non serialized. Working with the XML serializer is the easiest in our case because we don't need to serialize all the private fields and properties. Everything needed is public (i think). To get the binary serializer to work correctly too, might be easier by implementing the ISerializable interface and tell it to only serialize the relevant data. An example on how that is done, can be found here. Hmm, so it looks like there are many ways to serialize the data. So, which one will the patch be for? The best way to test that it works, is to write a serializer for it, so which ever way, there should be a sample in the AdvancedSamples. I was not aware that binary serialization is not supported on Xbox. I guess we have to go with XML serialization only then. But, that also makes it a lot easier :) All public properties that should not be serialized should be attributed in the following manner: #if(XNA) [ContentSerializerIgnore] #endif [XmlIgnore] public void Method(Type arguments) { } That should do it. It would be great if you could create a test to see if it works. To smack two flies with one hit, it would be awesome if you made a simple sample for inclusion in AdvancedSamples. Just a simple "Load simulation", "Start Simulation", "Save Simulation" demo. All the tools are there to create such a sample. @roonda I keep all geoms and bodies encapsulated in a physics object. The physics object is kept inside an array. The array is kept in a wrapper. The wrapper is kept in Unit Manager. The unit manager is kept in an arbiter. I made a struct called Tag, it contains an enum and an int. The tag stores information of which unit manager it belongs to, and what is it's index inside the array wrapper (which is inside the unit manager). When a class wants the complete unit (which uses the physics object as a base class), based off one geom, it gives the arbiter the geom's tag, and the arbiter returns the unit (the arbiter can see all unit managers). I hope that made sense. The thing with System.Runtime.Serialization, is that some of the classes are supported on the xbox 360, some aren't. I'm going through the MSDN documentation for it right now. I'm going to ask in the xna forums what is supported and what isn't supported. Found it: What about Zune, I know the XNA serializer doesn't work it, as far as I know only System.Xml works. I think serialization would speed up the loading of my game since right now it takes an unexceptible 1-2 minutes. XML serialization is for loading and unloading xml documents. So if you have 5 different enemies, you can have 5 different xml documents which define the properties of the enemies. During run-time, the game will load the xml documents into the enemies. Taking 1-2 minutes to load a game might be due to something else. I looked a little bit on MSDN and XNA forums to see what the work around would be for Zune xml, turns out that using XNA content pipeline will work for zune. During run-time, the zune will need access to the ContentReader. The ContentReader is used whenever you do: Unit unit = game.ContentManager.Load<Unit>("assetLocation"); On MSDN, if you look at the bottom, and around the Microsoft.XNA.Framework.Content, it's supported on the Zune platform. Now, the actual content pipeline itself, with the xml writers, is definitly not supported on the zune. This is because visual studio writes the xml files to binary .xnb during compile time. The compiler also checks between the readers, writers, and xml documents to make sure that the xml tags match. Since the content pipeline in my mind is just a fancy way of loading resources and most of it's features are not really something FPE will gain from. (Features like load once from disk and cache rest of time might be something worth) I think that simple XML serialization of the physics engine together with it's bodies, geometries, joints, springs and controllers would be a great thing. Simply saving the state of the whole world in an XML file would make it easier for people to create save/load functions in their game. Having the basic structure in place also makes it possible to extend with binary serialization and send copies of the world (physics simulator along with dynamics) over the network. I hope you come up with a solution that works on most platforms. If you come up with a solution in the next few days, I might be able to squeeze it into 2.1. Don't wait for me for 2.1. I'm going to teach my self System.XML (it's really just going through some tutorials) over the next few days, while continuing development on my project. I'm not going to port my XNA pipeline solution over until after the standard xml solution is written. Probably the best thing would be to write a tool that will allow people to create and save physics objects to xml, and have classes built into farseerphysics.dll to read in the saved pre-created physics objects. Oh my, that's a lot more work than just adding tags so people can write it themselves :). I've used this simple generic XML serializer in some of my previous projects: If you write down the tags the correct places, I will make sure those tags gets used. For now, we only implement standard XML serialization - any XNA pipeline stuff can come later. Edit: A heavily updated implementation of the generic XML serializer I might add, but it has the fundamental idea. I was tired of manually updating the ContentTypeReaders/Writers for my game, so I just wrote a generic class that takes advantage of the Intermediate Serializer: If it works on all XNA Platforms (awaiting answer from XNA Forums), I'm going to submit this as the XNA Serializer, then get to work on a .net standard XML serializer. If it doesn't, then it's great for development purposes, when classes are always changing etc. Maybe someone could put it to good use. It just relies on using a folder directory set-up to be exactly the same as your namespacing. Basically, it does everything automatically for you :), just need to drop in some tags. /// <summary> /// Generic class to deserialize any object based on namespace name /// Must have a matching content directory tree to match full namespace /// </summary> /// <typeparam name="T"></typeparam> public static class ObjectSerialization<T> where T : new() { private static StringBuilder loc = new StringBuilder(System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location)); private static int directorylength = loc.Length - 14; static public void Serailize(string assetName) { T testData = new T(); XmlWriterSettings settings = new XmlWriterSettings(); settings.Indent = true; settings.ConformanceLevel = ConformanceLevel.Auto; SetCurrentLocation(assetName); using (XmlWriter writer = XmlWriter.Create(loc + ".xml", settings)) { IntermediateSerializer.Serialize(writer, testData, loc.ToString()); } } static public T Deserialize(string assetName) { SetCurrentLocation(assetName); //Create the xml reader using (XmlReader reader = XmlReader.Create(loc.ToString())) { //Deserialize the data return IntermediateSerializer.Deserialize<T>(reader, ".\\"); } } private static void SetCurrentLocation(string assetName) { //Chop off "bin\x86\Debug" and anything else from the end loc.Remove(directorylength, (loc.Length - directorylength)); //Set the location to the correct directory loc.Append("\\content\\" + typeof(T).FullName); //Using namespaces, so change the . to a \\ loc.Replace(".", "\\"); //Add the assetname loc.Append("\\" + assetName + ".xml"); } } Sent patch for the tags and this previous copy pasted class. I'm not sure if you wanted the body and geom serialized in the springs and joints, so that's up to you. Going to write a demo screen which will use system.xml next. Me and my team have a fully working set using the 3rd method that works on Xbox, Pc and the Zune perfectly that we have intergrated into farseer We will be submitting a patch sometime in the next week or so They already applied the patch for the [ContentSerializerIgnore]/[XmlIgnore]. It works great now. Been using the IntermediateSerializer until XNA 3.1 comes, which will have automated .xnb serialization. When I submitted the patch, I forgot to add tags to IsStatic in geom.cs. For IsStatic, it requires that the body be instantiated, which isn't during the deserialization of a geom. I'll submit another patch w/ just that fix. IMO, The only editing which should be done to farseer is just tags, no readers/writers/anything. Anything else should be put into an advanced demo screen. Nonetheless, I look forward to seeing your implementation for it. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://farseerphysics.codeplex.com/discussions/58138
CC-MAIN-2016-44
refinedweb
2,078
56.15
Sets information about Access Manager log service for the remote log module. This must be called before calling am_log_message() with AM_LOG_REMOTE_MODULE as the log module. Otherwise use am_log_log() with a log record and SSO token ID to log to Access Manager. #include "am_log.h" AM_EXPORT am_status_t am_log_set_remote_info(const char *rem_log_url, const char *sso_token_id, const char *rem_log_name, const am_properties_t log_props); This function takes the following parameters: URL of the Access Manager log service. The logged by SSO Token ID. The log name on Access Manager. Properties to initialize the remote log service with. This function returns am_status_t with one of the following values: If the function call is successful. If an error occurs.
https://docs.oracle.com/cd/E19636-01/819-2140/adocs/index.html
CC-MAIN-2019-04
refinedweb
112
59.09
I'm wondering why the following tag methods produce different results: Method 1: def tag(html) print "<#{html}>#{yield}</#{html}>" end def tag(html) print "<#{html}>" print yield print "</#{html}>" end tag(:ul) do tag(:li) { "It sparkles!" } tag(:li) { "It shines!" } tag(:li) { "It mesmerizes!" } end <li>It sparkles!</li><li>It shines!</li><li>It mesmerizes!</li><ul></ul> <ul><li>It sparkles!</li><li>It shines!</li><li>It mesmerizes!</li></ul> Just to echo @tadman's answer: order of evaluation AND inconsistency of api. Your block sometimes returns strings and sometimes prints strings as a side-effect. print "<#{html}>" print yield print "</#{html}>" Here you print, then yield. If the block returns a string (one of :li blocks), then it's printed right here. If it's a :ul block, then its side-effects happen (printing of li blocks) and nil is printed after that. In the other case print "<#{html}>#{yield}</#{html}>" Ruby has to assemble one string to print. Which means yielding before any printing. Which means that side-effects happen before printing the opening <ul>. As the ul block returns nil, that's why it's printed empty at the end of the string ( <ul></ul>). Does it make sense?
https://codedump.io/share/YiW2gvbUtB5t/1/ruby-iterator-yield
CC-MAIN-2018-26
refinedweb
207
77.13
behalf. It is recommended that you familiarise yourself with CSRF, what the attack vectors are, and what the attack vectors are not. We recommend starting with this information from OWASP. Simply put, an attacker can coerce a victims the filter to your Global object: import play.GlobalSettings; import play.api.mvc.EssentialFilter; import play.filters.csrf.CSRFFilter; public class Global extends GlobalSettings { @Override public <T extends EssentialFilter> Class<T>[] filters() { return new Class[]{CSRFFilter.class}; } } §Getting the current token, and it performs the check. It should be added to all actions that accept session authenticated POST form submissions: @RequireCSRFCheck public static static Result get() { return ok(form.render()); } §CSRF configuration options The following options can be configured in application.conf: csrf.token.name- The name of the token to use both in the session and in the request body/query string. Defaults to csrfToken. csrf.cookie.name- If configured, Play will store the CSRF token in a cookie with the given name, instead of in the session. csrf.cookie.secure- If csrf.cookie.nameis set, whether the CSRF cookie should have the secure flag set. Defaults to the same value as session.secure. csrf.body.bufferSize- In order to read tokens out of the body, Play must first buffer the body and potentially parse it. This sets the maximum buffer size that will be used to buffer the body. Defaults to 100k. csrf.sign.tokens- Whether Play should use signed CSRF tokens. Signed CSRF tokens ensure that the token value is randomised per request, thus defeating BREACH style attacks. Next: Working with JSON Found an error in this documentation? The source code for this page can be found here. After reading the documentation guidelines, please feel free to contribute a pull request. Have questions or advice to share? Go to our community forums to start a conversation with the community.
https://www.playframework.com/documentation/2.2.x/JavaCsrf
CC-MAIN-2019-18
refinedweb
310
51.55
Faster Boot with Many Devices By Steve Sistare on Aug 01, 2014 In addition to its highly touted features such as Kernel Zones and Unified Archives, the just-released Solaris 11.2 has some nice unsung optimizations that are noticed less because everything works the same, but faster. One set of optimizations improves the scalability and efficiency of the devfsadm daemon, which is responsible for managing the namespace of devices under the /devices and /dev mount points in the filesystem. This daemon is very busy at boot time and during certain device configuration operations. For example, it accumulates 11 minutes of CPU time during boot on a system with 1000's of devices: Moreover, devfsadm is on the critical path during boot, and many SMF services depend directly or indirectly upon it completing the configuration of /dev, including for example the console login service. We made many improvements to the userland and kernel components of devfs, including caching, hashing, and tuning timeouts. In one test, we measure the time from typing the OBP boot command to the appearance of the console login prompt. The system is a T5440 with 4000 disk devices, multi-pathed using MPxIO, which multiplies the number of device instances. The previous version of Solaris takes over 44 minutes, and Solaris 11.2 takes just over 3 minutes, for a 13X speedup. This is an old T-series processor, and both the old and new Solaris will be faster on a more recent processor such as the T5, but Solaris 11.2 will still reduce boot times by many minutes on current platforms with 1000's of devices. Your results will vary with device count. The algorithms we fixed have polynomial time complexity, so the times do not scale linearly. If your system has only 10's to 100's of devices, you might not notice the difference. In addition to normal boot and reboot, commands for which we observe speedups include the following (but no doubt there are others I have missed): - devfsadm -C : Clean up dangling /dev links - reboot -r : reconfiguration reboot - cfgadm -al : show status of dynamically reconfigurable hardware r esources Do you configure systems with a huge number of devices? Does Solaris 11.2 make a difference for you? Please share your experiences.
https://blogs.oracle.com/sistare/tags/solaris
CC-MAIN-2015-35
refinedweb
380
60.24
-12-2014 03:44 PM Hello all, So, I take the 2 TCL files created with 2014.1 (project and top-levle BD), and I am using to rebuild a project with VIVADO 2014.2. Sounds like straight-forward task, so before I did this, I reloaded this project in GUI and wrote both TCL files. Then ran a Linux meld, and the BD is identical. After I solved an error from the previous posting, I get the following error, see further. The question is obvious, and I would like some to please explain to me in simple words what do I need to do. Don't just send me a link to the UG doc! I don't always have time to RTFM when a new tool comes out, yet not all docs are updated to 2014.2 release. Attached is the BD tcl file. Thanks in advance for the help. Prompt response will be appreciated and kudoed. BR Vlad # # # # Restore current instance # current_bd_instance $oldCurInst # # save_bd_design # } # create_root_design "" ERROR: [BD 5-216] VLNV <> is not supported for this version of the tools. ERROR: [Common 17-39] 'create_bd_cell' failed due to earlier errors. while executing "create_bd_cell -type ip -vlnv inst_arches_udp_ip " (procedure "create_root_design" line 123) invoked from within "create_root_design """ (file "./system_bd_eclipse.tcl" line 857) Vivado% 06-12-2014 09:31 PM - edited 06-12-2014 09:32 PM Hi Vlad, Is this your custom IP? Did you make sure you add the custom IP to the IP repositories for vivado 2014.2? In this case Vivado is not finding this specific IP in the IP repository and so you see this issue. Can you add the custom IP to the IP repository? Regards, Achutha 06-13-2014 08:37 AM Hi Achutha, This particular error turned out to be my TCL bug. Everything I do stored in a namespace, and the call to the namespace was incorrect. Once I fixed it, I only had to clean up the errors of the drivers from my previous postings. Sorry about that Vlad 06-15-2014 09:13 PM Hi Vlad, Thanks for sharing the solution. Please close this thread by marking the answer. Regards, Deepika. 09-18-2014 10:02 PM Hi Vlad, I met the same problem as yours, but I am still not clear about how you fix the issue after reading your reply. Could you please give an example? Thanks zli 09-19-2014 06:34 AM It has been 3 months now, so you have to refresh my memory :o) What exactly is the problem you are having? 04-08-2015 09:37 AM I got exactly this error when I forgot to change the part number in my project setup tcl script. It seems if your block diagram is created for a different 7-Series part it will throw this misleading error message. Check your part type. :-) Pete 04-08-2015 11:51 AM 04-08-2015 02:29 PM
https://forums.xilinx.com/t5/Design-Entry/VIVADO-2014-2-cannot-properly-parse-BD-TCL-created-with-2014-1/td-p/474882
CC-MAIN-2020-10
refinedweb
491
73.17
Practical AOP (Part 1): Transparent remoting with AOP and EJBs There are basically four views about AOP nowadays (ok, it's more or less the same for any technology): those who think it's the golden hammer and everything is a nail, those who think it has some applicability, those who are strongly against it or have deep concerns about its wild adoption and those who simply couldn't care less about it. :-) I hope this kind of posts I intend to write help all the four groups in some way. Let's start with an example most people are familiar with: remoting. Many technologies try to address remoting with different approaches - RMI, CORBA, EJB, webservices etc. - and each one has its own applicability, since most of them (are intended to) do more than just remoting. Also, these technologies can be implemented in several ways - consider the way EJB implementations in application servers has evolved, as an example. So, let's narrow our requirements for this case of study: - Remoting should be transparent to the user. So, this means not even lookup or interfaces should be necessary for a user to call a remote component. - We want to keep the benefits provided by EJB technology - security, transactionality, etc. - but without any complicated constraint on our code. We don't want to write tons of rules for a Hello World. In a simple way, we want EJB benefits without any of its limitations. How could we implement this? Using genesis this should be as hard as: public class RemoteClass implements java.io.Serializable { /** * @Remotable */ public void helloWorld() { System.out.println("Hello world"); } } public class Client { public static void main(String[] args) { RemoteClass remote = new RemoteClass(); remote.helloWorld(); } } If you run this example using a genesis empty-project based structure, putting RemoteClass in your shared sources dir and Client in your client sources dir you will see that "Hello World" actually gets printed in your application server console. How this magic happens? genesis' aspect named net.java.dev.genesis.aspect.EJBCommandExecutionAspect intercepts execution of methods annotated as @Remotable as defined in aop.xml and executes the method call inside a Stateless Session Bean in the server side. Since you have a simple POJO, you are not constrained and can take advantage from any OO feature you want, including instantiating a remote object with new if that's what you want, and you still get all the benefits from EJB technology. It's a much cleaner approach to remoting than other ones currently available and it's certainly going to be expanded on future releases to support full Session Beans semantics with plain POJOs - as well as the current model. For further information about how this actually is implemented by genesis, refer to the documentation pages for genesis aspects and genesis business component model. - Login or register to post comments - Printer-friendly version - mister__m's blog - 1802 reads
https://weblogs.java.net/blog/mister__m/archive/2004/12/practical_aop_p.html
CC-MAIN-2015-27
refinedweb
485
52.8
12 July 2011 19:38 [Source: ICIS news] LONDON (ICIS)--German biofuels firm CropEnergies reported a sharp increase in its fiscal first-quarter operating profit and sales, despite the country’s troubled launch of 10% bioethanol blended gasoline (E10), it said on Tuesday. CropEnergies reported operating profit for the three months ended 31 May at €15.3m ($21.5m) - sharply up from the €2.4m in the same period last year - as sales rose by 41% to €132.1m. CropEnergies' bioethanol production increased by 14% year on year to 157,000 cubic metres, partly as a result of less maintenance work compared with the same period last year, it said. As for E10, CropEnergies said sales are picking up. The fuel has been approved for sale at Germany's pumps since 1 January, but many drivers rejected it, fearing damage to their engines. CropEnergies said producers sold some 149,000 tonnes of E10 blended fuel in April, a 9% market share. This compares with 115,000 tonnes in February. By mid-June, E10 was already available at about half of Germany's petrol stations, CropEnergies said. In fact, Germany's E10 volumes are significantly higher than volumes at French petrol stations when the fuel was introduced there two years ago, the company added. However, an industry source has told ICIS the German and French E10 launches differ because ?xml:namespace> For the full fiscal year ending 29 February 2012, CropEnergies expects sales to reach up to €570m, a 21% year-on-year increase from €473m in the 2010/2011 fiscal year. The company is one of ($1 = €0.71)
http://www.icis.com/Articles/2011/07/12/9476915/cropenergies-defies-german-e10-troubles-as-profit-sales.html
CC-MAIN-2014-15
refinedweb
269
61.87
Finding the track lanes, Part II As a quick recap from last time, we started with this image: and using some processing we got to this: We will be continuing the code from the previous part, which you can find here: Finding the track lanes, Part I So how do we decide where to place our points? What we want to do is 'scan' along the image in many places and find the outsides of the three colours. Before we can do that they need to be simplified to True or False arrays. We can do this very simply by using the inbuilt functionality of numpy: red = red > 0 green = green > 0 blue = blue > 0 walls = walls > 0 The only trouble here is that cv2.imwrite will not handle this data type. We can make a function to help us by making the values between 0 and 255 instead: def WriteMask(name, mask): image = mask * 255 cv2.imwrite(name, image) We can then write our masks like so: WriteMask('blue-mask.jpg', blue) WriteMask('green-mask.jpg', green) WriteMask('red-mask.jpg', red) WriteMask('walls-mask.jpg', walls) Which gives this for the colours: Note how the image is simply black or white now, no grey like last time. The next thing to do is decide where in the image we will take slices. This code will generate 100 slices along the original image: grid = 100 scanLines = [] for i in range(grid): # Work out the position in the original image position = (i / float(grid)) * height position = int(position) # Work out the cropped position if position < cropTop: # Above our cropped region pass elif position >= cropBottom: # Below our cropped region pass else: # In the cropped region, correct and add to our list croppedY = int(position - cropTop) scanLines.append(croppedY) # Show the list of positions print scanLines There were 70 values within the crop we did, this is because we cropped away 30% of the image :) We can better illustrate where these lines are by drawing them. We do this by making a brand new image the same size as the cropped one. Then we draw our lines on top of it: # Make a black image the same size as our cropped image scanLineImage = numpy.zeros_like(cropped) colourWhite = (255, 255, 255) # Loop over each line for y in scanLines: cv2.line(scanLineImage, (0,y), (width-1,y), colourWhite, 1) cv2.imwrite('scanlines.jpg', scanLineImage) Now we have some lines to scan, how do we scan a line? We get numpy to help with that as well. We make ourselves a little function which will find the edges in a mask image: def SweepLine(mask, y): found = [] # Grab the line of interest line = mask[y, :] # Get numpy to give us a list of the positions where the line changes in value changed = numpy.where(line[:-1] != line[1:])[0] # Remove changes too close to the edge of the image for i in changed: if i < 2: pass elif i > (width - 3): pass else: found.append(i) # Return the found values return found A quick test with a position of 135 in the green mask for example: print SweepLine(green, 135) shows [476, 1075, 1640] in our example. That would mean the green value changes three times on that line. We can now process each line, but we still need to match the lines together. We do this by looking for similar positions: # The values of try1, try2, try3 are used to attempt a match with target # Any matches are added to existing lists matched1, matched2, matched3 # Any values which cannot be matched are added to the existing list unmatched def FindMatches(y, target, try1, try2, try3, matched1, matched2, matched3, unmatched): maxSeperation = int(width * 0.05) # Loop over all the values in target: while len(target) > 0: # Remove the next value from the list of targets xt = target.pop() matched = False # See if try1 can match it if try1: for x1 in try1: if abs(x1 - xt) < maxSeperation: # Matched, work out the point and add it matched = True try1.remove(x1) x = (xt + x1) / 2 matched1.append((x, y)) break if matched: continue # See if try2 can match it if try2: for x2 in try2: if abs(x2 - xt) < maxSeperation: # Matched, work out the point and add it matched = True try2.remove(x2) x = (xt + x2) / 2 matched2.append((x, y)) break if matched: continue # See if try3 can match it if try3: for x3 in try3: if abs(x3 - xt) < maxSeperation: # Matched, work out the point and add it matched = True try3.remove(x3) x = (xt + x3) / 2 matched3.append((x, y)) break if matched: continue # No matches unmatched.append((xt, y)) The function is fairly long but quite simple. It takes a list of points to try and match and one or more lists to try and match it with. Each point is then added to one of the matched lists, or is added to the unmatched list. So we now have: - Four image masks we can match - A set of lines to scan over - A function to scan for changes - A function to find matches between two or more lists We are now ready to find all of our points :) What we do is go through each line in turn and: - Scan each mask for the changes (if any) on that line - Attempt to match each line with any it might be next to - Keep all the matches in the same list for all of the lines We do that like this: # Make our matched lists matchRG = [] matchRB = [] matchRW = [] matchGB = [] matchGW = [] unmatched = [] # Loop over each line for y in scanLines: # Scan the masks edgeR = SweepLine(red, y) edgeG = SweepLine(green, y) edgeB = SweepLine(blue, y) edgeW = SweepLine(walls, y) # Do the matching FindMatches(y, edgeR, edgeG, edgeB, edgeW, matchRG, matchRB, matchRW, unmatched) FindMatches(y, edgeG, edgeB, edgeW, None, matchGB, matchGW, None, unmatched) # Add any left over points to the unmatched list others = edgeB[:] others.extend(edgeW) for x in others: unmatched.append((x, y)) It would help a lot if we could see our points at this stage. We can make a quick function to draw a cross on the image like this: def DrawCross(image, (x, y), (r, g, b)): crossSize = 5 width = image.shape[1] height = image.shape[0] # Build the list of points to change points = [] for i in range(-crossSize, crossSize + 1): points.append((x + i, y)) points.append((x, y + i)) # Change the points on the image for point in points: x = point[0] y = point[1] if (x >= 0) and (y >= 0) and (x < width) and (y < height): image.itemset((y, x, 0), b) image.itemset((y, x, 1), g) image.itemset((y, x, 2), r) Now we have everything we need to plot all of our points onto a blank image: pointImage = numpy.zeros_like.jpg', pointImage) We can do the same thing using the cropped image as a base instead: pointImage =2.jpg', pointImage) In the standard Race Code processing we also check if the changes are True to False or False to True. This allows the code to tell the ordering of the changes when it makes a difference. For example in the images above the green | blue line is labelled the same as the blue | green line. While the changes to see if the edges are changing to on or off are simple, they do make the code a fair bit longer. In fact our standard processing has a total of: - Seven valid matching pairs - Two invalid matches: wrong-way and unknown - Two returned lists from SweepLine: rising and falling - Four calls to FindMatchesto determine all of the matches All of these points can be seen from Race.py by calling TrackLines(). These are the points which we will use to determine where the track is and where it will be heading. Stay tuned for Part III where we work this all out from the points! Add new comment
https://www.formulapi.com/blog/detect-lanes-2
CC-MAIN-2020-29
refinedweb
1,326
75.34
JSP date example JSP date example JSP date example Till now you learned about the JSP syntax...; The heart of this example is Date() function of the java.util Reading Request Information date database table name birthday (DOB date); dob with DATE data type in database while... JSP Page Hello World...=d.getTime(); java.sql.Date date = new java.sql.Date(t); try Date Scheduler - JSP-Servlet Date Scheduler How to schedule a date in jsp and servlet.am using tomact servver JSP Date & Time - JSP-Servlet JSP Date & Time Hi! How to compare a String with System time in JSP.... pls... help me... Thanks in advance date format - Date Calendar date format how to convert dd-mmm-yyyy date format to gregorian calendar format in JSP please tell me the code Hi friend, Code to convert date to calender. import java.util.*; import java.text.*; public String Date incremented in .jsp String Date incremented in .jsp I am an utter novice in jsp, but I... a String in the .jsp to be a date which is derived as Todays Date plus 90 days, in a defined format. I can hardcode a date and the .jsp works fine, but I can't Date Coding - JSP-Servlet Date Coding Hi Sir. i am creating one web application in which date is compared with specific date.i.e,When current date is reached to particular given date then a particular action should happened.For example imaginer date print date print how can i print the date in jsp page in the following formate month-date-year. (example. march 8 2012 jsp - Date Calendar JSP page load error Hi, I am getting error while loading the page in JSP. What could be the possible reason anyone to store date in database - JSP-Servlet Jsp Code to store date in database Hi, Can u give me jsp code to store current date in to database. Thanks Prakash Date validation Date validation How to validate date in jsp? That is i have a textbox which will have the value from the date picker. How to validate the date.../jsp/emp-event.shtml Thanks javascript date picker - Date Calendar javascript date picker how to insert a date picker user control in a html/jsp using javascript??? please help, it's urgent. Hi Friend.../javascript-calendar.shtml Thanks Date in JSP Date in JSP To print a Date in JSP firstly we are importing a class named... so that the Date class and its properties can accessed in the JSP page jsp code for date generation - JSP-Servlet jsp code for date generation hai i am meyis i need a jsp program... thanks Hi friend, For more information on Date in JSP visit to : http How to print a webpage without url and date in jsp ? How to print a webpage without url and date in jsp ? How to print a webpage without url and date in jsp java.text.ParseException: Unparseable date - JSP-Servlet me ! .. in my project generated servlet from jsp .. is throwing an exception .. i tried a lot but failed .. jsp code is ::::: It is throwing exception .... java.text.ParseException: Unparseable date JSP:How to get year of system date - Date Calendar JSP:How to get year of system date Please tell me how to get year of system date; I am doing import java.util.*; Date d = new Date(); d.getYear...:// Thanks date example with database date example with database i want to insert date in database(oracle) through jsp. thanx in advance Date Formatter in JSP Date Formatter in JSP This section illustrates you how to use date formatter. To display the date in different formats, we have used DateFormat class. This class provides Date auto format Date auto format Hi, date time picker date time picker I have enetered a date time picker in my jsp file. there is text field before the date time picker. What should i do so... the date time picker gets opened current date set current date set I need to set current date as default in a drop down list which has a label tag too in my jsp page.please somebody help me Date validation in JSP Date validation in JSP Example for validating date in a specified format in a JSP page This example illustrates how to use date validation in a JSP page. Here in this code we are using servlet - Date Calendar servlet Dear friends How to automaticaly indicate(color) the particular date in jsp after the expires from the database. for example... will be indicate the jsp, based on the expire the date) what are the ways Date operation - JDBC Date operation The same what i asked already.Still i didnt get the solution.I dont ve any problem while inserting data from jsp to database.My Doubt... the html text obj into sql Date obj? Hi friend, date date how to insert date in database? i need coding doubt - Date Calendar from Database using JSP JSP Example Code for retrieving Data from Database<...;/html>For more JSP and Servlet Example codesInserting Data into Database using...;Retrieve data from Database using JSP JSP Example Code for retrieving Data from Convert string to Date in JSP Convert string to Date in JSP  ... non- programmers. Whenever such a user pass a date in string format, he is unaware of the fact that the date he has passed has been parsed by the server Want Automatic No with Date - Development process Want Automatic No with Date Hi, I want the jsp code. i want serial no with date and month.For example "240501" date is 24 and month is 05 and serial no 01 .Thanks Prakash conept - Date Calendar example of using Calendar in JSP page by the JavaScript.... has a very good example of using Calendar in JSP page by the JavaScript. http... link. This link has a very good example of using Calendar in JSP page.   DATE DATE I have the following as my known parameter Effective Date... of calcultion starts My question is how to find the date of Thursday This cycle repeats based on every cut off day Here is an example that displays javascript cal code for selecting date in text field - JSP-Servlet javascript cal code for selecting date in text field HI I want javascript calendar code to select date in text field in jsp.pls send me? .../jsp/emp-event.shtml=" date Addintion in year fields of sql date in a java page Addintion in year fields of sql date in a java page In my database i have a field StartDate and EndDate Now entering date from my jsp..., HttpServletResponse response) throws ServletException, IOException { Date How to build calander - Date Calendar he should enter his DOB just by selecting the date month and year for that i... urgent hi registration form in jsp A:hover {text-decoration...: Date of Birth struts2.2.1 date Format example struts2.2.1 date Format example. In this example, We will discuss about the different type of date format using struts2.2.1. Directory structure of example. 1- index.jsp <html> <head> <title>Date Date Tag (Data Tag) Example Date Tag (Data Tag) Example In this section, we are going to describe the Date tag. The date tag allows to format a Date in a quick and easy way. User can specify a custom format (eg Struts 2 Date Validator Struts 2 Date Validator The Date validator in the Struts 2 Framework checks whether the supplied date lies... between the <message> </message> tag. The following example - JSP-Servlet JSP date picker code I am digging for either a simple example or code to get the Date format in JSP jsp ;" import = "java.io.*" errorPage = "" %> <jsp:useBean id = "formHandler...,"Expiry Date"); if(expirydate == false) return false...; function makeArray() { //Start Current Date processing Code to insert date in to database - Development process Code to insert date in to database Hi , in my following code i want to store date in to msaccess database. i have tried many queries, but am...(JdbcOdbcStatement.java:288) at jsp_servlet.__insertdateindatabase._jspService Locale Specific Date validations through JavaScript/Ajax... in JSP file. For example You if your validation function is like... about using cookies in JSP at JSP Cookies Example tutorial page. Thanks...Locale Specific Date validations through JavaScript/Ajax... Hi, I convert date into String like 9-10-2012 into nine october two thousand twelve convert date into String like 9-10-2012 into nine october two thousand twelve convert the date like 9-10-2012 into nine october two thousand twelve on button click event using jsp Display current date in text box in JSP using javascript Display current date in text box in JSP using javascript In this section you will learn how to display current date in text box using JavaScript.For this purpose, we have used JavaScript Date object. The getDate() method returns the day get data between date from msaccess database get data between date from msaccess database here is my code, i want to get data between date using jsp with msaccess.i stored date... like this DATE NUMBER 01-09-2012 1 02-08-2012 Getting the Records in PL/SQl between Date Ranges - JDBC Getting the Records in PL/SQl between Date Ranges Getting the records Between the Dates in PL/SQL Database by using JDBC and display through JSP Display Current Date using JSP Custom(user define) Tag Display Current Date using JSP Custom(user define) Tag In this section, we will discuss about how to display current date & time using custom(user... page Deploy the application EXAMPLE : In this Example, Current date Struts 2 Date Format Examples In this example we will use DateBean action class to get the current date of the system. This current date will be used in our example. Here is the code... Date Example!</title> <link href="<s:url value="/c date convertion date convertion How to convert the string date format 23/11/2009 to original date format in asp.net Mysql Date from Date ; date. Understand with Example The Tutorial illustrate an example from 'DateTime in Mysql'. To grasp this example, we use date(current_trimestamp ( )) query... Mysql Date from Date   JSP - JSP-Servlet JSP Can anybody help me in making a jsp page with these contents:- ONLINE QUIZ: Name: Date: SAP Code... friend, Read for more information. JSP - JSP-Servlet JSP Hi! I am doing a project in JSP i.e. Library managment. In that I have to display current date in a text field as date of issue. I have tried... text field i have to display date of return after one month i.e. 18-1-2009 Date Picker Date Picker "I want to set default calendar date when using datepicker. This date means auto-loaded date when the calendar appears. The default date is the current date. I want to change it. How could I do jsp - JSP-Servlet jsp hi friends i want the jsp code to get the data from the database for particular date by clicking on date it has to forward to the page like form... | | | date | date | YES | | NULL Finding A Date Finding A Date My Frequency Start Date is 24-08-2012 My frequency end date - not defined Calculation Frequency is Weekly(Sunday) My calculation... the following date of sunday JavaServer Pages (JSP) Roseindia to JSP JSP ARCHITECTURE JSP date example Reading Request Information JSP Cookies Example...JavaServer Pages (JSP) is used to develop dynamic web content on the server date validation date validation sir, pls provide date validation code in javascript..we want to include it into our master form.. Please visit the following link:
http://www.roseindia.net/tutorialhelp/comment/95705
CC-MAIN-2014-52
refinedweb
1,958
72.36
in reply to Question about class/module/component I thought I'd refer you to my reply in an old thread with a similar topic. I'm a Java guy myself (involuntarily at work). The biggest mental block I had regarding Perl's system for organizing code is basically: in Perl, a file != a module. That is, one single file can contain code for multiple namespaces... or there can be multiple files, all with code for a single namespace. This is in contrast to Java, where each file starts with a package declaration (or it's implicitly the default package), and you're working with your chosen namespace for the entire file. perlmod is your definitive guide for all this. Perl Cookbook How to Cook Everything The Anarchist Cookbook Creative Accounting Exposed To Serve Man Cooking for Geeks Star Trek Cooking Manual Manifold Destiny Other Results (155 votes), past polls
http://www.perlmonks.org/?node_id=818826
CC-MAIN-2014-41
refinedweb
150
61.77
Docs | Forums | Lists | Bugs | Planet | Store | GMN | Get Gentoo! Not eligible to see or edit group visibility for this bug. View Bug Activity | Format For Printing | XML | Clone This Bug Seems like the fix to CVE-2007-1536 introduced another issue: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ============================================================================= FreeBSD-SA-07:04.file Security Advisory The FreeBSD Project Topic: Heap overflow in file(1) Category: contrib Module: file Announced: 2007-05-23 Affects: All FreeBSD releases. Corrected: 2007-05-23 16:12:51 UTC (RELENG_6, 6.2-STABLE) 2007-05-23 16:13:07 UTC (RELENG_6_2, 6.2-RELEASE-p5) 2007-05-23 16:13:20 UTC (RELENG_6_1, 6.1-RELEASE-p17) 2007-05-23 16:12:10 UTC (RELENG_5, 5.5-STABLE) 2007-05-23 16:12:35 UTC (RELENG_5_5, 5.5-RELEASE-p13) CVE Name: CVE-2007-1536 For general information regarding FreeBSD Security Advisories, including descriptions of the fields above, security branches, and the following sections, please visit <URL:>. I. Background The file(1) utility attempts to classify file system objects based on filesystem, magic number and language tests. The libmagic(3) library provides most of the functionality of file(1) and may be used by other applications. II. Problem Description When writing data into a buffer in the file_printf function, the length of the unused portion of the buffer is not correctly tracked, resulting in a buffer overflow when processing certain files. III. Impact An attacker who can cause file(1) to be run on a maliciously constructed input can cause file(1) to crash. It may be possible for such an attacker to execute arbitrary code with the privileges of the user running file(1). The above also applies to any other applications using the libmagic(3) library. IV. Workaround No workaround is available, but systems where file(1) and other libmagic(3)-using applications are never run on untrusted input are not vulnerable.. [FreeBSD 5.5] # fetch # fetch [FreeBSD 6.1 and 6.2] # fetch # fetch b) Execute the following commands as root: # cd /usr/src # patch < /path/to/patch # cd /usr/src/lib/libmagic # make obj && make depend && make && make install VI. Correction details The following list contains the revision numbers of each file that was corrected in FreeBSD. Branch Revision Path - ------------------------------------------------------------------------- RELENG_5 src/contrib/file/file.h 1.1.1.7.2.1 src/contrib/file/funcs.c 1.1.1.1.2.1 src/contrib/file/magic.c 1.1.1.1.2.1 RELENG_5_5 src/UPDATING 1.342.2.35.2.13 src/sys/conf/newvers.sh 1.62.2.21.2.15 src/contrib/file/file.h 1.1.1.7.8.1 src/contrib/file/funcs.c 1.1.1.1.8.1 src/contrib/file/magic.c 1.1.1.1.8.1 RELENG_6 src/contrib/file/file.h 1.1.1.8.2.1 src/contrib/file/funcs.c 1.1.1.2.2.1 src/contrib/file/magic.c 1.1.1.2.2.1 RELENG_6_2 src/UPDATING 1.416.2.29.2.8 src/sys/conf/newvers.sh 1.69.2.13.2.8 src/contrib/file/file.h 1.1.1.8.8.1 src/contrib/file/funcs.c 1.1.1.2.8.1 src/contrib/file/magic.c 1.1.1.2.8.1 RELENG_6_1 src/UPDATING 1.416.2.22.2.19 src/sys/conf/newvers.sh 1.69.2.11.2.19 src/contrib/file/file.h 1.1.1.8.6.1 src/contrib/file/funcs.c 1.1.1.2.6.1 src/contrib/file/magic.c 1.1.1.2.6.1 - ------------------------------------------------------------------------- VII. References The latest revision of this advisory is available at -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.7 (FreeBSD) iD8DBQFGVGjhFdaIBMps37IRAgogAJ9o/0yCxtRi527rgvhg/BoC/AvEsQCfcwMX ABl7JIb1XiY6QKWQ6UfwlGA= =meQ0 -----END PGP SIGNATURE----- I got this email earlier today and my first thought was dupe :) *** This bug has been marked as a duplicate of bug 171452 *** It's not a dupe. The patch for CVE-2007-1536 introduced another issue. Information from Redhat bug: Colin Percival discovered that the fix for CVE-2007-1536 created an integer overflow flaw in file. This new flaw has been assigned CVE-2007-2799. Here is the information from Colin: + len = ms->o.size - ms->o.left; + /* * 4 is for octal representation, + 1 is for NUL */ + psize = len * 4 + 1; + assert(psize > len); On a 32-bit system, if len is 1.35GB, len * 4 + 1 = 5.4GB == 1.4GB, so the assert will pass. The buffer will then be overflowed (by as much as the attacker wants, although of course he'll run into unwriteable addresses eventually). This looks pretty exploitable... I think the right solution is to apply - assert(psize > len); + if (len > (SIZE_T_MAX - 1) / 4) { + file_oomem(ms); + return NULL; + } and add #include <limits.h> to the top (in place of the #include <assert.h> which the earlier patch adds). This needs to be fixed. The fix for the last two bumps were bogus. Somewhat a shame fbsd had to figure this out and our own people did not. Credits to the fbsd team. file-4.21 is in portage Thx Mike. Arches please test and mark stable. Target keywords are: file-4.21.ebuild:KEYWORDS="alpha amd64 arm hppa ia64 m68k mips ppc ppc64 s390 sh sparc ~sparc-fbsd x86 ~x86-fbsd" Stable for HPPA. x86/amd64 stable alpha/ia64 stable ppc64 stable ppc stable sparc stable. 200705-25, thanks everybody mips stable. *** Bug 181099 has been marked as a duplicate of this bug. ***
http://bugs.gentoo.org/179583
crawl-002
refinedweb
924
50.23
I have installed python 3.3 running fine. I am using a windows 7 64bit version and i have ran the installer for pyodbc module for sql connections. my problem is very interesting: if i create a file in D:\pythonWorkingDir\project_n and call it for example: test.py and in this file include the following code only: - Code: Select all import pyodbc as p when i go to the command line and traverse to D:\pythonWorkingDir\project_n and run this command: test.py i get an error for module not found "pyodbc" Now if while in the same directory I instead run this command: - Code: Select all D:\pythonWorkingDir\project_n>python >>> import pyodbc as p >>> i receive no errors and can use the module Why is python seeing the module from the shell but not from a file execution?
http://www.python-forum.org/viewtopic.php?p=5408
CC-MAIN-2017-13
refinedweb
140
57.91
1019. General Palindromic Number (20) A: 27 2Sample Output 1: Yes 1 1 0 1 1Sample Input 2: 121 5Sample Output 2: No 4 4 1 #include <iostream> #include <string.h> #include <stdlib.h> #include <algorithm> #include <stdio.h> #include <math.h> using namespace std; int n,b; int a[10005]; int cnt; void dfs(int n,int b) { if(n<b) { a[cnt++]=n; return; } dfs(n/b,b); a[cnt++]=n%b; } int main() { while(scanf("%d%d",&n,&b)!=EOF) { cnt=0; dfs(n,b); int i=0,j=cnt-1; int ans=0; while(i<=j) { if(a[i]!=a[j]) { ans=-1; break; } i++,j--; } if(ans==-1) printf("No\n"); else printf("Yes\n"); for(int i=0;i<cnt;i++) { if(i==cnt-1) printf("%d\n",a[i]); else printf("%d ",a[i]); } } return 0; }
https://blog.csdn.net/Dacc123/article/details/51539885
CC-MAIN-2018-30
refinedweb
143
67.86
Getting started with Developing for the AGENT SmartWatch Updated to include screenshots from the read Agent Installer, and project templates, and a few Amazon book references. Prerequisites : Tools and SDK’s. If you are looking for an introduction to C#, consider these resources: - MSDN Introduction to C# - Introduction to the C# Language and the .NET Framework - CSharp Book by Christoph Wille - CSharp Station’s Tutorial - Pluralsite : C# Fundamentals Part 1 - Embedded Programming with the Microsoft® .NET Micro Framework - Expert .NET Micro Framework (Expert's Voice in .NET) More specifically if you want a beginners guide to C# and the .NET Micro Framework here is a PDF to download. So by now your download should be finished. At the time of writing this, the download was an ISO. Since I’m using Windows 8 I’m able to right click that ISO and mount it as a DVD right there. For Win 7 users you may need to download a tool like UltraISO, Gizmo, or Virtual CloneDrive which will be able to take that ISO that you downloaded and “mount” it (or treat it) as a DVD drive on your system. If that doesn’t work for you, go old school and burn the ISO to a DVD. Start the installer by double clicking the wdexpress_full application on the DVD. Once the installer is ready you will be prompted with the installation tool: Be sure to hit the “I agree…” checkbox and optionally check the checkbox to Join the Customer Experience Improvement program if you like and the hit INSTALL. Now its time to go for a coffee, or you could use the time that it takes to install VS.NET to review the sites above get (re)acquainted with C# and the .NET Micro Framework! Eventually…. Hi Launch. Click the “Register Online” button to launch your browser for registration. You will be required to have a Microsoft Live account to proceed, so either login or create a new account. Once you are logged in, fill out the required information in their registration tool and finally you will be given your product key. Copy and paste your key, and allow for VS.NET to apply the product key to your installation. And finally success! NOTE: Typically VS.NET will ship periodic updates. The version of the express SKU we just downloaded included the most recent update (Update 2); so we will NOT need to also download and install that. Further updates should ship via the normal Windows Update process; you should be notified via that channel. Good news! Now that you have Visual Studio fully installed, lets proceed to get the .NET Micro Framework SDK installed. If you also have this installed (make sure it is the MICRO framework installed), skip this step. Download .NET Micro Framework SDK v4.3 from here and start the installation. Be sure to WAIT until VS.NET is installed completely along with any Updates prior to installing this SDK – and also make sure that you completely exit VS.NET prior to running this installation. Side Note: SDK means “Software Development Kit”. Typically SDK’s provide a set of software development tools that allow for the creation of applications for a certain software package, service framework, hardware platform, etc.. In this case we will be giving VS.NET the ability to work with our custom hardware platform – Micro devices. In fact this SDK gives you the ability to work with a plethora of Micro devices from NetDuino’s, the Gadgeteer platform and a whole laundry list of other hardware! Lets get the SDK installed. Hit Next, and choose “Complete” and then hit Next again. Finally the SDK will be installed. Lastly, the team behind the AGENT SmartWatch have shipped their own specific SDK which adds on the ability for a custom (and cool) emulator, not to mention some cool Project Templates for VS.NET; download it here and install it. Here is a screen shot of the emulator in action (more on this in a bit): Now that we have all of the prerequisites ready, lets move onto the development environment!. Pro Tip: You can get to a project properties window by double clicking the “Properties” node under the Project node as well. The only real thing to take note of on this first screen is the "Target framework” drop down list, it should be set to “.NET Micro Framework 4.3”. If you also installed any other previous or later versions of the SDK, they should be available in this drop down. Feel free to take a look at each other tab down the list on the left; there is sufficient documentation online describing what each and every option is if you are curious enough. Open up the last tab, “.NET Micro Framework”. First notice the options for the Transport. If you actually have an Agent Device, and it is plugged in to the USB port you should be able to choose it now (or choose USB). If not, just use the Emulator. At the time of writing this I have a pre-release version of the Emulator and the device (as seen above) is listed as the “AGENT Emulator”. The important thing to really take away from this screen is this is the place where you change change where you want to debug your application with; the emulator or the real device, etc.. Feel free to close this window now. Next up, if you expand the Properties folder in the Solution Explorer you should see a file “AssemblyInfo.cs”, double click to open it. Update the Description, Company name, product name, and version information in this file. This data will be used to identify your application, and its version moving forward. Once your satisfied with the values, close the file. Right click the References folder, and choose “Add Reference”. References are additional files which we can optionally import into our project which give us greater abilities through the .NET Framework API. Pro Tip: The general rule of thumb is to ONLY include references to other files that you actually need. If your project does NOT need these other API’s, don’t include them. Less is better! Also notice that you have a variety of options for references. The .NET Tab represents the libraries that are available and are installed into your system. The Projects tab represents libraries that exist in the same solution that you are currently using. Finally the Browse tab allows you to search your local hard drive for a file to be added as a reference – the typical use case for this is if you download a 3rd party library from the web that you want to include. Hit the Cancel button, and then lets skip all the way down to the “Resources.resx” file. Double click to open it. Since we downloaded this project from the web, the creators already packed in a set of images for each number. We will soon see how to use these resources to render the clock face. For now, consider the “Add Resource” button at the top. This is the way you import resources into your project to be used by your code. For example Images, Fonts, Icons, or if you plan on localizing your application into multiple languages you can manage the Name/Value pairs for each language. Pro Tip: Typically localization is done using multiple projects, one for each language –which ONLY contain the strings resources. Then you can load in that other file at runtime, and read these resource strings for that language – a topic for another day. Once you add in a resource you should also notice that the actual file is also added under the “Resources” folder. Expand that now, and you should see corresponding files for each number as we saw in the Resources.resx file. When adding your own artwork, you really need to pay attention to size, bit depth, etc.. Since the screen is only 128x128 pixels in size, and only supports 1bit depth you wont be able to get any grey scaled images, etc.. Just plain black and white images. Pre-scale any images to fit what you need; don’t rely on the framework for scaling either. Show me the Code! Now that you have the brief tour, lets dig into some of the code behind this watch. Start off by double clicking “Program.cs”. Lets walk through each set of related code here. The “using” statements are used to “Import” other API’s from our list of references for the application. It’s an easy way to make those namespaces available to the current code in this file and if you are trying to import a reference and intellisense is generally not working double check that you do have the actual reference added to the project, and that the version that reference targets is the version of the Micro Framework your working with (in our case here it is 4.3). A “namespace” is a way to logically divide your code, and can be any arbitrary value. Pro Tip: It is best practice to keep the namespace to match the Project and Folder structure. For example if we add a folder into this project called “API”, the namespace should be BigDigitsWatchFace.API. A class is a way to encapsulate a set of code in your application, generally referred to as being an “object”. In our example our class name is “Program”. Pro Tip: It is good practice to have a single class per file, and the name of the class should match the name of the file. This next set of code sets up some global variables in our class which are shared. In our case we set some useful constants which relate to the specifications/limitations of the device -the screen height and width, also if we should show 12 or 24 hour time, and if a bottom border should be shown, etc… Ever application needs an entry point, and that is typically a method that is marked with static, has a return type of void and is typically called Main (can also be set in project, properties). This is where our code starts to be executed once our watch face is started. The first thing we need to do is create a new Bitmap object. We pass in the MaxWidth and MaxHeight variables in order to get a full screen image. Consider this as our canvas, we will paint everything the user sees on this canvas. The call to UpdateTime() we will cover in a second. As the comments indicate, we setup a timer to automatically call the “UpdateTime” method once every minute. Notice the _updateClockTimer = new Timer…. call. This is instructing the app to call that “UpdateTime” method once every “period” – more on this in a second. Lastly we instruct our application to just go to sleep for an infinite amount of time. Without this, our app will just exit. What we need to do is keep the app alive, and allow for the Timer to keep ticking away calling our Update method. The last set of code (and scroll all the way down) deals with how we update the screen every “period” as discussed above. There are a few essential details to be aware of: // clear our display buffer _bitmap.Clear(); This will clear out the entire screen, leaving it empty. _bitmap.DrawImage(….); This is how we are drawing our images. And finally we make a call to Flush(). // flush the display buffer to the display _bitmap.Flush(); This will force the screen to update. Do take time to review the rest of the code in the Program.cs file, although much of it is around just getting the values to display on the screen, you will familiarize yourself with the style and abilities of the framework. Now that you have had a look around, hit the F5 Key on your keyboard. F5 is a short cut which will actually build all of this code, resources, etc.. and package them; gets them ready to run on our emulator. Once the build is complete it will launch the emulator (or connect to your actual watch) and then deploy the watch face on to it. If your using the Emulator it should just automatically pop open! Tip: I noticed that after lots of usage the emulator will not really start (the debugger doesn’t actually attach). In this case just hit F5 again and retry. If it persist exit VS.NET and try it all over again. If it persist even further you may have reached your memory limit for the device. Now that all of the pieces are together, hit the close button on the emulator to exit and return to VS.NET. Lets customize it! I’m sure most of us would love a super cool, customized watch face with our own name on it, right? Lets do that now. The first thing we will need to do is import a font into the project resources, so double click Resources.resx and hit the “Add Resources” drop down, and choose “Add Existing File…” Browse to C:\Program Files (x86)\Microsoft .NET Micro Framework\v4.3\Fonts or if you are on a 32 bit OS C:\Program Files\Microsoft .NET Micro Framework\v4.3\Fonts In the File Name text box at the bottom, put a * and hit enter. The fonts should show up now. Double click the “small.tinyfnt” file, and it will automatically be added into your project – hit save now. Tip: Keep in mind that like references you do not want to add very many “large” resources. They will take up the very minimal and valuable space on your device! In some cases, if your application or watch face is too large it might not even deploy to the emulator or watch, it will just seem like it exit’s! At or about line 92 of Program.cs, lets add some code which will display our name. Lets first bring in our font: var smallFont = Resources.GetFont(Resources.FontResources.small); And then draw some text on the canvas: _bitmap.DrawText("Rob", smallFont, Color.White, 0, SCREEN_HEIGHT - smallFont.Height); Lets break this down… _bitmap is our canvas. DrawText() allows us to literally draw arbitrary text on the canvas. “Rob” is the text we are going to draw. smallFont is the font we want to use when drawing. Color.White – draw the text with a white brush. 0 is the X or (left and right) coordinate on your device to start drawing SCREEN_HEIGHT - smallFont.Height is the Y or (top to bottom) coordinate on the device to start drawing. So what we do here is take the entire screen height 128 minus the height of the font, and start drawing from there down. This will place the text “Rob” at the bottom left side of the screen, like so: Its hard to see, but notice that our name is now painted over the minute value on the screen. Take time now to change the above code to write it on the top left of the screen instead of the bottom left. Pro Tip: Order does matter. Since we painted “Rob” after the minutes above it was literally painted on top of the 2 in the above screen shot. Try changing the Color to Black and you will see it in action. If we were to paint “Rob” first and then the 2, the 2 would cover our text. This matters when painting a variety of other things on the screen and you need to have one thing painted on top of the other.. The Watch face sample running in the Emulator: and the Watch Application sample: Conclusion This should give you a decent understanding of how to get started using VS.NET and the .NET Micro Framework to develop watch faces and applications for the AGENT SmartWatch.
http://weblogs.asp.net/rchartier/getting-started-with-developing-for-the-agent-smartwatch
CC-MAIN-2015-48
refinedweb
2,660
73.27
I tried to add the label to the div, after the page was rendered and still could not find the div on the page. private HorizontalPanel getHP(){ HorizontalPanel hp = new HorizontalPanel(); ... Type: Posts; User: kaushikGXT I tried to add the label to the div, after the page was rendered and still could not find the div on the page. private HorizontalPanel getHP(){ HorizontalPanel hp = new HorizontalPanel(); ... Is there any other way to add the new Html element and to induce a component into it? Would forcing the hp.layout() help? HorizontalPanel hp = new HorizontalPanel(); Html myHtml = new... I am trying to create a div and insert a component into the div by as follows and have not been successful. What is it that I am doing incorrectly? HorizontalPanel hp = new HorizontalPanel();... Figured it. It is DOM.createDiv() Thanks How can I create a div using GWT? Thanks. In our implementation, we have defined beans in the following fashion public class RequestBean extends BaseModel{ public RequestBean(){} public String getQueryText() { return... Could you please suggest how this can be achieved. Setting the width of the grid did not help. Once we remove the grid and attach it back, the width of the column headers increased, shifting the header cells to the right and thus broke the alignment... Thank you for the quick reply. An unsorted icon is definetely desired for our implementation. Is there a work around? Thank you. Should the width of the grid be the sum of width of all the columns? or could it be somewhat larger than that? How can we add image/icons to the grid header so that they represent the states sorted(ascending, descending) and unsorted? Thank you, Kaushik I am attaching excerpts from our code. Esentially we have a layout container in which we have added a Verticalpanel. In this panel by default we display the results in a list view. But the user has... We have a Grid with fixed column widths defined, which is added to a Vertical Panel. When we remove the grid and add it back (same instance) the width of the column header seems to be changing and... When using Safari 3.x and GXT 2.0 combination, in the Grid, the cells(columns) seem to be off from the header. We do not see this on any other browser. What could be going wrong and what is the... Hello, How can we make the content within the listView or a grid selectable? The behaviour of the cursor seems to be controlled from with the java tier and I have not been able to change it to... I was playing with the Paging Grid demonstrated at. When I navigate to the succeeding pages and open another browser window/tab with... Hi Venu, If you have implemented the mentioned options, could you please tell me the pro's and con's and which is the best one. Thank you! How can I add custom sort features to the Grid columns? Thank you How do I customize the PagingToolBar so that the toolbar does not show the Refresh button/feature? Thank you How do I access the contents of a FormPanel? If on submitting a form, I want to capture the form element values for processing, how do i do it? Also, is there a simple tutorial or basic examples...
https://www.sencha.com/forum/search.php?s=e15ad026fda79cadc62a7bbd84e391b1&searchid=17719381
CC-MAIN-2016-36
refinedweb
559
75.5
On Thu, Sep 22, 2011 at 2:58 AM, Kirill A. Shutemov<kirill@shutemov.name> wrote:> On Wed, Sep 21, 2011 at 11:01:46PM -0700, Greg Thelen wrote:>> On Sun, Sep 18, 2011 at 5:56 PM, Glauber Costa <glommer@parallels.com> wrote:>> > +static inline bool mem_cgroup_is_root(struct mem_cgroup *mem)>> > +{>> > + return (mem == root_mem_cgroup);>> > +}>> > +>>>> Why are you adding a copy of mem_cgroup_is_root(). I see one already>> in v3.0. Was it deleted in a previous patch?>> mem_cgroup_is_root() moved up in the file.Got it. Thanks.--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2011/9/22/243
CC-MAIN-2016-07
refinedweb
113
67.76
APIMASH: Porting to Windows Phone 8 You may recall that in May my evangelist colleagues and I launched a series of workshops around a bevy of API Starter Kits designed to kickstart HTML5/JS and C# developers into building their own, unique Windows 8 applications. Since then we’ve seen a few of your efforts appear in the Windows Store including Because Chuck Said So and my very own Eye on Traffic (which leverages the TomTom Traffic Cameras API). More recently I’ve been working on porting my Windows 8 Starter Kit (that leverages both the Bing Maps and TomTom APIs) to Windows Phone, and I thought I’d share some of my experiences in doing so. In a previous post, I spent a bit of time describing the architecture of the Windows 8 version, and it was certainly my goal to get as much code reuse as possible, though of course, I knew that the form factor of a Windows Phone device would necessitate rethinking the user experience. For this post, I’ll split the discussion in two parts: the services layer that handles the API calls and the front-end user experience. Services Layer From my architecture overview, you’ll recall there are two primary class libraries involved: - APIMASH which includes the plumbing code for issuing an HTTP request, parsing the response, and serializing the payload to the required formats, and - APIMASH_APIs which includes the object models and API-specific code to obtain the desired data from the Bing and TomTom REST APIs APIMASH Changes For the base layer, APIMASH, there were two notable changes required. Deserializing Byte Stream to a Bitmap In the class to deserialize HTTP response payloads, I had a method that takes the binary response from a GET request for a jpg image and turns it into a BitmapImage. It turns out things work a bit differently between Windows 8 and Windows Phone in this regard. As you might expect, the namespaces are different (Windows.UI.Xaml.Media.Imaging for Windows 8 and System.WIndows.Media.Imaging for Windows Phone), but it takes a bit more than a namespace modification to generate the same behavior. The Windows Phone case is a near no-brainer: public static T DeserializeImage(Byte[] objBytes) { try { BitmapImage image = new BitmapImage(); using (var stream = new MemoryStream(objBytes)) { image.SetSource(stream); } return (T) ((object) image); } catch (Exception e) { throw e; } } But I’m not too proud to say the Windows 8 version had taken me a while to come up with. The primary challenge was that the argument to SetSource is an IRandomAccessStream versus a good ole System.IO.Stream Below is what you’ll find in the current Windows 8 implementation, thought I suspect I could clean this up a bit by leveraging WindowsRuntimeStreamExtensions. public static T DeserializeImage(Byte[] objBytes) { try { BitmapImage image = new BitmapImage(); // create a new in memory stream and datawriter using (var stream = new InMemoryRandomAccessStream()) { using (DataWriter dw = new DataWriter(stream)) { // write the raw bytes and store synchronously dw.WriteBytes(objBytes); dw.StoreAsync().AsTask().Wait(); // set the image source stream.Seek(0); image.SetSource(stream); } } return (T) ((object) image); } catch (Exception e) { throw e; } } HTTPClient Windows 8 has this awesome HttpClient class which provides a very simple REST-inspired interface (methods like GetAsync, PostAsync, etc.). Unfortunately, that’s not available in Windows Phone… well sort of. There is a portable HttpClient that helps bring the light and airy HttpClient class to both Windows Phone and .NET 4, and that certainly seemed like the easiest approach for my porting effort. It does require adding a few dependencies like the Base Class Libraries, but the NuGet package makes that really simple to incorporate. All seemed to work well at first, except that my cam images didn’t seem to refresh on demand; the existing image never refreshed, but other cam images were being pulled in just fine. That led me on a chase that included discovering that Windows Phone has this helpful feature called image caching (read about it here), and for a while I was convinced somehow that was the culprit. It was not. Caching was indeed the root cause, but it wasn’t image caching, it was web response caching. Since each request for a given camera was hitting the same URI, and Windows Phone was caching the results, I was continually seeing the same image served from cache. One workaround for this is to tack on a (meaningless) URI parameter that always changes, say using a GUID. Presuming the server responding to the URI just ignores the superfluous parameter, all is well. That seemed a bit hacky though and could have a performance or memory hit since the device would now be caching data it would never ever re-access. The solution, elegant in its REST-fulness was a one-liner included before the call to GetAsync httpClient.DefaultRequestHeaders.IfModifiedSince = DateTime.UtcNow; APIMASH_APIs Changes As you may know, the preferred mapping option for Windows Phone is not the Bing Maps control but rather the one provided by Nokia as part of the platform (and available on all Windows Phone 8 devices). Knowing I’d be reworking some of the map user experience code (and being somewhat time constrained), I opted to pull out the Bing Maps API layer altogether. In the Windows 8 application, the Bing Maps API fueled the search experience - which I keenly wanted to highlight since it’s a Windows 8 platform differentiator - but for the Windows Phone version, I decided to scale back and drop the entire APIMASH_BingMaps.cs implementation. A coincident casualty of that decision (in conjunction with the use of Nokia maps) were two methods that no longer served any purpose for the Windows Phone versioh. Beyond that, the changes to the APIMASH_TomTom.cs implementation were incredibly minor: Imaging namespace modifications: System.Windows.Media.Imaging versus Windows.UI.Xaml.Media.Imaging. Image source pathing: The implementation includes two canned “error message” images should something go wrong when requesting a camera image. In Windows 8, those resources are delivered as loose content and accessible via the ms-appx:/// URI scheme. For Windows Phone 8, I needed to mark them as Resource on build and reference via them via URsI like /APIMASH_APIs;component/Assets/camera404.png. It is possible to create an implementation that will use resource streams and work across both Windows 8 and Windows Phone with no changes, but for the two images I’m dealing with here, it seemed like overkill. Application resource pathing: For both targeted platforms, my code stores the developer API key for TomTom as a static resource in the App.xaml file, but there is a nuance of difference in how resources are keyed. In Windows Phone, the ResourceDictionary class can be keyed off of any object type; in Windows 8, the ResourceDictionary is a clamped down a bit to deal with KeyValuePair specifically, so there are semantic differences between the Contains methods, and the Windows RT version of the class throws in a new ContainsKey method. Frankly, it took longer to explain it just now than to address it in code! User Experience From the look of the app on Windows 8 (shown earlier in this post), it should be clear that that specific user experience wasn’t going to transfer directly to Windows Phone, and in general, you should anticipate the front-end work of a port from Windows 8 to Windows Phone (or vice-versa) to require more thought, reflection, and elbow-grease than the plumbing (e.g., business logic and services layer). There isn’t a single formula for revamping a user experience for a different form factor, and there are lots of decisions large and small that are part of the process, many very specific to your application. I will say that despite the great feature set of the Windows Phone Emulator, once I began to focus on the UX changes (versus the services implementation), I very quickly adopted a workflow of deploying and debugging right from the device. What seemed to make perfect sense when interacting with the emulator via the mouse – or even touch, since I have a touch enabled laptop – seemed awkward when actually holding the device. I settled on the following UX for the phone version of the Starter Kit, and it’s quite obviously not feature-equivalent with the Windows 8 version, but I don’t think it needs to be.Notably, there’s no list view of all the cameras, but for the context of a Windows Phone (versus Windows 8) user, I don’t think that list is particularly useful, and the location of the user in context with the map view is paramount. To get to this point in my development, I started with a File->New Windows Phone 8 application, and one-by-one pulled in some of the existing assets from the Windows 8 application, including: - Common classes like BindableBase.cs (no changes needed) and a few converter classes used in the XAML. Those do require a bit of tweaking because the implementation of IValueConverter differs between the two platforms (specifically the last parameter of both of the conversion methods). - Custom map pin classes (CurrentLocationPin.xaml and PointOfInterestPin.xaml). One change was required here, namely the interpretation of the anchor point (from absolute pixels in the Windows 8 case, to a relative 0-1 scale in Windows Phone). Keeping in mind these assets are being used on two completely different map implementations, I was pleasantly surprised! - Given the change to Nokia maps from the Bing Maps control, I expected a lot of rewiring, but since I had abstracted much of the map UI integration into a BingMapsExtensions class, the changes weren’t difficult and very localized. (I also decided to change the class name, since Bing Maps wasn’t in the picture any more!) Reworking the screens though was pretty much a rewrite with a bit of cut-and-paste; here are some of the things you should be prepared for: - Windows 8 and Windows Phone have similar but not quite identical process lifetime and navigation models, - There are (often frustrating) nuances of difference in the XAML. For example, Windows uses using in namespace references, while Windows Phone uses clr-namespace. And you cannot always rely on feature parity across analogous user interface elements. In general though, Windows Phone XAML seems a bit more feature-rich than Windows 8, so I suspect my port from Windows 8 to Windows Phone was smoother than the reverse might have been. - Windows 8 doesn’t leverage theming all that much (it’s either light or dark), but in Windows Phone you typically will want to tap into styles based on the user-selected theme, leveraging resources like PhoneAccentBrush. - Be sure to check out the Windows Phone Toolkit for those things you can’t believe aren’t there by default :) Beyond that, I’ll leave it to you to crack open both projects and take a look at how I handle specific elements of the implementation. Whoa! What about these Portable Class Libraries I hear so much about? For those of you keenly following the evolution and merging of the Windows 8 and Windows Phone development experiences, you might be wondering why I haven’t mentioned anything about. Wouldn’t that have saved me a ton of time? Perhaps for the backend services implementation, but this wasn’t really a greenfield project, so did it (or does it in general) make sense to fix what ain’t broke: the Windows 8 version? For a real app that I’d expecte to evolve on both platforms, it might be worth going back and doing so to reduce the amount of duplicate code. But I'm being a bit disingenuous here as well, I pretty much knew I’d be evolving this to Windows Phone soon after starting the Windows 8 version, so the real answer is a bit more tactical and job-related. Portable class libraries are a Visual Studio Professional (and above) feature; in the interest of reaching as large an audience as possible with our Starter Kits, we wanted to make it as free and easy as downloading Visual Studio Express and hitting F5. If you do want to learn more about Portable Class Libraries and techniques for sharing code between Windows 8 and Windows Phone projects, there are a number of great resources, including: Channel 9 JumpStart Series Dev Center (Windows Phone 8 and Windows 8 app development) Real Talk: Sharing Code Between the Windows & Windows Phone Platforms (Build 2013)
https://docs.microsoft.com/en-us/archive/blogs/jimoneil/apimash-porting-to-windows-phone-8
CC-MAIN-2021-43
refinedweb
2,098
53.24
iDynamicObject Struct Reference An object in the dynamic world. More... #include <propclass/dynworld.h> Detailed Description An object in the dynamic world. Definition at line 450 of file dynworld.h. Member Function Documentation Add a decal to this dynamic object. Coordinates are given in local object space. Returns an id that can be used to later delete the decal. Returns csArrayItemNotFound in case the template is invalid (doesn't exist). Connect another object to a specific joint in this object. This only works if the factory of this object actually defines a joint with the given index. Otherwise this function returns false. If 'obj' is 0 then the connection will be removed. Create a pivot joint at a specific world space position (bullet only). Returns false if it was not possible to create a pivot. Force creation of the entity. Get the body for this object. Can be 0 if the object is currently not visible. Get the cell where this dynamic object lives. Get the object connected at a specific joint. If there is no such object or there is no such joint defined in the factory then this function returns 0. Return a one-line string briefly describing this dynamic object. Get the (optional) entity for this dynamic object. If the dynamic object is out of reach it is possible that the entity is not created yet. Get the optional entity name (only valid after SetEntity()). Get the optional parameter block that will be used to create the entity for this object. Get the entity template associated with this dynamic object. Get the factory from which this dynamic object was created. Get a unique identifier for this object. Get the light for this object. Can be 0 if the light is currently not visible or if the factory is a normal mesh factory. Get the mesh for this object. Can be 0 if the object is currently not visible or if the factory is a light factory. Get the amount of pivot joints. Get the position of a pivot joint. Get the transform of this object. Check hilight. Is static? Link this dynamic object with the given entity. Make dynamic. Make kinematic. Make static. Recreate the joints. This is useful after the joints have been modified in the dynamic factories so that they have the new information. Recreate the pivot joints from the factory. This will first remove all current pivot joints on the object. Refresh the colliders for this object. Remove the specific decal. Remove a specific pivot joint. Remove all pivot joints. Set the entity name to use for this object. Returns false if this fails (for example, there is no entity template with the given name). Set the optional entity name. Set hilight. Set the position of a pivot joint. Set the transform of this object. It is usually recommended to make the object kinematic before you do this. This function will not automatically do that. Undo kinematic and restore previous static or dynamic state. Unlink the entity from this object. This is useful if you want to put the entity in some inventory so you have to delete the dynobj afterwards. The documentation for this struct was generated from the following file: - propclass/dynworld.h Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api/structiDynamicObject.html
CC-MAIN-2015-14
refinedweb
554
61.63
Is it a hallmark of Ghibli films that I don't remember the actual storyline, just small, nonsensical plot details — I.E. exactly the opposite of how remembering is supposed to work? :) cssquirrel even the if/else is redundant ;) function awesomeWorkday(tasks) { return (tasks instanceof coolJsonStuff || tasks instanceof coolApiStuff) ? true : false; } brad_frost woah, that URL almost beats in the Most Hideous “fluent” URL competition ;) What's the collective noun for shoeboxes? Because I have one of those. #watching/#listening to Aral Balkan’s “The High Cost of Free” (slides and audio) #indieweb #ownyourcontent #bookmark My tagging posts with emotions thing isn’t going very well — perhaps I should auto tag based on the smilies in notes /cc Brennan Novak #tagging #qs 10346 #steps today — wondering how my #digpen day count compares to Nick Charlton’s :) Saddened by the irony that Jeremy Keith’s The Long Web links to headconference.com, which no longer exists :( Thinking of marking up my hurdy gurdies as schema.org MedicalDevices, with “partial deafness” as an adverseOutcome, and “growing a beard” as a seriousAdverseOutcome.
https://waterpigs.co.uk/notes/?after=2013-03-23T17%3A29%3A25%2B00%3A00
CC-MAIN-2017-13
refinedweb
178
50.36
D-ASYNC: Journey to Code-First Cloud Native Apps D-ASYNC: Journey to Code-First Cloud Native Apps Read more about the need for coding language that can be ruin in the cloud, regardless of environment, and how D-ASYNC plays a role in that. Join the DZone community and get the full member experience.Join For Free I've been dreaming about the day when we can write regular code that can simply run in the cloud in a distributed manner without any awareness of the environment. Don't get me wrong, we already have such technologies today like Microsoft's Azure App Services with a combination of VSTS CI/CD, for example, or any application in a container that has an endpoint. That's great for one application, but what if we follow a microservice pattern or simply have more services? How do they communicate? How do we guarantee resiliency of a workflow that involves several apps? Of course, there are plenty of options but they go beyond invoking a program function as you would do in a monolith. That may sound obvious, but what if we can build more intelligent systems where the code itself becomes the first-class citizen of a cloud ecosystem? Where Workflows Matter You've probably seen architecture diagrams that use many FaaS like this: It's an example of a map-reduce design using AWS Lambda in conjunction with Amazon S3. It doesn't really matter how it works and what exactly it does, I just want to emphasize on the complexity of the solution with many moving parts. On one hand, there are obvious benefits of the design, but on the other hand, it's hard to understand the workflow especially when you have too many small pieces — similarly to the harm of small functions in a code. If you had an infinitely vertically scalable fault-proof application, how would you do exactly the same thing in a code without thinking of distributing the logic across multiple deployments? There are few major ingredients needed in this example: events (pub-sub), invocation of next step, and persistence of job state. 1. Events To describe a publish-subscribe model in a code, we usually use the Observer Pattern, however, some programming languages have built-in support for events like C# (from now on all the code will be demonstrated in C# — there is a reason for that which will be highlighted later): class Publisher { public event EventHandler Change; } class Subscriber { public void Subscribe(Publisher publisher) { publisher.Change += OnChange; } private void OnChange(object sender, EventArgs eventArgs) { ... } } 2. Invocation of Next Steps I hope that this is self-explanatory. An invocation of the next step is just a function call: class Workflow { void Run() { Step1(); Step2(); ... } } class AsyncWorkflow { async Task RunAsync() { await Step1Async(); await Step2Async(); ... } } 3. State Persistence To be honest, you may not even need to persist any state in a non-distributed application, however, there is some form of a persistence present anyway. When you call a sub-function, the context of the caller is stored on a stack in RAM, or in the case of async functions you have a Finite State Machine (FSM) that holds the state and conveys the context: class Workflow { void Run() { int abc = 123; Step1(); } void Step1() { // The state of 'Run' is on the stack in RAM. } } class AsyncWorkflow { async Task RunAsync() { int abc = 123; // 'RunAsync' is compiled into a state machine, // where the state (like 'abc') is saved on // the instance of that state machine. await Step1Async(); } } Another example of a workflow would be this diagram from Microsoft's Architecture reference: It shows that synchronous communication between microservices can lead to cascading failures, thus asynchronous communication mechanism is preferable. How would you lay that out in terms of a simple code? Well, we can use the events concept (pub-sub) from the previous example and we need classes/objects (Object-Oriented Design) and async functions. 4. Service Definitions and Instances Using the Abstraction Pattern you can express a (micro-) service definition with an interface: public interface IBasket { Task Add(StoreItem item); } Then you can implement the service with an actual class which can consume another (micro-) service to directly communicate to using... 5. Async Functions public class Basket : IBasket { // Dependency on another service. private readonly IOrdering ordering; // Use Dependency Injection pattern. public Basket(IOrdering ordering) { this.ordering = ordering; } public async Task Add(StoreItem item) { // Call function on another service asynchronously. await ordering.GetShippingOptions(item); ... } } The async- await syntax implies that a function might not execute immediately, where a Task is a completion promise which you can subscribe to, or poll on. D·ASYNC What is D·ASYNC? In short, it's everything described above but in a reverse order — an open-source framework with the idea to translate code into architectural patterns of a distributed app. In other words, it makes the code the first-class citizen, where your classes (services) and functions (steps of workflow) can run on different platforms, which are subjects of future optimization rather than an initial design choice. Why Code-First Is Important It's not a secret that microservice architecture pattern can be prohibitively expensive for new projects, especially for startups and SMBs, where a monolith is the preferable solution. At some point, a monolith becomes too "heavy," and we try to break it apart — we've seen it over and over again. A few questions that arise during this process include: "What do we use to host new services?", "What inter-communication mechanism do they use?", "How do we ensure resiliency when calling multiple services?" While it can be fun to play with different technologies, they are just artifacts, not primary business objectives. A monolith application already has the business logic, so why not keep using it instead of re-writing almost everything? Putting nonideal design choices aside, a code re-factoring process is natural to any project over its lifetime — you shape and organize the code, create new modules and components. They can be first candidates for separation into its own domain — its own (micro-) service. Such separation can feel natural with D·ASYNC, because the code is treated as a first-class citizen, which can be mapped to run in a separate deployment preserving same contract with already defined bounded context (DDD). Adding more tools and libraries on top of existing programming language abstractions makes development definitely easier. However, such approach will never get us to the point where programming truly feels natural and easy, fast and productive. For example, before async-await, we had to use thread pool libraries like the Task Parallel Library (TPL) for .NET. But the approach to parallel programming has drastically changed when it became the part of the language and framework themselves. It does not mean that async-await syntax yields the most efficient results in terms of performance, but it definitely does in terms of productivity — the code merely looks natural and hides the complexity of vertical scaling. How D·ASYNC Helps The core concept is to take a concrete case of a distributed application and try to express the same intent in a code using just the syntax of a programming language, OOP paradigms, and Design Patterns as demonstrated in the 5 code examples at the beginning of this article. Not everything can be done in this way though because a general-purpose programming language might not have any analogous syntactic structures of distributed design patterns. Nevertheless, the goal is to hide the complexity of distributed programming whenever possible — making it natural to write and read, not necessarily most efficient performance-wise. The code is the first-class citizen - D·ASYNC To make those concepts work in a distributed environment, the statefulness of workflows is a prerequisite for resiliency and scalability (the code example #3), otherwise it will be another RPC with cascading failures. The persistence can be achieved by saving and restoring a state of finite state machines — the auto-generated ones from async methods. Here is what D·ASYNC engine does to the code from a previous example: // This is a service, which is a part of a workflow. // The interface defines the communication contract. public class Basket : IBasket { // Dependency on another (micro-)service. private readonly IOrdering ordering; // The Dependency Injection pattern translates to the // service discovery. The 'IOrdering' service can be // deployed separately on a different platform. public Basket(IOrdering ordering) { this.ordering = ordering; } // This is a routine of a workflow. // An async function is compiled into a state machine. public async Task Add(StoreItem item) { // This code block is the first state transition // of the auto-generated finite state machine. // Calling another async method results in saving // state of current routine (FSM) and scheduling // the executing of another routine. The ability to // persist the state differentiates this technology // from any form of Remote Procedure Call. // Communication with another service can be handled // by a service mesh for example. await ordering.GetShippingOptions(item); // When 'GetShippingOptions' is complete, it schedules // its continuation - the current 'Add' routine, which // resumes at exact point from saved state. For // example, the input argument 'item' is the data of // the state. You can look at this as 'Add' routine // subscribes to the completion event of the // 'GetShippingOptions' routine. That's what creates // a distributed event-driven workflow. // This code block is the second state transition // of the auto-generated finite state machine. // The 'await' keyword acts as a 'delimiter' between // state transitions. } } The D·ASYNC on Azure Functions blog post unveils a seamless demonstration of the technology where you simply push an application code to Git hosted on VSTS, which triggers CI/CD pipeline and automatically deploys it as a set Azure Functions, where nothing in the code is hard-coded to use such environment, similarly to the example above. That means that you can run the same application in single process, Windows or Linux, Azure or AWS without changing a single line. How D·ASYNC Does Not Help You should not think that D·ASYNC will magically turn any monolith into a distributed app where you can re-deploy any piece of code on demand. Instead, it's merely a language-integrated abstraction layer that helps you achieve exactly the same things as you do today in a more friendly way. You are still responsible for the architecture and design regardless if you implement microservices or just use Service-Oriented Architecture. Another danger (as with any abstraction layer) is that you can easily miss performance, reliability, scalability, and security issues as described by fallacies of distributed computing. You can always blame it on tools and frameworks, but I believe that the root is in a programming language itself. A general-purpose programming language is not a cloud ecosystem programming language, thus we try to project distributed computing concepts onto abstractions of a simple language. Then, when we read that code we don't see the backward correlation. Programming languages should evolve with the demand of majority of its users. No, we will not see distributed computing syntax in most popular languages any time soon. However, .NET platform has a very neat feature - a compiler with the codename "Roslyn". It allows you to introduce new syntax or keywords to C#. That means you can extend the language to describe new concepts. For example consider this code snippet: service Basket { routine void Add(StoreItem item) { trigger ordering.GetPrice(item); } } The new keywords service, routine, and trigger don't introduce any new functionality — they are transpiled into class, async Task, and await respectively. As for a developer, this increases visibility and awareness of distributed environment and helps to avoid issues at early stage. This will be one of the D·ASYNC for .NET features in the nearest future, but for now beware of the traps. The Seed The rough idea came to me about 3 years ago after being unsatisfied with previous experience of dealing with distributed workflow frameworks. The turning point was a StackOverflow question about scheduling a function on a different machine with Task.Run(), where the obvious answer was "not possible". With digging into auto-generated finite state machines from async methods at the same time, I put two things together to create the first concept of a workflow engine. It then took a very long time of following the latest trends around microservices, containers, serverless, and service mesh along with making the first proof-of-concept application to create a much bigger vision. Thus the origin of the technology name: D for Distributed and Async after the use of async functionsto describe a workflow with underlying auto-generated finite state machines. C#/.NET were chosen due to technical feasibility in first place. Open-Sourcing Challenges D·ASYNC was living in a private repository on GitHub for a couple of years in R&D mode but is now available to the public. The project is born, doing its first baby steps towards adulthood, and waiting for the community to shape its personality. Personal Struggles Having a full-time job, I find it can be very difficult to find a balance between work, personal life, and a side project. It feels like a second full-time job that consumes most of my time, and where I sacrifice ordinary human communication. Investing into this project (including writing this article) involves waking up very early and staying up late on both workdays and weekends — day after day, month after month, coffee after coffee. It drains a lot of life power. It's not that easy to stay self-motivated over a time span of a few years to do such physically and emotionally exhausting activities. Sometimes a side project can get swept under the rug, but personally, unfinished business keeps me awake at night and leaves me with a sense of disappointment so I'm eager to pick it up again. It's not an expression of a complaint, but a description of a harsh reality when you dream big and try hard. The driver is a personal challenge to make something great and useful in the name of progress, even if it fails. Open source code is like an avatar - there are live people behind it. The Law When you work on an open-source project you should always think about its relation to your current employer. Laws differ from state to state and country to country, but usually there is a common set of rules to preserve your intellectual rights: - Don't create a competitive project - usually boils down to not stealing customers and getting revenue from same/similar channels. - Don't work on a project during working hours. Keep work and personal business separate. - Don't use work computers and devices to develop your project. - Don't use software paid by your employer. Get your own subscriptions or use freeware. - Don't use knowledge acquired during your employment, learn more on a side. Your employer pays for it when you perform duties and grow in your career path. Visiting prepaid conferences counts. - Don't implement anything related to research and development at your company. Your employer anticipates a return on such investment. - Don't use help of your colleagues during working hours. They are paid by the employer as well. If any of your co-workers wants to contribute to your project, make sure that they abide the same rules and respect the employer. While some of the points above can be debatable and hard to prove, breaking any of them would be simply unethical besides causing legal troubles. Just put yourself into your employer's shoes with any step you do and think about possible implications from the other side. My current employer is not ready for open-source culture yet, so I decided not to bring cowokers to my project even though some of them expressed interest in contributing. Get a mutual trust and respect with your employer. Depending on your project and objectives, you might think about avoiding personal liability. Even with choosing the right license agreement, there is always danger to get into a legal fight. An example would be a patent troll lurking in the shadows and waiting to feed on your success when you grow big. Personally, I decided to go with a limited liability company even though the project is free to use and does not bring any revenue. Regarding patents, I'm on the side of not having them for software. They impede the progress and don't bring a lot of value. Nowadays a software patent does not guarantee you a lot of protection, because it's ridiculously expensive to fight over — you'd probably pay a million US dollars in attorney fees. The approval process takes at least one year and technology changes very fast so your patent can be obsolete by that time you get it. Acquiring a patent can be a gamble — high patent attorney fees and hard to get it approved. The best use of a patent is a future valuation of your company if you have serious intentions and only if you can afford it. Promotion and Contribution From my perspective one of the biggest challenges is to promote the concept of the D·ASYNC project, but not the actual implementation which is still in preview and not recommended for production yet. The problem is in the paradigm shift — trending technologies mostly revolve around containers as a first-class cloud citizens. With D·ASYNC I'd like the community to start thinking about how to reach the next level of abstration and make code-first applications on top of existing technologies. Are you ready for the paradigm shift? At this moment I think the best contribution to D·ASYNC would be conceptual and ideological instead of adding new features to the code. This is slighty harder compared to other well-defined projects — such contributions usually require a long-term commitment with a high degree of scrutiny which do not give immediate value back. As a reader, you might get enlightened with the idea but when it comes to utilizing the actual implmentation you can find that it may not look exactly as advirtised or does not fit your needs - that's where most projects lose traction. Afterword This is my story of creating the most complex and challenging open-source project that took a time span of 3 years. I didn't want to focus solely on the project itself, but rather want to remind that there is much more than the code and features — an emotional part, legal aspects, and hurdles in perception. I hope you enjoyed this journey, and if you interested in the project development, you can find it on GitHub. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/d-async-cloud-native-apps
CC-MAIN-2019-09
refinedweb
3,147
50.97
#include "rewrite_driver.h" This extends class HtmlParse (which should renamed HtmlContext) by providing context for rewriting resources (css, js, images). Mode for BoundedWaitForCompletion. Indicates document's mimetype as XHTML, HTML, or is not known/something else. Note that in Apache we might not know the correct mimetype because a downstream module might change it. It's not clear how likely this is, since mod_rewrite and mod_mime run upstream of mod_pagespeed. However if anyone sets mimetype via "Header Add", it would affect the Browser's view of the document's mimetype (which is what determines the parsing) but mod_pagespeed would not know. Note that we also have doctype().IsXhtml() but that indicates quirks-mode for CSS, and does not control how the parser parses the document. Need explicit destructors to allow destruction of scoped_ptr-controlled instances without propagating the include files. Adds the filters from the options, specified by name in enabled_filters. This must be called explicitly after object construction to provide an opportunity to programatically add custom filters beyond those defined in RewriteOptions, via AddFilter(HtmlFilter* filter) (below). Queues up a task to run on the low-priority rewrite thread. Such tasks are expected to be safely cancelable. Adds a filter to the very beginning of the pre-render chain, taking ownership. This should only be used for filters that must run before any filter added via PrependOwnedPreRenderFilter. Tells RewriteDriver that a certain portion of URL namespace should not be handled via usual (HTTP proxy semantics) means. It's up to the filters to actually arrange for that to do something. Takes ownership of the claimant object. Note that it's important for the claims to be disjoint, since the RewriteContext framework needs to be able to assign compatible Resource objects for same URLs/slots among all filters that deal with them. Adds an extra external reference to the object. You should not normally need to call it (NewRewriteDriver does it initially), unless for some reason you want to pin the object (e.g. in tests). Matches up with Cleanup. Add a RewriteFilter to the end of the pre-render chain and take ownership of the filter. This differs from AppendOwnedPreRenderFilter in that it adds the filter's ID into a dispatch table for serving rewritten resources. E.g. if your filter->id == "xy" and FetchResource("NAME.pagespeed.xy.HASH.EXT"...) is called, then RewriteDriver will dispatch to filter->Fetch(). This is used when the filter being added is not part of the core set built into RewriteDriver and RewriteOptions, such as platform-specific or server-specific filters, or filters invented for unit-testing the framework. Returns the appropriate base gurl to be used for resolving hrefs in the document. Note that HtmlParse::google_url() is the URL for the HTML file and is used for printing html syntax errors. As above, but with a time bound, and taking a mode parameter to decide between WaitForCompletion or WaitForShutDown behavior. If timeout_ms <= 0, no time bound will be used. We fragment the cache based on the hostname we got from the request, unless that was overridden in the options with a cache_fragment. Determines whether the system is healthy enough to rewrite resources. Currently, systems get sick based on the health of the metadata cache. If there are not outstanding references to this RewriteDriver, delete it or recycle it to a free pool in the ServerContext. If this is a fetch, calling this also signals to the system that you are no longer interested in its results. Returns a fresh instance using the same options we do, using the same log record. Drivers should only be cloned within the same request. Clones share the same request_context, which contains bits derived from the request headers, so request_headers_ is also cloned (or shared if we make them shareable). You must call SetRequestHeaders before calling Clone. Get/set the charset of the containing HTML page. See scan_filter.cc for an explanation of how this is determined, but NOTE that the determined charset can change as more of the HTML is seen, in particular after a meta tag. Creates a cache fetcher that uses the driver's fetcher and its options. Note: this means the driver's fetcher must survive as long as this does. Creates an input resource based on input_url. Returns NULL if the input resource url isn't valid or is a data url, or can't legally be rewritten in the context of this page, in which case *is_authorized will be false. Assumes that resources from unauthorized domains may not be rewritten and that the resource is not intended exclusively for inlining. Creates an input resource. Returns NULL if the input resource url isn't valid or is a data url, or can't legally be rewritten in the context of this page (which could mean that it was a resource from an unauthorized domain being processed by a filter that does not allow unauthorized resources, in which case *is_authorized will be false). There are two "special" options, and if you don't care about them you should just call CreateInputResource(input_url, is_authorized) to use their defaults: Creates an input resource from the given absolute url. Requires that the provided url has been checked, and can legally be rewritten in the current page context. Only for use by unit tests. Creates a reference-counted pointer to a new OutputResource object. The content type is taken from the input_resource, but can be modified with SetType later if that is not correct (e.g. due to image transcoding). Constructs an output resource corresponding to the specified input resource and encoded using the provided encoder. Assumes permissions checking occurred when the input resource was constructed, and does not do it again. To avoid if-chains, tolerates a NULL input_resource (by returning NULL). Version of CreateOutputResourceWithPath where the unmapped and mapped paths are different and the base_url is this driver's base_url. Creates an output resource where the name is provided. The intent is to be able to derive the content from the name, for example, by encoding URLs and metadata. This method succeeds unless the filename is too long. This name is prepended with path for writing hrefs, and the resulting url is encoded and stored at file_prefix when working with the file system. So hrefs are: /usr/local/google/home/cheesy/bin:/usr/lib/google-golang/bin:/usr/local/buildtools/java/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin/.pagespeed[.$EXPERIMENT].. . EXPERIMENT is set only when there is an active experiment_spec. Could be private since you should use one of the versions below but put here with the rest like it and for documentation clarity. Version of CreateOutputResourceWithPath where the unmapped and mapped paths and the base url are all the same. FOR TESTS ONLY. Version of CreateOutputResourceWithPath which first takes only the unmapped path and finds the mapped path using the DomainLawyer and the base_url is this driver's base_url. This should only be called by the CriticalSelectorFinder. Normal users should call CriticalSelectorFinder::IsCriticalImage. Determines whether we are currently in Debug mode; meaning that the site owner or user has enabled filter kDebug. Calls the provided ResourceNamer's Decode() function, passing the hash and signature lengths from this RewriteDriver. Returns the decoded version of base_gurl() in case it was encoded by a non-default UrlNamer (for the default UrlNamer this returns the same value as base_url()). Required when fetching a resource by its encoded name. Attempts to decode an output resource based on the URL pattern without actually rewriting it. No permission checks are performed on the url, though it is parsed to see if it looks like the url of a generated resource (which should mean checking the hash to ensure we generated it ourselves). As above, but lets one specify the options and URL namer to use. Meant for use with the decoding_driver. Decrements a reference count bumped up by IncrementRenderBlockingAsyncEventsCount() Deletes the specified RewriteContext. If this is the last RewriteContext active on this Driver, and there is no other outstanding activity, then the RewriteDriver itself can be recycled, and WaitForCompletion can return. We expect to this method to be called on the Rewrite thread. dependency_tracker()->RegisterDependencyCandidate and ReportDependencyCandidate can be called from any thread. Must be called after all other rewrites that are currently relying on this one have had their RepeatedSuccess or RepeatedFailure methods called. Must only be called from rewrite thread. Called by RewriteContext when a detached async fetch is complete, allowing the RewriteDriver to be recycled if FetchComplete() got invoked as well. Called by RewriteContext to let RewriteDriver know it will be continuing on the fetch in background, and so it should defer doing full cleanup sequences until DetachedFetchComplete() is called. Call DetermineEnabled() on each filter. Should be called after the property cache lookup has finished since some filters depend on pcache results in their DetermineEnabled implementation. If a subclass has filters that the base HtmlParse doesn't know about, it should override this function and call DetermineEnabled on each of its filters, along with calling the base DetermineEnabledFiltersImpl. For all enabled filters the CanModifyUrl() flag will be aggregated (or'ed) and can be queried on the can_modify_url function. Reimplemented from net_instaweb::HtmlParse. Used by CacheExtender, CssCombineFilter, etc. for rewriting domains of sub-resources in CSS. If the value of X-PSA-Blocking-Rewrite request header matches the blocking rewrite key, set fully_rewrite_on_flush flag. Executes an Flush() if RequestFlush() was called, e.g. from the Listener Filter (see set_event_listener below). Consider an HTML parse driven by a UrlAsyncFetcher. When the UrlAsyncFetcher temporarily runs out of bytes to read, it calls response_writer->Flush(). When that happens, we may want to consider flushing the outstanding HTML events through the system so that the browser can start fetching subresources and rendering. The event_listener (see set_event_listener below) helps determine whether enough "interesting" events have passed in the current flush window so that we should take this incoming network pause as an opportunity. Asynchronous version of the above. Note that you should not attempt to write out any data until the callback is invoked. (If a flush is not needed, the callback will be invoked immediately). Returns the property page which contains the cached properties associated with the current URL and fallback URL (i.e. without query params). This should be used where a property is interested in fallback values if actual values are not present. Cleans up the driver and any fetch rewrite contexts, unless the fetch rewrite got detached by a call to DetachFetch(), in which case a call to DetachedFetchComplete() must also be performed. Initiates an In-Place Resource Optimization (IPRO) fetch (A resource which is served under the original URL, but is still able to be rewritten). proxy_mode indicates whether we are running as a proxy where users depend on us to send contents. When set true, we will perform HTTP fetches to get contents if not in cache and will ignore kRecentFetchNotCacheable and kRecentFetchFailed since we'll have to fetch the resource for users anyway. Origin implementations (like mod_pagespeed) should set this to false and let the serve serve the resource if it's not in cache. If proxy_mode is false and the resource could not be found in HTTP cache, async_fetch->Done(false) will be called and async_fetch->status_code() will be CacheUrlAsyncFetcher::kNotInCacheStatus (to distinguish this from a different reason for failure, like kRecentFetchNotCacheable). Note that if the request headers have not yet been set on the driver then they'll be taken from the fetch. See FetchResource. There are two differences: Initiates an async fetch for a rewritten resource with the specified name. If url matches the pattern of what the driver is authorized to serve, then true is returned and the caller must listen on the callback for the completion of the request. If the driver is not authorized to serve the resource for any of the following reasons, false is returned and the callback will -not- be called - the request should be passed to another handler. In other words there are three outcomes for this routine: In even other words, if this routine returns 'false' then the callback will not be called. If the callback -is- called, then this should be the 'final word' on this request, whether it was called with success=true or success=false. Note that if the request headers have not yet been set on the driver then they'll be taken from the fetch. Override HtmlParse's FinishParse to ensure that the request-scoped cache is cleared immediately. Note that the RewriteDriver can delete itself in this method, if it's not externally managed, and if all RewriteContexts have been completed. Reimplemented from net_instaweb::HtmlParse. As above, but asynchronous. Note that the RewriteDriver may already be deleted at the point the callback is invoked. The scheduler lock will not be held when the callback is run. Return true if we must flatten css imports, either because the filter is enabled explicitly or because it is enabled by PrioritizeCriticalCss. Overrides HtmlParse::Flush so that it can happen in two phases: FlushAsync is prefered for event-driven servers. Reimplemented from net_instaweb::HtmlParse. Returns the list of cohorts that should be read in based on our options. Creates and registers an inline attribute resource slot for rewriting. If this is the first time called for this position, a new slot will be returned. On subsequent calls, the original slot will be returned so that rewrites are propagated between filters. Creates and registers an inline resource slot for rewriting. If this is the first time called for this position, a new slot will be returned. On subsequent calls, the original slot will be returned so that rewrites are propagated between filters. Creates and registers a HtmlElement slot for rewriting. If this is the first time called for this position, a new slot will be returned. On subsequent calls, the original slot will be returned so that rewrites are propagated between filters. Create and and registers a source set slot collection for rewriting all the images in the srcset attribute of an . Also creates the neccessary resources using the provided filter's policy. If this is the first time called for this element + attr, a new collection will be returned. On subsequent calls, the original collection will be returned so that rewrites are propagated between filters. All filters using this call are expected to have the same values for AllowUnauthorizedDomain() and IntendedForInlining(). Increment reference count for misc. async ops that need the RewriteDriver kept alive. Increment reference count for misc async ops that should be waited for before doing rendering for current flush window. Report error message with description of context's location (such as filenames and line numbers). context may be NULL, in which case the current parse position will be used. Method to start a resource rewrite. This is called by a filter during parsing, although the Rewrite might continue after deadlines expire and the rewritten HTML must be flushed. Returns InitiateRewrite returns false if the system is not healthy enough to support resource rewrites. Log the given debug message(s) as HTML comments after the given element, if not NULL, it has not been flushed, and if debug is enabled. The form that takes a repeated field is intended for use by CachedResult, e.g: InsertDebugComment(cached_result.debug_message(), element); Messages are HTML-escaped before being written out to the DOM. The JS to detect above-the-fold images should only be enabled if one of the filters that uses critical image information is enabled, the property cache is enabled (since the critical image information is stored in the property cache), and it is not explicitly disabled through options. Returns true if some ResourceUrlClaimant has staked a claim on given URL. If this returns true, CreateInputResource will fail, but it's probably not worth logging any debug filter hints about that. log_record() always returns a pointer to a valid AbstractLogRecord, owned by the rewrite_driver's request context. Attempts to lookup the metadata cache info that would be used for the output resource at url with the RewriteOptions set on this driver. If there is a problem with the URL, returns false, and *error_out will contain an error message. If it can determine the metadata cache key successfully, returns true, and eventually callback will be invoked with the metadata cache key and the decoding results. After calling this method, the driver should not be used for anything else. Checks to see if the input_url has the same origin as and the base url, to make sure we're not fetching from another server. Does not consult the domain lawyer, and is not affected by AddDomain(). Precondition: input_url.IsWebValid() Returns true if we may cache extend Css, Images, PDFs, or Scripts respectively. Checks to see if we can write the input_url resource in the domain_url taking into account domain authorization, wildcard allow/disallow from RewriteOptions, and the intended use of the url's resource. After the function is executed, is_authorized_domain will indicate whether input_url was found to belong to an authorized domain or not. Determines whether the document's Content-Type has a mimetype indicating that browsers should parse it as XHTML. Return a mutable pointer to the response headers that filters can update before the first flush. Returns NULL after Flush has occurred. We expect to this method to be called on the HTML parser thread. Returns the number of images whose low quality images are inlined in the html page. Returns property page which contains cached properties associated with the current origin (host/port/protocol). May be NULL. Fills in the resource namer based on the give filter_id, name and options stored in the driver. Like AppendRewriteFilter, but adds the filter to the beginning of the pre-render chain. Returns the property page which contains the cached properties associated with the current URL. Should be called once everything in the property cache has been read, and the pages set on the object. Tries to register the given rewrite context as working on its partition key. If this context is the first one to try to handle it, returns NULL. Otherwise returns the previous such context. Must only be called from rewrite thread. Provides a mechanism for a RewriteContext to notify a RewriteDriver that a certain number of rewrites have been discovered to need to take the slow path. Indicates that a Flush through the HTML parser chain should happen soon, e.g. once the network pauses its incoming byte stream. Rewrites CSS content to absolutify any relative embedded URLs, streaming the results to the writer. Returns 'false' if the writer returns false or if the content was not rewritten because the domains of the gurl and resolved_base match. input_css_base contains the path where the CSS text came from. output_css_base contains the path where the CSS will be written. Provides a mechanism for a RewriteContext to notify a RewriteDriver that it is complete, to allow the RewriteDriver to delete itself or return it back to a free pool in the ServerContext. This will also call back into RewriteContext::Propagate, letting it know whether the context is still attached to the HTML DOM (and hence safe to render), and to do other bookkeeping. If 'permit_render' is false, no rendering will be asked for even if the context is still attached. Make the rewrite_worker tasks run on the request thread. This must be called immediately after initializing the driver, before it starts processing the request. Inserts the critical images present on the requested html page. It takes ownership of critical_images_info. This should only be called by the CriticalImagesFinder, normal users should just be using the automatic management of critical_images_info that CriticalImagesFinder provides. This should only be called by the CriticalSelectorFinder. Indicate that this RewriteDriver will be explicitly deleted, and thus should not be auto-deleted at the end of the parse. This is primarily for tests. This is relevant only when fully_rewrite_on_flush is true. When this is set to true, Flush of HTML will not wait for async events while it does wait when it is set to false. If this is set to true, during a Flush of HTML the system will wait for results of all rewrites rather than just waiting for cache lookups and a small deadline. Note, however, that in very rare circumstances some rewrites may still be dropped due to excessive load. Note: reset every time the driver is recycled. Sets a maximum amount of time to process a page across all flush windows; i.e., the entire lifecycle of this driver during a given pageload. A negative value indicates no limit. Setting fully_rewrite_on_flush() overrides this. Sets the num_initiated_rewrites_. This should only be called from test code. Takes ownership of 'options'. pool denotes the pool of rewrite drivers that use these options. May be NULL if using custom options. Sets whether or not there were references to urls before the base tag (if there is a base tag). This variable has document-level scope. It is reset at the beginning of every document by ScanFilter. Set the pointer to the response headers that filters can update before the first flush. RewriteDriver does NOT take ownership of this memory. Declares whether the current document is AMP or not. Prior to calling this, all HTML events are buffered, to avoid waking up filters that inject scripts. Utility function to set/clear cookies for PageSpeed options. gurl is the URL of the request from which the host is extracted for a cookie attribute. Reinitializes request_headers_ (a scoped ptr) with a copy of the original request headers. Note that the fetches associated with the driver could be using a modified version of the original request headers. There MUST be exactly 1 call to this method after a rewrite driver object has been constructed or recycled, before the RewriteDriver is used for request processing. This method also sets up the user-agent and device properties. Sets a server context enabling the rewriting of resources. This will replace any previous server context. Set a fetcher that will be used by RewriteDriver for current request only (that is, until Clear()). RewriteDriver will take ownership of this fetcher, and will keep it around until Clear(), even if further calls to this method are made. Determines if an URL relative to the given input_base needs to be absolutified given that it will end up under output_base: Override HtmlParse's StartParseId to propagate any required options. Note that if this (or other variants) returns true you should use FinishParse(), otherwise Cleanup(). Reimplemented from net_instaweb::HtmlParse. Switches the driver back to running rewrite_worker tasks using the QueuedWorkerPool. This can be called when we are retiring a server-request on behalf of the client (e.g. after a deadline was exceeded), but want background optimization to continue. It can no longer continue on the request thread. Convenience method to return the trace context from the request_context() if both are configured and NULL otherwise. Convenience methods to issue a trace annotation if tracing is enabled. If tracing is disabled, these methods are no-ops. Update the PropertyValue named 'property_name' in dom cohort with the value 'property_value'. It is the responsibility of the client to ensure that property cache and dom cohort are enabled when this function is called. It is a programming error to call this function when property cache or dom cohort is not available, more so since the value payload has to be serialised before calling this function. Hence this function will DFATAL if property cache or dom cohort is not available. Wait for outstanding Rewrite to complete. Once the rewrites are complete they can be rendered. Wait for outstanding rewrite to complete, including any background work that may be ongoing even after results were reported. Note: while this guarantees that the result of the computation is known, the thread that performed it may still be running for a little bit and accessing the driver. Writes the specified contents into the output resource, and marks it as optimized. 'inputs' described the input resources that were used to construct the output, and is used to determine whether the result can be safely cache extended and be marked publicly cacheable. 'content_type' and 'charset' specify the mimetype and encoding of the contents, and will help form the Content-Type header. 'charset' may be empty when not specified. Note that this does not escape charset. Callers should take care that dangerous types like 'text/html' do not sneak into content_type. Cohort for dependency information. This is written at different time than kDomCohort, and might not be in use for some requests, depending on settings. This string identifies, for the PropertyCache, a group of properties that are computed from the DOM, and thus can, if desired, be rewritten on every HTML request. Property Names in DomCohort. Tracks the timestamp when we last received a request for this url.
http://modpagespeed.com/psol/classnet__instaweb_1_1RewriteDriver.html
CC-MAIN-2017-30
refinedweb
4,158
64.3
Answered by: How to downgrade Visual Studio for Mac? Question - User2823 posted I need to downgrade from 7.2 to 7.1.5.2 Transitive project.json dependencies are broken in 7.2Tuesday, October 10, 2017 8:46 AM Answers - All replies - User18049 posted You should be able to get previous versions from the account page. What is the problem with transitive project.json dependencies in VS Mac 7.2?Tuesday, October 10, 2017 9:00 AM - User29525 posted @mattward From what I can see the account page only has Xamarin Studio, not Visual Studio for Mac. @RobLander Did you find the installation you needed?Tuesday, October 10, 2017 10:07 AM - - User29525 posted Thanks @mattwardTuesday, October 10, 2017 10:34 AM - User2823 posted Thanks @mattward. About regression: - I have portable project with project.json file referencing Xamarin.Forms - I have iOS project with "empty" project.json file, that references portable project - In VS 7.1 iOS compiles fine because Xamarin.Forms dependency is transitioned to iOS project - In VS 7.2 I have tonns of "The type or namespace 'Xamarin' could not be found" errorsTuesday, October 10, 2017 10:46 AM - User2823 posted I've also tried to downgrade Xamarin.iOS and Mononpackages leaving VS 7.2 but that doesn't help. So it seems like it is VS related.Tuesday, October 10, 2017 10:48 AM - User18049 posted Looks like something has changed in NuGet 4.3 maybe. I created a new Xamarin.Forms project using VS Mac 7.2, then uninstalled the Xamarin.Forms NuGet package and added a project.json file to the PCL project (also removed the : { "dependencies": { "Xamarin.Forms": "2.4.0.280" }, "frameworks": { ".NETPortable,Version=v4.5,Profile=Profile111": {} } } Then uninstalled the NuGet package from the iOS project and added a project.json file: ``` { "frameworks": { "xamarin.ios10": { "imports": "portable-net45+win81+wp80+wpa81" } }, "runtimes": {"win-x86" : {} } } ``` Then reloaded the solution. The build fails in VS Mac 7.2 due to missing Xamarin.Forms types but works in VS Mac 7.1. The problem seems to be that the generated project.lock.json file does not have any Xamarin.Forms information when the NuGet restore is run using VS Mac 7.2. The main difference between VS Mac 7.1 and 7.2 is that NuGet was updated from 4.0 to 4.3.1. However using NuGet.exe 4.3.0 from the command line and running a restore, or using msbuild /t:restore, both seem to generated project.lock.json files with Xamarin.Forms information in them. Just VS Mac 7.2 does not. A potential workaround here would be to disable automatic package restore and run the restore outside VS Mac using msbuild /t:restoreor nuget.exe.Wednesday, October 11, 2017 5:46 PM - User2823 posted So, it's a some kind of regression in newer nuget. Is it going to be fixed in Visual Studio for Mac updates?Wednesday, October 25, 2017 9:55 AM - User18049 posted Looks like a bug in VS for Mac. On upgrading to a more recent NuGet version some code in VS for Mac was removed due to a restructuring of various classes in NuGet itself. So it looks like project reference information is not being correctly setup when a restore is happening. I believe this then causes the transitively referenced NuGet packages to not be added to the project.lock.json. The lock file is also missing project references.Wednesday, October 25, 2017 10:59 AM - User18049 posted @RobLander - I have created a patch NuGet addin dll that should fix the problem with project.json files for VS Mac 7.2. You can download it from GitHub. Download that MonoDevelop.PackageManagement.dll file from GitHub. You may need to unblock it: xattr -d -r com.apple.quarantine MonoDevelop.PackageManagement.dll Then you can replace the existing NuGet addin .dll. I tested this locally by doing the following: // Make backup of NuGet addin .dll cp /Applications/Visual\ Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.PackageManagement/MonoDevelop.PackageManagement.dll /Applications/Visual\ Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.PackageManagement/MonoDevelop.PackageManagement.dll-backup // Replace existing NuGet addin .dll with new one. cp MonoDevelop.PackageManagement.dll /Applications/Visual\ Studio.app/Contents/Resources/lib/monodevelop/AddIns/MonoDevelop.PackageManagement/MonoDevelop.PackageManagement.dll Then the iOS project I was using for testing was building successfully and the project.lock.json file had the expected project references.Wednesday, October 25, 2017 9:07 PM - User2823 posted Btw maybe 15.4.2 service release contains this fix?Wednesday, November 1, 2017 9:46 AM - User18049 posted @RobLander - A fix will be in Visual Studio for Mac 7.3 at some point, not released currently. There is no fix for VS Mac 7.2 apart from using the patched binaries I linked to.Wednesday, November 1, 2017 9:48 AM - User253803 posted I updated to version 7.6.11 and it broke my project. I contacted support via visualstudio.com and was told there is no way to downgrade VS for MAC now. I'm posting here in the hopes that someone can point me to a repo or link somewhere to previous VS for mac versions?Wednesday, November 7, 2018 4:14 PM - User18049 posted @"RonanA.7363" How did it break your project?Wednesday, November 7, 2018 9:54 PM - User253803 posted @mattward In fairness, it didn't break the project. I can still build it, I just can't deploy it to simulator or device. The options (Project > Configuration > Device) that are usually there are disabled. If I set the start up project to the android project, the options are populated and work. If i then use the drop down to change project from android to iOS, the application crashes with the message: "A fatal error has occurred Details of this error have been automatically sent to Microsoft for analysis. Visual Studio will now close." I've tried updating xcode and then opening xcode and it installed some extensions. I restarted the machine but no change. Resintalled VS For mac and no change. I tried creating a new Xamarin Forms project and the same issues as my own project. Xcode version: 10.0 (10A255) If you have any ideas, I'm definitively willing to try it as right now I have no way to test my project.Thursday, November 8, 2018 9:19 AM - User18049 posted Can you attach the IDE log after reproducing the error? Thanks.Thursday, November 8, 2018 11:23 AM - User253803 posted I've pulled this out of the logs that seems to be the problem: FATAL ERROR [2018-11-09 09:59:38Z]: Visual Studio failed to start. Some of the assemblies required to run Visual Studio (for example gtk-sharp)may not be properly installed in the GAC. System.InvalidOperationException: Failed to compare two elements in the array. ---> System.ArgumentOutOfRangeException: Length cannot be less than zero. Parameter name: length at System.String.Substring (System.Int32 startIndex, System.Int32 length) [0x00087] in /Users/builder/jenkins/workspace/build-package-osx-mono/2018-02/external/bockbuild/builds/mono-x64/mcs/class/referencesource/mscorlib/system/string.cs:1257 at MonoDevelop.IPhone.IPhoneSimulatorExecutionTargetGroup.ParseIPhone (System.String name, System.String& device, System.Int32& version, System.String& subversion, MonoDevelop.IPhone.IPhoneSimulatorExecutionTargetGroup+IPhoneDeviceType& type) [0x00022] in /Users/vsts/agent/2.141.1/work/1/s/md-addins/MonoDevelop.IPhone/MonoDevelop.IPhone/Execution/IPhoneSimulatorExecutionTarget.cs:190 In case I'm wrong, I'm attached the full logFriday, November 9, 2018 10:23 AM - User18049 posted The crash looks like it is the same as that reported on the developer community site. This bug is fixed in Visual Studio for Mac 7.7.0.1552. There is also a workaround mentioned in the developer community forum post. I believe the problem is that more recent versions of Xcode one of the simulators is 'X' which breaks Visual Studio for Mac. As far as I am aware downgrading Visual Studio for Mac will not help in this case. A workaround is to rename the simulators in Xcode so that no iOS simulator name begins with a number (i.e. 0-9) or the letter X or S.Friday, November 9, 2018 1:47 PM - User253803 posted @mattward spot on sir. well played I had a custom simulator set up called "11.4 iPhone 5". Once i renamed that to "iPhone 5 11.4", everything worked. Thanks very much for the helpSunday, November 11, 2018 8:50 PM
https://social.msdn.microsoft.com/Forums/en-US/bcaca47d-d196-4dbc-82a6-440e3022be38/how-to-downgrade-visual-studio-for-mac?forum=xamarinvisualstudio
CC-MAIN-2021-31
refinedweb
1,413
53.07
A Novice Journey of integrating Huawei Video Kit feat. Node js Local Server (An Educational Video Application) Introduction HMS Video Kit allow us to play video by using a URL or multiple URLs that contain an address of a video. The latest version of this kit allow us to playback videos but in the later version it will support both editing and hosting of videos. This service helps us to build a superb video experience for our app users. Use Cases - The service can be used in a news app that contains small videos of the reported incident. - The service can be used as a video editing app. - The service can be used as a promotion video in your application. - The service can be used as an educational video related application. In this article we will be working on one of the use case and that is education video related application. For that we need two major players in any software development and that is server and client. Demo Server Side (Node Server) Prerequisite - We must have latest version of Node installed. - We must have latest version of Visual Studio Code installed. - Laptop/desktop and Huawei mobile device must share same Wi-Fi connection. Tool’s required 1. Express js is a Node js web application server framework, designed for building single-page, multi-page, and hybrid web applications. It is the de facto standard server framework for node.js. We don’t have to repeat same code over and over again using Express js. Node.js is a low-level I/O mechanism which has an HTTP module. If you just use an HTTP module, a lot of work like parsing the payload, cookies, storing sessions, selecting the right route pattern based on regular expressions will have to be re-implemented. With Express.js, it is just there for us to use.Express.js basically helps us manage everything, from routes, to handling requests and views. 2. Request module is by far the most popular (non-standard) Node package for making HTTP requests. Actually, it is really just a wrapper around Node’s built in http module, so we can achieve all of the same functionality on your own with http, but request just makes it a whole lot easier. const request = require('request'); request('', function(err, res, body) { console.log(body); }); The code above submits an HTTP GET request to developer.huawei our application: a) url: The destination URL of the HTTP request b) method: The HTTP method to be used (GET, POST, DELETE, etc.) c) headers: An object of HTTP headers (key-value) to be set in the request d) form: An object containing key-value form data Creating Project First we need to find a space in our machine and create a folder. We can name the folder whatever we want. Open Visual Studio Code, navigate to particular folder location which we created using cd command in the terminal of VS code. Run the below command to create package.json file. npm init Answer the questions presented, and you will end up with a package.json that looks like this: {"name": "android-node-server","version": "1.0.0","description": "It is a server for HMS Push Notification Kit","main": "app.js","scripts": {"test": "echo \"Error: no test specified\" && exit 1"},"author": "Sanghati","license": "ISC"} We need to run the above command when we start a new project. Also whatever tools we install in this project, a dependency will be added in the package.json file and we use the below command to install our dependencies in node_modules folder. npm install We may need to run this every time we manually add a dependency to our package.json file. Install Express We run the below command to install Express js in our project. npm install express –save — save, will ensure to add express dependency to our package.json file. Create app.js file To create app.js file using VS code terminal we need touch. Touch command creates an empty file if it doesn’t exist. Run the below command to create app.js file. npm install -g touch-cli touch app.js Import Express Open app.js in our favourite text editor, in my case VS Code. Import Express into our application: const express = require('express'); const app = express(); Create & Run Server After importing Express, we have to create a server where applications and API calls can connect. We can do that with Express listen method. app.listen(8080, () => { console.log('app working on 8080') }); We can test your server by running the below command. node app.js After running the above command we should see this: app working on 8080 Install Nodemon We will get tired of restarting our server every time we make any changes. We can fix this problem by using a package called Nodemon. Run below command to install nodemon globally. npm install -g nodemon Now, to run the project we will run below command instead node app.js. nodemon app.js Adding Routes Now let’s create a route that we can access in the browser. Browsers only serve pages using a GET request, so we need to create a route using Express get method like this: app.get('/', function(req, res) {res.send("Yep it's working");}); If we visit on our web browser, we should see this: For our application to receive POST requests, we will need a package called body-parser, which we can install by running: npm install body-parser --save body-parser extract the entire body portion of an incoming request stream and exposes it on req.body. Now, we can import body-parser into our application like this: const express = require('express'); const app = express(); const MongoClient = require('mongodb').MongoClient; const bodyParser = require('body-parser');//Body Parser ...app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); Middleware Middleware functions can access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. These functions are used to modify req and res objects for tasks like parsing request bodies, adding response headers, etc. Here we have used it only for a specific path. app.use(express.static(__dirname + '/public')); app.use(express.static(__dirname + '/img')); app.use(express.static(__dirname + '/videos')); Placing Videos We need to place all our videos in a video server folder as shown below: GET HTTP We will be using GET HTTP verb to send all videos from local server to android client. Let’s create our get route. Tips: Replace the link of videos and images with your respective machine IPV4 address and port number. Port number is the number your local server is running. Client Side (Android Native) Prerequisite - Must have a Huawei Developer Account. - Must have a Huawei phone with HMS 4.0.0.300 or later. - Must have a laptop or desktop with Android Studio, Jdk 1.8, SDK platform 26 and Gradle 4.6 installed. Settings Needed - First we need to create an app or project in the Huawei app gallery connect. - Provide the SHA Key and App Package name of the android project in App Information Section. - Provide storage location in convention section under project setting. - After completing all the above points we need to download the agconnect-services.json from App Information Section. Copy and paste the json file in the app folder of the android project. - Copy and paste the maven url inside the repositories of buildscript and allprojects respectively (project build.gradle file) maven { url ‘' } 6. Copy and paste the class path inside the dependency section of project build.gradle file. classpath ‘com.huawei.agconnect:agcp:1.3.1.300’ 7. Copy and paste the plugin in the app build.gradle file apply plugin: ‘com.huawei.agconnect’ 8. Copy and paste the library in the app build.gradle file dependencies section. implementation ‘com.huawei.hms:videokit-player:1.0.1.300’” /> 10. Now Sync the gradle. Retrofit Library We need Retrofit in order to call our restful APIs. Include the following dependencies in app build.gradle file. implementation ‘com.squareup.retrofit2:retrofit:2.8.1’ implementation ‘com.squareup.retrofit2:converter-gson:2.5.0’ Create Model Class Create a new java class and name it Videos. Open Videos class, copy and paste below code: Create RetrofitInterface Create a new Interface class and name it RetrofitInterface. Open the class, copy and paste below code: Create Retrofit Instance Create a retrofit instance in the MainActivity class and get the list of all videos from local server as shown below: Tips: Replace the BASE_URL with your machine IPV4 address with local server port number as shown below: private String BASE_URL =""; Show all video list Use SimpleAdapter with ListView to show all the list as shown below: Initializing WisePlayer First we need to implement a class that inherits from Application. In onCreate() method we need to initialize WisePlayerFactory.initFactory() of the WisePlayer SDK. This class is primarily used for initialization of global state before the first Activity is displayed. Tips: Do not forget to put the name of your application class in the AndroidManifest file as show below: User Interface There are two widgets in which WisePlayer use to play the videos: 1. SurfaceView widget 2. TextureView widget We will be using SurfaceView widget in our layout as shown below: Tips: Create a ShowVideo activity class and use the above code in the respective ShowVideo layout. Implements In the ShowVideo class, implement WisePlayer.ReadyListener and Callback APIs as shown below: import android.view.SurfaceHolder.Callback; import com.huawei.hms.videokit.player.WisePlayer; public class ShowVideo extends AppCompatActivity implements WisePlayer.ReadyListener, Callback SurfaceView Widget Implementation Initialize SurfaceView in onCreate() method and add layout listeners as show below: surfaceView = findViewById(R.id.surface_view); SurfaceHolder surfaceHolder = surfaceView.getHolder(); surfaceHolder.addCallback(this); surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); Create WisePlayer Instance A WisePlayerFactory instance is returned from Application class when the WisePlayer initialization is successful. WisePlayer player = HMSVideoKitApplication.getWisePlayerFactory().createWisePlayer(); Playback Parameters Set the playback parameters as shown below: player.setVideoType(PlayerConstants.PlayMode.PLAY_MODE_NORMAL); player.setBookmark(0); player.setCycleMode(PlayerConstants.CycleMode.MODE_NORMAL); In the above code, the setVideoType indicates that whether we want to play the video with audio or only audio. The setBookmark helps us to start the video from a certain time like we can start video after 30 seconds. The setCycleMode helps us to repeat the video again. Set URL’s We can set one or multiple URL’s video by using setPlayUrl method as show below: player.setPlayUrl(getIntent().getStringExtra("url")); Display the Video Set a view to display the video as shown below: @Override public void surfaceCreated(SurfaceHolder holder) { player.setView(surfaceView); } Play the Video btnPlay.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { player.start(); } }); Pause the Video btnPause.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { player.pause(); } }); Stop the Video btnStop.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { player.stop(); } }); Final Code Tricks An error will occur while using local server with android client and that is “CLEARTEXT communication to 192.168.1.4 not permitted by network security policy”. In order to avoid this error put android:usesCleartextTraffic=”true” in AndroidManifest application section. Conclusion We learn how to create a local server, create API to call the list of all the videos on the client side and show the video using HMS Video Kit. It does not take more than a day to integrate the entire scenario. I hope after reading this article we will get to see more beautiful application in AppGallery. Feel free to comment, share and like the article. Also you can follow me to get awesome article like this every week. For more reference
https://sanghati.medium.com/a-novice-journey-of-integrating-huawei-video-kit-feat-901d17c7ec87?source=post_internal_links---------1----------------------------
CC-MAIN-2021-43
refinedweb
1,962
57.47
- Issued: - 2019-10-16 - Updated: - 2019-10-16 RHBA-2019:3004 - Bug Fix Advisory Synopsis OpenShift Container Platform 4.1.20 bug fix update Type/Severity Bug Fix Advisory Topic Red Hat OpenShift Container Platform release 4.1.1.20. See the following advisory for the RPM packages.20 The image digest is sha256:a7e97365d16d8d920fedd3684b018b780337e069deb1dd8500e866c0d6110334 All OpenShift Container Platform 4.1 users are advised to upgrade to these updated packages and images. Solution Before applying this update, ensure all previously released errata relevant to your system have been applied. For OpenShift Container Platform 4.1 see the following documentation, which will be updated shortly for release 4.1.20, - 1721597 - Template Service Broker does not clean up Cluster Scoped Resources - BZ - 1725524 - m4.xlarge is not supported in ap-northeast-2b - BZ - 1732559 - Fix CSV Metadata to pass CVP validation - BZ - 1736001 - OAuth login page has wrong favicon - BZ - 1750809 - [4.1] RequestHeader IdP - `oc login` fails because oauth-server sets incompatible headers - BZ - 1751320 - The Elasticsearch deployment disappear after cluster upgrade - BZ - 1751388 - Can't display more than two Update Channels on the 'Create Operator Subscription' page - BZ - 1751859 - All downstream 4.1 elasticsearch5 builds on brew are stuck in "building" state - BZ - 1752521 - Route with 2 named ports sends traffic to wrong one with the correct one is down [4.1 backport] - BZ - 1752918 - [4.1.z] must-gather doesn't collect config.imageregistry resource - BZ - 1753293 - [4.1] [GCP] e2e failure: namespace is empty but is not yet removed - BZ - 1753394 - OLM Operator is showing wrong git commitId on 4.1.0-0.nightly-2019-09-18-141128 - BZ - 1754924 - CI: fix verify tools build failure - BZ - 1755646 - upgrade from 4.1.9 to 4.1.11 failed by 'replicaset-controller Error creating: pods "packageserver-6474c74cdd-" is forbidden: error looking up service account openshift-operator-lifecycle-manager/packageserver: serviceaccount "packageserver" not found' - BZ - 1758636 - Add prerelease-4.2 channel to UI in 4.1.z - BZ - 1759667 - Add stable-4.2, fast-4.2, and candidate-4.2 channels to UI in 4.1.z CVEs References (none) Red Hat OpenShift Container Platform 4.1 for RHEL 8 Red Hat OpenShift Container Platform 4.1 for RHEL 7 The Red Hat security contact is secalert@redhat.com. More contact details at.
https://access.redhat.com/errata/RHBA-2019:3004
CC-MAIN-2022-40
refinedweb
384
51.04
haystack Port of haystack java toolkit. Use it to connect and work with SkySpark. Project Haystack defines a tagging model and REST API for sensor systems such as HVAC, lighting, and energy equipment. SkySpark is a project to serve your IoT data and make analytics. To use this package you need installed SkySpark server (and apply a some license). Authentication methods are skipped now, so you can use this library from SkySpark pub folder. Just create in you SkySpark's folder root 'pub' directory and place here your app. It will be accessible by 'localhost_or_smth:80/pub/my_app.html'. Only client side is implemented at this moment, you can't use it from server or command-line apps. Usage A simple usage example (include this into HTML page): import 'package:haystack/core.dart'; import 'package:haystack/hclient.dart'; final String uri = ""; final String user = "haystack"; final String pass = "testpass"; HClient client; main() { client = new HClient(uri, user, pass); client.open().then((_) => client.about()).then((HDict result) { print(result); }); } Features and bugs Please file feature requests and bugs at the issue tracker. Libraries - haystack Copyright (c) 2015, BAS Services & Graphics, LLC. Licensed under the Academic Free License version 3.0 - haystack.hclient Copyright (c) 2015, BAS Services & Graphics, LLC. Licensed under the Academic Free License version 3.0
https://www.dartdocs.org/documentation/haystack/0.4.19/index.html
CC-MAIN-2017-09
refinedweb
218
59.9
Hi, I've been having a few headaches trying to get my code to compile. I feel like I've tried every way of changing "const"-ness, but still can't get this to compile with g++ in a cygwin environment. Would you be so kind to tell me where on earth I am going wrong? Below is the code and the compiler error. The error talks about "qualifiers" which I assume mean there's a "const" missing somewhere, but for the life of me, I can't figure out where. The compiler generates this error (g++):The compiler generates this error (g++):Code:#include <iostream> using namespace std; class Test { public: Test() : val(0) {} Test(int v) : val(v) {} int getVal() { return val; } bool operator<(const Test& rhs) { if (rhs.val < val) return false; else return true; } private: int val; }; template <class T> inline const T& mymax(const T& a, const T& b) { if (a < b) return b; else return a; } int main(int argc, char* argv) { int a1 = 10, a2 = 20; const Test t1(10); const Test t2(20); int result = mymax(a1,a2); const Test result2 = mymax(t1,t2); cout << "Largest value of ints is: " << result << endl; cout << "Largest value of Test(ints) is: " << result2.getVal() << endl; } I've no idea what this message is trying to tell meI've no idea what this message is trying to tell meCode:maxTamplate.cpp: In function `const T& mymax(const T&, const T&) [with T = Test]`: maxTemplate.cpp:37: instantiated from here maxTemplate.cpp:26: error: passing `const Test' as `this' argument of `bool Test::operator<(const Test&)' discards qualifiers
https://cboard.cprogramming.com/cplusplus-programming/139140-const-correctness-gone-wrong-i-suspect.html
CC-MAIN-2017-22
refinedweb
271
58.11
Mono 1.0 Beta 2 Contents - Mono Beta 2 Introduction. - Contents of the Beta. - Changes since Beta 1 - Known Issues - Logging bugs against Mono - Contributors - Contact Information Mono Beta 2 Novell is proud to introduce the second Beta release of Mono, an open source implementation of, This release is the second and last Mono 1.0 beta releases planned before our final release. This is an opportunity for developers outside of the contributing community to experience mono on their platform of choice. One of our main objectives is to make it easy for the novice or experienced Linux or Windows developer to start building applications on Linux or other platforms right away. We paid a lot of attention in this release to installation and package availability for the following platforms: Red Hat 9.0, Fedora Core 1, Novell SUSE 9.0, Novell SUSE SLES 8, MacOS X and Microsoft Windows 2000 and XP. This Beta is signature compatible with the .NET Framework 1.1. Although we are signature compliant for every assembly we ship, not all the functionality is available, most importantly the following are not. Gtk# is part of this release as well: a set of .NET bindings for the Gtk+ toolkit. This library allows you to build fully native Gnome application using Mono and includes support for user interfaces built with the Glade interface builder.. Applications built using GTK# will run on many platforms including Linux, Microsoft Windows (with our Microsoft Windows GTK+/GTK# runtime support) and Apple MacOS X. While our System.Windows.Forms implementation is making good progress and will allow user to run graphical .NET SWF based application on Linux, GTK# is the Mono project main graphical toolkit for cross platform application development and tight Linux integration. MonoDevelop In late 2003, a few developers from the Mono community began migrating SharpDevelop, a successful .NET open Source IDE from SWF on Windows to GTK# to Linux. We are pleased to release the result of their efforts, MonoDevelop with Mono 1.0 Beta 2 and the final release of Mono 1.0. In a short amount of time, MonoDevelop has become a very complete IDE for Mono including features such as syntax highlighting, code completion, import of Microsoft Visual Studio .NET solutions and it integrates the MonoDoc system. Installing Mono Beta-0.95.tar.gz cd mono-0.95 ./configure make install Optional Packages Libgdiplus and WineLib are optional packages, you only need those if you intent to use System.Drawing and Windows.Forms respectively. libgdiplus: tar xzf libgdiplus-0.8.tar.gz cd libgdiplus-0.8 Changes since the Beta1 release The following is a list of changes since the Mono 0.91 release: C# Compiler Bug Fixing Ben Maurer, Marek Safar, Martin Baulig, Miguel de Icaza Raja Harinath contributed plenty of bug fixes to Mono’s C# compiler for this release: Obsolete attributes are better supported, better error checking, all critical and blocker bugs have been fixed at this point. MCS went on a diet as well, and Ben shrunk 7 megabytes of memory usage in a typical bootstrap. pkg-config integration The Mono C# compiler now integrates with pkg-config: it is possible for library developers to ship a pkg-config description file for their libraries. For instance, to compile gtk-sharp and gtk-html software, now you can use: mcs -pkg:gtk-sharp sample.cs New Thread.Abort implementation Lluis architected and implemented a new implementation for Thread.Abort. This is one of the areas that could make the runtime unstable if a thread was aborted while it held unmanaged resources. Now a new architecture has been put in place that solves this problem. The Thread.Abort problem would actually surface in many different forms across the board, particularly in ASP.NET, those issues are now gone. Remoting Patrik Torstensson and Lluis Sanchez got our Runtime.Remoting stack to pass all of the regression tests for the first time: a combination of fixes in our runtime, the managed libraries and the Tcp transport were done to fix all of the outstanding problems. Mono Runtime The Mono runtime is now statically linked against libmono, which means that it’s simpler to debug it, but is also slightly faster. Plenty of bug fixing occurred in this release in every component of the runtime: io-layer, metadata reading, internal calls, code generation and optimizations. Zoltan added support for AsAny marshalling. Code generator improvements Massimiliano Mantione checked in his implementation of Arrays-Bounds-Check elimination for the Mono VM. If the JIT can infer that an index will remain within range, it can now eliminate the expensive tests for ranges. Use the -O=abcrem to use. Since this optimization is fairly expensive it is not turned on by default in the JIT, it is recommended that you use this with the Ahead-of-Time compilation flag (–aot). The old setup that used gcc exception information in internal calls has been deprecated and instead Zoltan has optimized the code paths to use the regular trampoline-based approach. This is not only more reliable, but does not depend on gcc/glibc to be kept in sync. X86 code generator improvements Patrik Torstensson made the register allocator favor physical registers in favor of virtual registers and reduced the spill/load in 30%. He also added various peephole optimizations to use membase addressing compared on X86 and implemented a new allocator for operations on 64-bit values (shl and shr are now inlined as opposed to use helper routines). Paolo Molaro continued to improve the PowerPC code generation engine and added support for the POWER 4 CPUs. The entire Mono stack is now routinely used on MacOS X, and even large applications like MonoDevelop are running flawlessly on it. Dick has fixed various problems in our io-layer related to the MacOS X. I/O libraries Dick Porter has contributed the File Locking support to File and FileStreams. We use advisory Unix locks, but notice that Mono will respect the more strong Windows semantics across Mono processes. Notice that by default files are open with the file share mode set to none, this might break older software that did not take this into account. Java and IKVM Mono ships with the new code drop from IKVM with the new renamed commands. Please visit the IKVM site for more information. System.Data Konstantin Triger and Boris Kirzner from Mainsoft contributed a new implementation of the internal data storage in System.Data. Data is now stored in typed arrays for improved performance, less objects are allocated. Container arrays never shrink, but the allocated space is reused between different rows. Atsushi Enomoto and S Umadevi continued doing bug fixing on this namespace: A large number of changes were done in the DataSet xml handling. It now infers, creates and handles relationships, namespaces, simple content columns and several other elements better than before. XmlDataDocument has been tuned for performance, and many problems were fixed there. Atsushi also improved xsd.exe: TypedDataSetGenerator supports more functionality such as DataRelation based features. xsd.exe now supports a custom ICodeProvider with the -generator option. This enables the user to generate code in any languages that have a CodeDomProvider. XSP, System.Web and Web.Services Gonzalo added support for sessions in Web Services and fixed the cookieless sessions and chunked encoding (server size). XSP now supports virtual hosts thanks to a patch from Jaroslaw Kowalsky. To configure virtual hosts, you can use a file with the extension .webapp, it looks like this: <web-application> <name>Root</name> <vpath>/</vpath> <path>/home/mono/public_html/html</path> <!-- vhost and port are ignored in xsp.exe --> <vhost>my.host.name</vhost> <vport>80</vport> </web-application> More details are available on the manual pages. Cesar Lopez continued his work on the JScript engine: switch, using, with, for, exceptions are now implemented. Anirban and Rafael continues to work on the VB.NET compiler grammar, and Dennis Hayes contributed to the continued effort to port the VB.NET runtime from Java to C#. Jordi, Ravindra and Sanjay continue to work on System.Drawing API. Now images can be rotated, bitmaps can be locked, GIF and TIFF files are supported and there were plenty of bug fixes. Most of the converters are done (only FontConverter is missing). All Hatch brushes are completed (with its 53 styles) Peter Bartok has continued his work on Windows.Forms, and many more applications run, most of the basic tests are now operational, and he is working now on a control-per-control basis to complete it. Notice that Windows.Forms is only shipping as an alpha preview with Mono 1.0 Bug fixing, completion On the runtime engine, Patrik Torstensson has fixed various outstanding complex problems related to app-domain code reuse. Matthijs ter Woord contributed to the Html32TextWriter class. Gert contributed many small API changes all throughout the class libraries as well as getting many different attributes correctly in our class libraries. A big help in our push for API signature compatibility. Mike Kestner did our API completion for EnterpriseServices. One of our main goals developers that contributed since the Mono Beta 1 are: Alon Gazit, Andreas Nahr, Andrew Arnott, B Anirban, Benjamin Jemlich, Ben Maurer, Bernie Solomon, Boris Kirzner, Carlos Alberto Cortes, Carlos Guzman, Cesar Nataran, Christopher McGinnis, David Sheldon, David Waite, Dennis Hayes, Derek Woo, Dick Porter, Duncan Mak, Elan Feingold, Erik Dasque, Fawad Halim, Francisco Figueiredo Jr., Gert Driesen, Gonzalo Paniagua, Gustav Munkby, Gustavo Giraldez, Iain McCoy, Jackson Harper, Jaroslaw Kowalski, Jean-Marc Andre, Jeroen Zwartepoorte, John Luke, Jonathan Pryor, Jordi Mas, Jörg Rosenkranz, Joshua Tauberer, Larry Ewing, Lluis Sanchez, Lluis Sanchez, Marek Safar, Mark Crichton, Martin Baulig, Martin Willemoes, Massimiliano Mantione, Miguel de Icaza, Mike Kestner, Mike Shaver, Neale Ferguson, Nick Drochak, Owen Fraser-Green, Paolo Molaro, Patrik Torstensson, Pawel Rozanski, Peter Bartok, Rachel Hestilow, Radek Doulik, Rafael Teixeira, Raja R Harinath, Ravindra Kumar, Robert Shade, Roops, Sachin Kumar, Sanjay Gupta, Sebastien Pouliot, S Umadevi, Todd Berman, Urs Muff, Vladimir Vukicevic and Zoltan Varga. This list is not complete, but we will update it in the next few weeks, if your name is missing, please let us know. For any direct feedback (excluding bug reports) regarding Mono 1.0 Beta 2 please contact monobeta@ximian.com. Other venues are available for discussion about Mono, including our mailing-lists and IRC channel (#mono on irc.gnome.org).
http://www.mono-project.com/docs/about-mono/releases/1.0.0-beta2/
CC-MAIN-2018-26
refinedweb
1,731
55.13
J J This question in Jmeter JMeter HTTP request example JMeter HTTP request example Concerning: how do I set path? also what do i need to do to get the helloworld servlet work? Thanks in advance> < download excel download excel hi i create an excel file but i don't i know how to give download link to that excel file please give me any code or steps to give download link file download file download I uploaded a file and saved the path in database. Now i want to download it can u plz provide code Java download Java download Hi, Is it possible to download the Java for Linux Centos 6? From where it can be downloaded for Linux Centos 6? Thanks download - SQL download i want download the mysql server Hi friend, The latest version of MySQL can be downloaded from. Read for more information. Download Java process of downloading and Installing Java Hi, How easy is the process of downloading and Installing Java? I have windows 7 machines. Can anyone tell me how to download and install Java? Thanks php download free php download free From where i download php free php download file script php download file script PHP script to download file in IE Download PDF file Download PDF file How to download PDF file with JSF download code from database download code from database how to download files from database how to download oracle 9i how to download oracle 9i how to download oracle 9i Download file - JSP-Servlet Servlet download file from server I am looking for a Servlet download file example how to download file how to download file How do I let people download a file from my page php download file code php download file code PHP code to download the files from the remote server File Download in jsp File Download in jsp file upload code is working can u plz provide me file download Free download java Free download java Hi, Where can I download java for absolutely free and quickly? Thanks JAVA S/W download JAVA S/W download Hi I'm new to JAVA, I want to download JAVA S/W from where I need to download. Please help me with the link. Thank you, Hi ! u can download JAVA from How to download Java How to download Java Hi, My windows 7 computer is ready. Now I want to download and install Java. Can anyone tell me How to download Java and then install on my computer. Thanks Download a file - Java Beginners Download a file Hi, I need a java code to download a file from a website and before downloading it should give me an option to browse and select the location to put the file. Thanks in advance URL folder download to local URL folder download to local I have a requirement to download all the folders and files inside from a SVN link to my local directory. Can you help how it could be done upload and download mp3 upload and download mp3 code of upload and download Mp3 file in mysql database using jsp and plz mention which data type used to store mp3 file in mysql database Download java 1.6 Download java 1.6 Hi, I have to develop a software in Java 1.6. From where I can Download java 1.6? Provide me the url Downloading java 1.6. Thanks download pdf files download pdf files pls help me,I don't know how to convert .doc,.docx files into pdf files and download that pdf files using servlet or jsp download jar - Framework download jar hi somebody could ya tell me from where i can get portal-ejb.jar php download file browser php download file browser limiting downloading file system via browser only through my web application Oracle 9i free download Oracle 9i free download where to download oracle 9i for database connectivity in j2ee File Download Security File Download Security Hello, I trying to create an application in PHP to prevent or provide file download security. So, I need the help of senior... files downloading. Please read this article about File Download Security Java Free download Java Free download Hi, What is the url for downloading Java for free? Is java Free? Can I get the url to download Java without paying anything? Actually I am student and learning Java. I have to download and install Java on my Java FTP Download Download. Thanks Hi, The easiest way to connect to FTP server and then download file from server is to use the Apache FTPClient library. View the complete example at FTP Server : Download file. and Build from Source Download and Build from Source Shopping cart application developed using Struts 2.2.1 and MySQL can be downloaded from... Of An Application Download Source Code Upload and Download multiple files Upload and Download multiple files Hello Sir/Madam, I need a simple code for upload and download multiple files(it may be image,doc and txt)which has to be store in database and also retrieve the file from database download - JSP-Servlet download here is the code in servlet for download a file. while...; /** * * @author saravana */ public class download extends HttpServlet...(); System.out.println("inside download servlet"); BufferedInputStream Running the Test Plan one visualizer because during the test run the JMeter will ensure that no sample is recorded more than once to the same file. Configuring JMeter The properties of JMeter can be modified by implementing changes in the jmeter.properties csv file download csv file download Hello Every one, when user clicks download button I want to retrieve data from mysql database table in csv format using java.if you know please send code. Thanks in Advance Please visit upload and download video upload and download video how to upload and download video in mysql databse using jsp? plz give me demo of this with table... 1)page.jsp...">Download File</a> 3)download.jsp: <%@page import What is Bandwidth or Download What is Bandwidth or Download? In this article you will learn about Bandwidth. Understanding the Bandwidth or data transfer is very important in the web hosting world. What is Bandwidth? Bandwidth is a measurement Templates Download - Java Beginners Photo upload, Download php download file from ftp php download file from ftp Need to download files from ftp using php. a simple example download image using url in jsp download image using url in jsp how to download image using url in jsp php download file from url php download file from url how to download multiple files in PHP using the URL's java/jsp code to download a video java/jsp code to download a video how can i download a video using jsp/serv Download Current Web Page As PDF Download Current Web Page As PDF Need a utility to save current web page as pdf file into specified directory... Thanks In Adv upload and download files - JSP-Servlet upload and download files HI!! how can I upload (more than 1 file) and download the files using jsp. Is any lib folders to be pasted? kindly... and download files in JSP visit to : Which java can i download? Which java can i download? Hello, i'm a beginner on java. Which java can i download for to exercise with my codes? Thanks in advance. nobody. And i download Eclipse java. But when i want to R.Ragavendran.. Actually i am in a urgent need of a program for Uploading... files and similarly when the user clicks the download option, he must be Upload and download file - JSP-Servlet Upload and download file What is JSP code to upload and download............ Now to download the word document file, try the following code...(); response.getOutputStream().flush(); %> Thanks I want to download a file.... Upload and Download images: 1)page.jsp: <%@ page language="java...; <a href="showimages.jsp">Download Image</a> 3 Open Source Download Manager Open Source Download Manager Open source Download manager A download manager is a computer program designed to download files from the Internet... Wide Web (with file downloading being of secondary importance). Download
http://www.roseindia.net/tutorialhelp/comment/89121
CC-MAIN-2013-48
refinedweb
1,369
60.85
I have a computation that can be divided into independent units and the way I'm dealing with it now is by creating a fixed number of threads and then handing off chunks of work to be done in each thread. So in pseudo code here's what it looks like # main thread work_units.take(10).each {|work_unit| spawn_thread_for work_unit} def spawn_thread_for(work) Thread.new do do_some work more_work = work_units.pop spawn_thread_for more_work unless more_work.nil? end end sleep 10 until work_units.empty? If you modify spawn_thread_for to save a reference to your created Thread, then you can call Thread#join on the thread to wait for completion: x = Thread.new { sleep 0.1; print "x"; print "y"; print "z" } a = Thread.new { print "a"; print "b"; sleep 0.2; print "c" } x.join # Let the threads finish before a.join # main thread exits... produces: abxyzc (Stolen from the ri Thread.new documentation. See the ri Thread.join documentation for some more details.) So, if you amend spawn_thread_for to save the Thread references, you can join on them all: (Untested, but ought to give the flavor) # main thread work_units = Queue.new # and fill the queue... threads = [] 10.downto(1) do threads << Thread.new do loop do w = work_units.pop Thread::exit() if w.nil? do_some_work(w) end end end # main thread continues while work threads devour work threads.each(&:join)
https://codedump.io/share/C807cMm4b1iY/1/how-do-i-manage-ruby-threads-so-they-finish-all-their-work
CC-MAIN-2017-09
refinedweb
230
79.16
Thanks for the clarification – I’ll try to dig into this and use execute.cpp as a starting point for my problem… Kind Regards, Hubert Advertising From: Guangya Liu [mailto:gyliu...@gmail.com] Sent: Mittwoch, 21. September 2016 09:19 To: user@mesos.apache.org Cc: dev; Vinod Kone Subject: Re: Support for tasks groups aka pods in Mesos The answer is No, the taskGroup can be treated as Pod in Kubernetes, it will be a set of containers co-located and co-managed on an agent that share some resources (e.g., network namespace, volumes). For your case, you may want to split your 50 tasks to small task groups. Also the `execute` cli is just an example framework and it can only launch one task group for now, you may want to enhance it if you want to launch multiple task groups. On Wed, Sep 21, 2016 at 1:41 PM, <hubert.asa...@dlr.de<mailto:hubert.asa...@dlr.de>> wrote: Hi,<tel:%2B49%20%280%29%208153%2028-2894> | fax +49 (0) 8153 28-1443<tel:%2B49%20%280%29%208153%2028-1443> | hubert.asa...@dlr.de<mailto:hubert.asa...@dlr.de><> From: Vinod Kone [mailto:vinodk...@apache.org
https://www.mail-archive.com/user@mesos.apache.org/msg08107.html
CC-MAIN-2016-40
refinedweb
202
65.83
You built your first Lightning web component, everything looks great in the IDE, but something is not working as expected in your Salesforce org. That’s the point where it’s important to know how you can debug Lightning web components. This blog post will show you the available techniques. Before we look into debugging, it’s important to understand how we serve Lightning Web Components to the browser in what we call production mode and what utilities you have at hand for them. That mode is what you experience out of the box when a user uses Salesforce. It provides you two things when it comes to Lightning Web Components: While it’s not related to debugging, it’s also noteworthy that we ship heavy JavaScript transformations for older browsers, like IE11. That way you can use modern JavaScript and actually don’t have to care what browser your users are using. Minification means that we compress JavaScript into as few bytes as possible by removing any unnecessary characters and elements like line breaks, whitespace, tabs, code comments and so forth. This reduces the overall traffic that’s required for sending a file to a browser. Minifaction also changes the names of functions or variables, for example const mySuperVariable can become const d . Every Lightning Web Component JavaScript file that your browser receives from Salesforce in production mode is minified. Without any special setting, you can use the pretty format option in Chrome DevTools (or similar counterpart in your preferred browser) to get some sort of code formatting. This doesn’t give you full readability because of the changed names of variables and functions, but it’s pretty decent for a first check. This code is already debuggable, which means you can set breakpoints, inspect values during runtime, and use Chrome DevTools to work with the debugged values. Note, that you see in the GIF already the location of custom Lightning web components in the Source tab – it’s modules (compared to components for Aura components). Next, we proxy certain things, like data that is provisioned via decorators (@api, @track, @wire). Some of that is to be considered read-only, like @api and @wire decorated properties. By using JavaScript Proxies we make sure that they stay read-only. And for @track decorated properties, we use proxies to observe data mutation. This also means that you only see the Proxy object and you either have to use something like JSON.stringify() in Chrome DevTools (note: if you have an Object with circular references, it’ll throw when stringified), or you have to inspect the object structure itself. Now, you already get some good debugging information in production mode. But what if you want the real cool stuff? Let’s take a look at that. Besides production mode, you can enable debug mode for specific users. Debug mode gives you a few things, and particularly a few things that were not available previously for Aura components: You can read more about debug mode and how to enable it for users in the Lightning Web Components documentation. You can also use Salesforce CLI to create a new user in a scratch org with debug mode enabled using this command: sfdx force:user:create -a mydebuguser UserPreferencesUserDebugModePref=true More information on how to use the force:user:create command can be found in the documentation. Real exciting is that we ship unminified JavaScript in debug mode. What you get in the browser is what you coded (at least in JavaScript + CSS). The mapping from what’s in your IDE to browser is close to 1:1. That’s pretty cool, right? Especially in longer JavaScript classes you don’t have to guess, or remember, what variable d was again — you’ll now see the name as it lives in your source code. And you can debug it, like you can do with minified JavaScript. Note that the JavaScript file in the browser also includes additional code, like transformed decorators or relative imports that got rolled up. Check out the video at the bottom to see more. A tip on debugging data that you receive via decorated properties (@api or @wire): if you bind the decorator to a property you won’t currently be able to debug it. In case you see some behavior that you need to debug, change the property to a function (for @wire) or to a setter (for @api). Then you can debug based on the deconstructed data and error property. As you recall from the previous section, we’re using JavaScript proxies to enforce that certain types of data are read-only. To simplify the readability for you, we’re shipping a custom formatter in debug mode for these proxies. That means you won’t see the proxy object in Chrome, but instead the real value. We’re basically unwrapping the visual aspect of the proxy (but it’s still proxied, so no chance to modify the data, ok?!). Custom formatters aren’t enabled by default in Chrome, so you’ll need to set Chrome DevTools => Settings => Enable custom formatters. The third feature we’re shipping for helping you to better debug your Lightning web components are “LWC engine” warnings. In debug mode the Lightning Web Components engine actually identifies bad patterns that are only detectable during runtime, and then prints them as console warnings. It’s a best practice to develop your Lightning web components in an org using debug mode. That way you see if your code can be even more improved while you develop. There’s one caveat at the time of this writing: you’ll likely see other console warnings from the LWC engine on the console, as we with the initial release don’t filter on your own namespace. Because of that, you see other warnings for things like base Lightning components. Also note that in the current version of the filter in Chrome DevTools has no effect on the logged warnings themselves. Some of the warning messages are also obsolete and will be removed soon, like Property “xyz” of [object] is set to a non-trackable object, which means changes into that object cannot be observed (meaning you don’t have to care about that at the moment). And yes, we’ll improve that in the next few months (#safeharbor). The best way to actually filter for only your own components is to remove the “info” log level which hides the stack traces. This doesn’t mean that you should take a break when your code hits an exception (although sometimes it’s the right thing to do). This means that you can — and should — leverage the functionality in your browser to pause JavaScript execution when an error is caught. When having this feature activated, you’ll find yourself in the situation that your browser will also halt on any exceptions that are not caused by your own code, but by ours. In that case, we recommend that you make use of blackboxing, which is a feature available in Chrome and Firefox. Blackboxing allows you to define JavaScript files to be excluded from pausing, so you will only pause on your own exceptions. Chrome allows to set regular expressions for blackboxed scripts. Using these two patterns (which you can add via Chrome DevTools => Settings => Blackboxing), you will exclude most of JavaScript errors that may be surfaced by other (read: not yours) components. /aura_prod.*.js$ /components/.*.js$ All the information of this blog post is also available as a short walkthrough video in our newly launched Lightning Web Components Video Gallery. Lightning Web Components are not only based on modern web standards: they are also debuggable using standard tooling. While production mode gives you good capabilities for peeking into potential issues, the new and enriched functionality for debug mode makes the experience better. In one of the next releases (#safeharbor), we’ll iterate over the experience and will add new functionality like JavaScript source maps to make debugging even easier. If you want to learn more about how to use the Chrome DevTools, check out the documentation. This website also contains a lot of useful tips to make you even more productive with Chrome DevTools. Once you’re ramped up, you should try your newly gained knowledge by deploying one of the Lightning Web Components apps from the Trailhead Sample Gallery. Another important element that we haven’t yet discussed is how to actually unit test your Lightning Web Components functionality before you deploy them, so that you may not have to debug your code at all. We’ll cover that in an upcoming blog post. René Winkelmeyer works as Principal Developer Evangelist at Salesforce. He focuses on enterprise integrations, Lightning, and all the other cool stuff that you can do with the Salesforce Platform. You can follow him on Twitter @muenzpraeger.
https://developer.salesforce.com/blogs/2019/02/debug-your-lightning-web-components.html
CC-MAIN-2019-39
refinedweb
1,478
59.84
in reply to Re: (crazyinsomniac) Re: Coding superstitionsin thread Coding superstitions sub fred { return ( ( $_[0] eq 'ooyah' ) ? 1 : 0 ); } [download] And I disagree with the lecturer about leaving the last return to dangle -- what he/she suggested is obsfucation, and you're not doing yourself any favours with that approach. --t. alex "Excellent. Release the hounds." -- Monty Burns. 1. Keep it simple 2. Just remember to pull out 3 in the morning 3. A good puzzle will wake me up Many. I like to torture myself 0. Socks just get in the way Results (289 votes). Check out past polls.
http://www.perlmonks.org/index.pl?node_id=137464
CC-MAIN-2016-44
refinedweb
101
77.33
#include <MFnPfxGeometry.h> This is the function set for paint effects objects. PfxGeometry is the parent class for the stroke and pfxHair node. The output geometry for pfxHair and stroke nodes may be accessed through this class. Destructor. Class Destructor Constructor. Class constructor that initializes the function set to the given MObject. Constructor. Class constructor that initializes the function set to the given constant MDagPath object. Constructor. Class constructor that initializes the function set to the given MObject. Function set type. Return the class type : MFn::kPfxGeometry Reimplemented from MFnDagNode. Class name. Return the class name : "MFnPfxGeometry" Reimplemented from MFnDagNode. Get line data for the current output pfx tubes. The passed in arrays will be filled with pointers to MrenderLine classes. If there are no leaves or flowers then the passed in leafLine and flowerLine arrays will be left empty. Arrays are generated for only the specified attributes. This routine creates the memory for the arrays that it computes. This memory can only be released using the deleteArray method on the MRenderLineArray class. DeleteArray should be called for the mainLine, leafLine, and flowerLine variables when done. These variables wrap the returned data and allow access but the MRenderLineArray destructer does not delete this wrapped memory, so one must use MRenderLineArray::deleteArray(). The MRenderLine, MVectorArray and MDoubleArrays returned from MRenderLineArray will point to deleted memory after calling deleteArray, so be careful to only call deleteArray when finished using the line data(or copy the arrays first). Gets the bounding box of the specified geometry. The passed in double arrays will be filled with minimum and maximum coordinates of the geometry.
http://download.autodesk.com/us/maya/2009help/api/class_m_fn_pfx_geometry.html
CC-MAIN-2015-14
refinedweb
269
50.23
Case for Preprocessing Capabilities in the Java Language Java implementation does not have a preprocessor. In this article, we have described what a preprocessor does and what Java offers as a substitute. We have taken the C preprocessor's features and discussed its pros and cons with respect to Java. Finally, we have by a code example illustrated a few powerful features, which can be imbibed by the Java language. What Is a Preprocessor? A preprocessor is a program that works on the source before the compilation. As the name implies, the preprocessor prepares the source for compilation. The notion of the preprocessor has been there from the earliest times of programming languages. The concept of embedded scripting languages and methodologies is derived from the principle of preprocessing, JSP/ASP/PHP engines are essentially like preprocessors. Let us take the example of a JSP page. A JSP page is an HTML page with JSP tags embedded in it. 1: <html> 2: <%@ page import = "java.util.*, java.lang.*" %> 3: <p> The time now is <%= new Date() %> 4: </html> Line 3 within <% %> contains Java code; this Java code is processed by the JSP engine at the server side and the content returned to the browser that requested this JSP page is pure HTML. The HTML is then processed by the browser and rendered. The important point to note here is that the JSP engine worked on the source file and modified it. A formal preprocessor like the cpp, very much like the JSP engine, looks for preprocessor directives and translates the source file for the compiler. The preprocessor directives work like a scripting language for the compiler where the preprocessor is the interpreter. Even though most languages, with the exception of C/C++, do not include a preprocessor in their definition, a preprocessor can be used with any language. For example, the GCC (GNU Compiler Collection) has a -E flag which directs it to only do preprocessing and not compilation. This can be used to preprocess any source file. In this article, we will build a very simple preprocessing program to highlight the need for a small enhancement. Preprocessor and Java Java does not have a preprocessor. Having said that, we will now explore what a preprocessor does and how we do it in Java without it. - File inclusion: The #include directive tells the preprocessor to copy the text of the file specified in the directive to this source file. Java has advanced from the notion of files and modules to classes. A Java class exists with respect to a namespace that is the package of the class. The packaging of classes maps directly to the directory structure of the operating system, so when a class is required to be used in another class, the compiler knows exactly where the class is located if it is found in the CLASSPATH. The import statement is just an aid in name resolution. Instead of using fully qualified class name like java.util.HashMap everywhere in the code, we can simply use HashMap if we have the statement import java.util.* in the beginning of the source file. - Defining symbols: public static final is the Java's answer to #define. As far as defining constants is concerned, the Java's static final is far more superior because there is no type checking associated with #define declarations. Also, because a final variable is defined in a class (properly packaged), there is no chance of name collision. An equivalent of conditional compilation can be achieved by the use of public static final variables in conjunction with a modern Java compiler. - Macro substitution: There is no substitute for a macro in Java. M4 (the preprocessor for RATFOR, Rational Fortran compiler) and cpp are capable of very powerful macro constructs. Take a simple macro: #define ARR_SIZE (sizeof / sizeof([0]))This macro can be used much like a function call where the preprocessor replaces the macro with the actual C code. It can be argued that macros are inherently type unsafe and lead to hard-to-find bugs and a Java method with a powerful compiler does a much better job, still properly written macro is a lot of convenience. - Predefined macros: The ANSI C standard predefines six macros, the ones significant for this discussion are __FILE__ (the name of the source file) and __LINE__ (line number of the source). There is no equivalent of these predefined macros in Java. There is no direct way of knowing which classfile is being executed and no direct way of knowing which line number of the source is being executed in Java. - Rest of the features: The other features of a preprocessor such as Line Splicing, Stringizing, Token pasting, and so forth are not as conspicuous in their absence as to warrant a discussion. Logging Demands One place where it is very important to know the line number and the class/file name is logging and field debugging. In C/C++, we can make use of the __LINE__ macro and let the preprocessor translate it to the correct line number. With JDK1.4 was introduced the logging API in the java.util.logging package. With these APIs, the mechanism of logging has become structured in Java. The LogRecord object has the source class name and the source method name that is either set explicitly or the LogRecord object infers it by analyzing the call stack. Another way to know about the name of the class in question is through the call: this.getClass().getName(). Similarly, the line number also can be accessed by: StackTraceElement ste[] = (new Throwable()).getStackTrace(); int lineNumber = ste[0].getLineNumber(); All these mechanisms use either the stack frames or reflection, which are both expensive and difficult to use. Wouldn't it be a lot easier if we have a __LINE__ macro? It could also fit in the existing code that does not use the new logging framework. A Simple Line Number Preprocessor The following is the listing of a simple preprocessor that takes the name of the Java source file as the argument and does __LINE__ macro substitution. 1: import java.io.*; 2: import java.util.regex.*; 3: 4: public class LinePreProcessor { 5: public static void main (String args[]) { 6: LinePreProcessor lp = new LinePreProcessor(args[0]); 7: } 8: 9: public LinePreProcessor (String filename) { 10: System.out.println("Name of the file is "+filename); 11: Pattern p = Pattern.compile("__LINE__"); 12: Matcher m = null; 13: LineNumberReader lnr = null; 14: File outputFile = null; 15: PrintWriter pw = null; 16: try { 17: outputFile = File.createTempFile (filename, null); 18: pw = new PrintWriter (new FileWriter (outputFile), true); 19: } 20: catch (IOException ioe) { 21: ioe.printStackTrace(); 22: } 23: try { 24: lnr = new LineNumberReader (new FileReader (filename)); 25: } 26: catch (FileNotFoundException fnfe) { 27: fnfe.printStackTrace(); 28: } 29: String line = null; 30: try { 31: while( (line = lnr.readLine())!=null) { 32: m = p.matcher(line); 33: if (m.find()) { 34: line = m.replaceAll (""+lnr.getLineNumber()); 35: } 36: pw.println(line); 37: 38: } 39: pw.close(); 40: lnr.close(); 41: File f = new File (filename); 42: cp (outputFile, new File(filename)); 43: } 44: catch (IOException ioe) { 45: ioe.printStackTrace(); 46: } 47: } 48: 49: public File cp (File src, File dest) throws FileNotFoundException { 50: FileInputStream fis = new FileInputStream (src); 51: FileOutputStream fos = new FileOutputStream (dest); 52: 53: byte buffer[] = new byte [32]; 54: int b_read = 0; 55: try { 56: while (true) { 57: b_read = fis.read(buffer); 58: if (b_read <=0) break; 59: fos.write(buffer,0, b_read); 60: fos.flush(); 61: } 62: } 63: catch (IOException ioe) { 64: ioe.printStackTrace(); 65: } 66: return dest; 67: } 68: 69: } The program on line 11 creates a regular expression pattern "_LINE__", which will be replaced in the subject source file. It then uses the LineNumberReader class from the standard Java IO library to read the file specified on the command line (line 24). Another interesting thing to note is outputFile = File.createTempFile (filename, null); on line 17. Here we create a temporary file that is guaranteed to have a unique name. The program then reads one line at a time and replaces the __LINE__ macro with the actual line number. Finally, the program uses the cp (copy) function (line 49) to copy the temporary file created to the actual source file (one that was given on the command line). Let's say we have a source file LineNumberTest.java : public class LineNumberTest { public static void main(String args[]) { StackTraceElement ste[] = (new Throwable()).getStackTrace(); System.out.println(ste[0].getLineNumber()); System.out.println("The line number here is __LINE__"); LineNumberTest tt = new LineNumberTest(); } } After having compiled our LinePreProcessor.java, we can call the preprocessor on our LineNumberTest.java: java LinePreProcessor LineNumberTest.java The resulting LineNumberTest.java becomes: public class LineNumberTest { public static void main(String args[]) { StackTraceElement ste[] = (new Throwable()).getStackTrace(); System.out.println(ste[0].getLineNumber()); System.out.println("The line number here is 5"); LineNumberTest tt = new LineNumberTest(); } } The __LINE__ string is replaced by 5, which is the correct line number. Case for Compiler Enhancement In order to use this, we must have a separate staging area for the source files. The CVS will have the version that contains the __LINE__ directive and the source files will be converted to Java files with __LINE__ substituted with actual line numbers in the staging area. Then finally, the Java files can be compiled to the classfiles. Evidently the tedium of the staging area has taken the fun out of our toy program, but that apart, isn't the __LINE__ macro a powerful feature? And what's more: Even with spoofing (as in LogRecord ) and code obfuscation, the line number information will always be there because it becomes part of the code. It will not be very difficult for the Java compiler writers to predefine a few macros that the developers can then use in the code. As discussed above, in most of the cases Java does not actually need a preprocessor; however, adding a few macro substitution features in the compiler itself can give greater flexibility and power to the developers. Suggested References The Java logging overview. A comparison of programming languages. Java syntactic extender.<<
http://www.developer.com/java/other/article.php/1567101/Case-for-Preprocessing-Capabilities-in-the-Java-Language.htm
CC-MAIN-2013-48
refinedweb
1,688
63.49
: > > > <HTML xmlns:ns3="" xmlns:ns2=". > > com/moreover" xmlns: > > <BODY> > > <page xmlns=""> <slashdot > > > > <table width="100%" border="0"> > > <tr > > > <td STYLE="background-color : #B0E0E6; font : x-small > > Arial, Helvetica, > > sans-serif;" colspan="5"> > > <CENTER > > > <b >Current News from Slashdot</b> > > </CENTER> > > </td> > > </tr> > > <tr STYLE="background-color : lightgrey; font : x-small > > Arial, Helvetica, > > sans-serif;"> > > <td> > > <CENTER> > > <IMG BORDER="0" HEIGHT="25" WIDTH="25" SRC="topichumor.gif" > > > > I suspect <page> and <slashdot> shouldn't have them either, and the three > namespaces at the beginning don't need to be there if they are not used > later on. This is a SAX problem. How to know a declared namespace isn't used afterward? And as the definition of the aggregation states that there should be namespaces attached to the page and slashdot elements they should appear there. Sorry, I still don't see what the aggregator is doing wrong. Giacomo > > Stuart. > > > On Wednesday, May 2, 2001, at 07:12 pm, giacomo wrote: > > > > > Hi Stuart > > > > Thanks for your test. I'm in no way a namespace expert, so, could you > > give me some suggestion on what we have to change to make it right? > > > > TIA > > > > Giacomo > > > > On Tue, 1 May 2001, Stuart Roebuck wrote: > > > >> Giacomo, > >> > >> Sorry for being a bit brief. Here's the first 15 lines of the HTML > >> output > >> from Cocoon (CVS about 12 hours ago), looking at the "news/aggregate.xml" > >> match. The current defaults use the xincludesaxconnector: > >> > >>> <HTML xmlns: >>> xmlns:ns2=". > >>> com/moreover" xmlns: > >>> <BODY> > >>> <page xmlns=""> <slashdot > >>> > >>> <table xmlns="" width="100%" border="0"> > >>> <tr xmlns=""> > >>> <td xmlns="" > >>> STYLE="background-color : #B0E0E6; font : x-small Arial, Helvetica, > >>> sans-serif;" colspan="5"> > >>> <CENTER xmlns=""> > >>> <b xmlns="">Current > >>> News from Slashdot</b> > >>> </CENTER> > >>> </td> > >>> </tr> > >>> <tr xmlns="" > >>> STYLE="background-color : lightgrey; font : x-small Arial, Helvetica, > >>> sans-serif;"> > >>> <td xmlns=""> > >>> <CENTER xmlns=""> > >>> <IMG xmlns="" > >>> >>> > >> > >> As you can see, the 'myspace' and 'slashdot' namespaces have been > >> attributed to HTML tags for no apparent reason. If you revert to the old > >> saxconnector, these go away. In most browsers this doesn't seem to make > >> much difference, but it confused OmniWeb enough to alert me to something > >> being! > >> :-) > >> > >> Stuart. > >> > >> > >> On Tuesday, May 1, 2001, at 10:07 am, giacomo wrote: > >> > >>> > >>> > >>> On Tue, 1 May 2001, Stuart Roebuck wrote: > >>> > >>>> There are also problems with the almost arbitrary attributing of > >>>> namespaces to elements processed with the new xincludesaxconnector. > >>>> Look > >>>> at the resulting source output of the aggregation example in CVS and > >>>> you'll see what I mean. > >>> > >>> Stuart, could you please explain to me what you've found is the problem > >>> in more detail so that one can use it to correct the behaviour? > >>> > >>> TIA > >>> > >>> Giacomo > >>> > >>>> > >>>> Stuart. > >>>> > >>>> On Monday, April 30, 2001, at 10:45 pm, Donald Ball wrote: > >>>> > >>>>> my issues: > >>>>> > >>>>> 1. why should this be a saxconnector instead of a filter? > >>>>> > >>>>> 2. the schema upon which it operates doesn't conform to the official > >>>>> xinclude spec, but it operates on the official xinclude namespace. > >>>>> one > >>>>> or > >>>>> the other needs to change. specifically, at the least, the src > >>>>> attribute > >>>>> should be an href attribute and the ns and prefix attributes don't > >>>>> exist. > >>>>> > >>>>> - donald > >>>>> > > ------------------------------------------------------------------------- >
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200105.mbox/%3CPine.LNX.4.31.0105032106360.1499-100000@lap1.otego.com%3E
CC-MAIN-2015-14
refinedweb
518
60.14
Access cloud data in a notebook. Doing interesting work in a Jupyter notebook requires data. Data, indeed, is the lifeblood of notebooks. You can certainly import data files into a project, even using commands like curl from within a notebook to download a file directly. It's likely, however, that you need to work with much more extensive data that's available from non-file sources such as REST APIs, relational databases, and cloud storage such as Azure tables. This article briefly outlines these different options. Because data access is best seen in action, you can find runnable code in the Azure Notebooks Samples - Access your data. REST APIs Generally speaking, the vast amount of data available from the Internet is accessed not through files, but through REST APIs. Fortunately, because a notebook cell can contain whatever code you like, you can use code to send requests and receive JSON data. You can then convert that JSON into whatever format you want to use, such as a pandas dataframe. To access data using a REST API, use the same code in a notebook's code cells that you use in any other application. The general structure using the requests library is as follows: import pandas import requests # New York City taxi data for 2014 data_url = '' # General data request; include other API keys and credentials as needed in the data argument response = requests.get(data_url, data={"limit": "20"}) if response.status_code == 200: dataframe_rest2 = pandas.DataFrame.from_records(response.json()) print(dataframe_rest2) Azure SQL Database and SQL Managed Instance You can access databases in SQL Database or SQL Managed Instance with the assistance of the pyodbc or pymssql libraries. Use Python to query an Azure SQL database gives you instructions on creating a database in SQL Database containing AdventureWorks data, and shows how to query that data. The same code is shown in the sample notebook for this article. Azure Storage Azure Storage provides several different types of non-relational storage, depending on the type of data you have and how you need to access it: - Table Storage: provides low-cost, high-volume storage for tabular data, such as collected sensor logs, diagnostic logs, and so on. - Blob storage: provides file-like storage for any type of data. The sample notebook demonstrates working with both tables and blobs, including how to use a shared access signature to allow read-only access to blobs. Azure Cosmos DB Azure Cosmos DB provides a fully indexed NoSQL store for JSON documents). The following articles provide a number of different ways to work with Cosmos DB from Python: - Build a SQL API app with Python - Build a Flask app with the Azure Cosmos DB's API for MongoDB - Create a graph database using Python and the Gremlin API - Build a Cassandra app with Python and Azure Cosmos DB - Build a Table API app with Python and Azure Cosmos DB When working with Cosmos DB, you can use the azure-cosmosdb-table library. Other Azure databases Azure provides a number of other database types that you can use. The articles below provide guidance for accessing those databases from Python: - Azure Database for PostgreSQL: Use Python to connect and query data - Quickstart: Use Azure Redis Cache with Python - Azure Database for MySQL: Use Python to connect and query data - Azure Data Factory
https://docs.microsoft.com/en-us/azure/notebooks/access-data-resources-jupyter-notebooks
CC-MAIN-2020-34
refinedweb
553
57.4
Hello all. I've been staring at this program I need to do for the better part of two days now. I'm relatively new to coding, but have had almost no trouble at all until this. After reading up as much as I could on classes and objects, I still can't figure out exactly what the syntax is that I need to use. Basically I need a die object called into a diepair class and then that called into a simulation class. It needs to tally up the totals for each die roll. I've been given the starting code and have even put in the directions I was given: I'm sure this is relatively simple and yet still can't seem to figure it out. import java.util.*; public class Die { private Random rgen; private int face; public Die(Random gen) { //needs to have a stored reference to the random # generator and a face value between 1 and 6 } public void roll() { face = rgen.nextInt(1-6); //needs the random integer generated and stored in the face field } public int getFace() { return face; //face needs to be returned } } import java.util.*; public class DicePair { public Die die1, die2; int doubles=0, sum; Random rgen; public DicePair (Random gen) { Die die1 = new Die(rgen); Die die2 = new Die(rgen); //construct two die objects and pass the specified random number generator to them } public void roll() { //roll them } public int getSum() { sum = die1(getFace) + die2(getFace); //return the sum of the face values } public boolean isDoubles() { if(die1(getFace) == die2(getFace)) doubles = doubles + 1; //return true if double } } import java.util.*; public class Simulation { private Random gen; private DicePair pair; private int[] tally; private int doubles; public Simulation() { DicePair pair = new DicePair(); tally = new int[2-12]; //create the random number generator, dicepair object, and array to tally the sums } public void simulate(int times) { doubles = 0; for (count = 0; sim <= 200; ++ count) { } //initialize tallies and doubles to zeros, simulate rolls, and tally the number of rolls and count doubles } public void report() { System.out.printf("%nSum Tally%n"); for (count = 2; count < 13; ++count) { System.outprintf ("%3d %8d ", count, tally[count]); for (star = 0; star < tally[count]; ++star) { System.out.printf("*"); } System.out.printf("%n"); } //print a report of the simulation } public static void main(String[] args) { Simulation sim = new Simulation(); sim.simulate(200); sim.report(); } } If someone could help me out here, I don't know if it's the syntax that's confusing me or what, but it's been driving me crazy.
http://www.javaprogrammingforums.com/whats-wrong-my-code/12520-need-help-coding-dice-rolling-simulation.html
CC-MAIN-2015-06
refinedweb
428
52.12
"Neil Schemenauer" <nascheme at enme.ucalgary.ca> writes: > Not for my problem on Linux. My code didn't call sleep(). It > may be a bug with the pthreads in libc6 for Linux. I can't > reproduce it with C code though. I think I can explain what happens. Look at the following code: #include <stdio.h> #include <string.h> #include <unistd.h> #include <pthread.h> void * thread(void * v) { for (;;) { char buf[40]; int len = sprintf(buf, "%lu\n", (unsigned long) getpid()); write(1, buf, len); sleep(1); } } main() { pthread_t t; pthread_create (&t, NULL, &thread, NULL); fork(); sleep(3600); } This clearly shows that the thread is not duplicated on fork(). Now look at the Python source code: static PyObject * posix_fork(self, args) PyObject *self; PyObject *args; { int pid; if (!PyArg_ParseTuple(args, ":fork")) return NULL; pid = fork(); if (pid == -1) return posix_error(); PyOS_AfterFork(); return PyInt_FromLong((long)pid); } void PyOS_AfterFork() { #ifdef WITH_THREAD main_thread = PyThread_get_thread_ident(); main_pid = getpid(); #endif } long PyThread_get_thread_ident _P0() { volatile pthread_t threadid; if (!initialized) PyThread_init_thread(); /* Jump through some hoops for Alpha OSF/1 */ threadid = pthread_self(); return (long) *(long *) &threadid; } In the child process, all threads have disappeared, but the code doesn't seem to be prepared to handle this.
https://mail.python.org/pipermail/python-list/2000-March/031693.html
CC-MAIN-2017-17
refinedweb
198
67.15
By: Deepak Shenoy Abstract: Hovering over a file in Explorer brings up a "hint" window. The Infotip shell extension allows you to customize the text displayed. Deepak Shenoy shows you how it's done. By Deepak Shenoy Windows 2000 (and Windows 98 with IE 5 desktop integration installed) gives us a new Shell Extension - the Infotip. This is a hint window that pops up when you hover over any file. The standard hint shows the name of the file and the size, but you can customize this, based on the extension of the file. There is a default Infotip extension for Microsoft Word and Excel documents - you can see the name, author and title of the document in the infotip. Here's what a standard infotip looks like (Windows 2000): What we will do now is make an infotip for Delphi form files (DFM files). Here's what we hope to achieve: You'll find the full code for this article at and at. An Infotip shell extension is a Windows DLL that: 1.Implements IQueryInfo and IPersistFile 2. Registers itself in the registry. This step is slightly different from other shell extensions. Implementing IQueryInfo and IPersistFile IQueryInfo gives the text of the Infotip that's shown in the hint window. IPersistFile is what the shell uses to give you information about which file the user is hovering over. First we'll create a new automation object. Let's call it DFMInfoTip. Here's what the dialog looks like (after selecting File|New...|ActiveX|Automation Object): This generates a simple type library and an implementation for the IDFMInfoTip interface that's automatically generated. In this file, let's support the other interfaces: TDFMInfoTip= class(TAutoObject,IDFMInfoTip,IQueryInfo,IPersistFile,IPersist) Note: We need to implement IPersist also because IPersistFile inherits from IPersist. Now let's see some code. IPersistFile has a function called Load, which is called to give us the name of the file that the mouse is hovering on. function TDFMInfoTip.Load(pszFileName: POleStr;dwMode: Integer): HResult; begin FFile := pszFileName; Result := S_OK; end; For the rest of the functions in IPersistFile (and IPersist) we'll return E_NOTIMPL. Now let's look at IQueryInfo, which has just two functions. GetInfo is called by the shell to retrieve the text of the infotip. Here's how I've implemented it: function TDFMInfoTip.GetInfoTip(dwFlags: DWORD; var ppwszTip: PWideChar): HResult; var szTip : string; begin Result := S_OK; // The current file name is in FFile through IPersistFile szTip := GetDFMInfo; ppwszTip := pMalloc.Alloc( sizeof(WideChar)*Length(szTip)+1); if (ppwszTip <> nil) then StringToWideChar(szTip, ppwszTip, sizeof(WideChar)*Length(szTip)+1); end; The function GetDFMInfo extracts some information out of a DFM file. First, we figure out if it's a binary or text dfm: fStream := TFileStream.Create(FFile,fmOpenRead or fmShareDenyNone); slStrings := TStringList.Create; try fStream.Position := 0; pFirst := @First; fStream.Read(pFirst^, 1); fStream.Position := 0; if First = $FF then // binary DFM begin Result := Result + 'Type: Binary'; inStream := TMemoryStream.Create; ObjectResourceToText( fStream, inStream ); inStream.Position := 0; slStrings.LoadFromStream(inStream); inStream.Free; end else begin // Delphi 5's text DFM slStrings.LoadFromStream(fStream); Result := Result + 'Type: Text'; end; szText := szFullText; //get the caption iPos := Pos(' Caption = ''', szText); if iPos <> 0 then begin Inc(iPos, Length(' Caption = ''')); iEnd := iPos; while true do begin if (szText[iEnd] = '''') then if (Copy(szText, iEnd+1, 4)='#39''') then Inc(iEnd, 4) else break; Inc(iEnd); end; Result := Result + #13#10 + 'Caption: ' + szText ; end; The code included with this article gets the caption, width and height of the selected file. Registering an Infotip Extension To register an Infotip Extension, you must: The default value for this key must evaluate to the CLSID of the COM object implementing the shell extension. I've put in the CLSID of the DFMInfoTip object implemented. Note: The CLSID {00021500-0000-0000-C000-000000000046} is the IID of IQueryInfo. Of course, under Windows NT and 2000, all shell extensions must be "approved." This is obvious only if you log on as user other than administrator. The registry key you must create in order to "approve" a shell extension is: HKEY_LOCAL_MACHINE SOFTWARE Microsoft Windows CurrentVersion Shell Extensions Approved Under this key, create a new string value with the name of the shell extension CLSID (in this case, {A6614304-6DFB-4A31-8032-C4E0CCA42D81}) and assign it any description text. That's about all you need to do. The code sample has two .REG files that you can run to import the correct registry entries. Why two? I found that the Windows 2000 regedit tool exports the registry as Unicode instead of ASCII - so there's one file for Windows 2000 and one for Windows 98. Deepak Shenoy(shenoy@agnisoft.com) is a director at Agni Software, a software company in India that offers consultancy and offshore development solutions. Deepak is currently working with Windows 2000, COM+ and XML. In his spare time he tries to play the guitar, which is why he is not allowed too much spare time. Server Response from: SC3
http://edn.embarcadero.com/article/22987
crawl-002
refinedweb
836
65.12
At 09:26 PM 4/21/2010 +0200, Manlio Perillo wrote: >But I do not want to use a feature that it is here for compatiblity >only, in a new project. Python itself has supported namespace packages through a stdlib utility since Python 2.3, and special import mechanism support has been proposed for addition in 3.2. Zope may have developed the idea, and setuptools made it more accessible for people to use, but it's been a standard Python feature all along. The new support in 3.x will also allow setuptools and its clones to drop some of their hackier bits of implementation, and hopefully make the adoption of namespace packages even more widespread. While the Zen of Python says that flat is better than nested (i.e., "zope.*" vs. "org.zope.*" as would be done in Java), it also says that namespaces are a great idea, so let's have more of them. ;-)
https://mail.python.org/pipermail/distutils-sig/2010-April/015992.html
CC-MAIN-2016-50
refinedweb
157
69.72
I originally wrote this over two years ago. It was getting a little long in the tooth, especially now that HTML5 has come along and made HTML far more beautiful than even XHTML 1.1 was. So I updated it!. Large PNG Original PSD Text of HTML It’s big enough to print out and tape up inside your locker to impress your friends. Well, it might be a bit of an awkward size. I’ll make the PSD available in case you want to alter it. -. Very good article. I like the idea of putting an id on the body, have not thought of that before. I have a website im creating at the moment which has a different footer, so that could have been handy :D Nice work Chris. Here’s another useful link for your readers . . . I love clean code no matter which language I use – the only problem is when you have a looming deadline and you catch yourself farting around with beautifying your code rather than completing that important requirement !!!.” I’m not sure if it makes a difference for seo (and personally, I think seo is just another trendy bandwagon people hop on for whatever reason — write good, valid code and good content and the rest follows), but… It makes a WORLD of difference for accessibility. Ever try to access a website, sight unseen, using a screenreader? Do you really need to plow through a 50-member list of a blogroll before you get to the actual article? Important content first is also just common sense. I mean, why would you not do that? Is there any valid reason, any AT ALL, for NOT doing it that way? I second that, David. “SEO” should be “BO” (Browser Optimization). Very few souls know what a Search Engine Is looking for anyway. Like you said “write good, valid code and good content and the rest follows”.. A good Design company should optimize every web page that they upload. Use simple tags like title, description and ALT and everything will be fine. The HTML 5.1 spec now has the element ‘main‘ to solve this problem of differentiating the main content of your page from header, footer, nav panel, and so on.”. I think it’s extremely important to keep your code tidy, not just for yourself but anyone else in the future that may have to work on the site. I’ve been handed sites where the time is nearly doubled simply due to the fact that somebody clearly didn’t care for any type of order. Even if you do use a CMS, it will most likely be using templates that you will at some point want to make an edit too. Surely you must be joking. Your code formatting is critical to maintaining easily managable files. Even with a CMS you often have to make changes on the template level, and you’re digging all around messy code, looking for that list item in a sea of chaos, you’re going to wish you were more organized. The quality of your code formatting is a clear indication of the diligence and thoroughness of the coder. Plain and simple. The issue here is just “when.” Tabbing is for humans. Browsers just want tight, valid code. So, you’re both right. Ever heard of code tidy tools u noob?? You use the id attribute on body tag as shown in the post, and the reference that div accordingly. @smickworks: I’d said by creating separate stylesheet for just one page you loose the benefit of having CSS in cache, but still — content should be separated from presentation, no matter if it only does appear on a single page or is common. You can put all pages css in one file. It’s just a very long css file (depending on the site). Do this new decriptive blocks support non HTML5 browsers? Nice writing ;) Title should go AFTER the meta tag with the encoding, because if the title contains non-ascii chars, you may be seriously screwed — IE can show an empty page. Good point! Doesn’t valid HTML5 require that the charset be first in the head, anyway? No. It doesn’t have to be the first one. It just has to be within the first x characters or lines or something (I don’t remember exactly). But there’s really no reason not to put it first IMO. Very useful article, thanks especially for the body id. I’ll use this as a checklist for my future websites.? 1: We shouldn’t. Use a template system instead, personally I like PHPTAL 1: Why one file? Different application tiers have different MVC uses, you can have views on a database tier, but those views form the model for the tier above. Same goes for the client tier, the server serves its view (the HTML) and that is the browser page’s model. Presentation should be governed by CSS if applying MVC. 2: You can use the heredoc syntax in PHP, and your PHP will be a lot cleaner and very readable. 3: That’s application logic, you can always try to format and validate it, should be fun.! Like you, I view source of sites I visit, just to see if competitors do clean code as clean as I can do.. Nice list! I wonder if there is a tool that can check my website and give me hints how to improve it. Or is there even a tool that helps transforming a table based design into a clean div design? Any hint would be highly appreciated! @Inwit, Yes there is such a tool, the human brain and its motivation for standards and clean code ;-) Seriously: I think it is much better to redesign/refactor code by hand than to let software do it. I rarely trust machine-generated code. Case in point: Horrid Dreamweaver and Frontpage sites… *shudders* I have the same kind of addiction, though I more often do a CTRL+SHIFT+S in firebug. One could argue that a semantically clean menu doesn’t need a wrapped around it. Enjoyed the article.… In this case, using an id on the body element makes sense. It’s used to indicate the specific page being loaded. this can be useful if you need to override the default CSS for a single page, now you have an id you can use in your selector statement. I disagree. A body only appears once, so there’s no reason you can’t use an ID instead of a class. As far as why you should use an ID instead of a class is stated in the article/image. It’s for applying particular unique styling to a particular unique page. Now, if one were to have a subset of multiple pages styled this way, a class would be appropriate. Say, for example, you want a certain different look for your online catalog versus your blog. All the pages for the catalog could have a body class of “catalog” and get a green background. But then, you have a page in the catalog of sale items, give that body an id of “red_hot” and style the background red, since it’s unique. Some of this is unnecessary when using a CMS, but if you want to avoid loading extra stylesheets, or you want to give special treatment via JavaScript, an id and/or class on the body can provide a very useful hook into your markup. There’s only one body-element in the document and inside the html-element, so you can identify it with an ID. It doesn’t matter, how many documents exist in your website. . Hey, i love the article.. I was wondering if you could show me a HTML of a text box. I need it for somthing, but of course i don’t know what it looks like.. Thanks! -Ashley !!! 1) You’ve presented zero reasons why this kind of markup would not work. It would simply have more elements, and probably more dynamic content. 2) Commenting is mentioned and highlighted in both the image and the article. To be honest, html comments are much less important that code comments. And by code, I mean real code (PHP, JavaScript, etc), not markup (HTML). The only reason to use an HTML comment (as illustrated in the article) is if it’s not completely self evident what the code is doing. Like the php includes. Example: You can see the site, and see the markup… any real reason you need a comment that says “this is the main navigation menu” for a UL element with the class of “main_nav_menu”? You clearly have not done any big project so you don’t know. Go to bbc.co.uk, nytimes.com, msn.com or any other large and well known website and then tell me bout beatuful semantic markup re ‘wow’: Allow me to introduce you to the ought/is fallacy. What is ain’t necessarily what ought be. And the reverse. Stop trolling. Commenting…is that the bracket-exclamation point-thingie I do to leave notes for myself so I know where the heck I am in the midst of all my gobbledegook?? I live by those. ;-) I really don’t think I can agree with you more! Also, “writing HTML/CSS” != “programming” It’s just markup. If you’re smart with your namespace and use descriptive id’s and class names, there’s rarely a need for HTML comments, IMHO. That is a beautiful image of what beautiful HTML code looks like! Thanks. Good example how to code in HTML. Thanks. I only add that title tag are very important and I agree with Chris Coyier that title tag should go after meta tags. The body id can be done automatically with php: <body id="<?= basename($_SERVER['PHP_SELF'], ".php")?>"> Particularly useful if you’re using a template.. “Why change your level 2 headings from page to page – it just confuses the user.” The reason is that level 2 headers may all mean the same thing across your site, but that doesn’t dictate that they should all look the same.? Heading levels are announced by screen readers, they can navigate by them too. So if you’re listening to a page, it starts by reading all the guff in the header, you can just press the key for H1 or H2 to skip the jibber-jabber and start hearing the content actually relevant to the page. Consistent use of heading levels across your site will be a godsend to a blind user. You do have some nice clean html and some great recommendations. I would argue that you left out one more thing. The html should also be minimized.’d.. First, I completely disagree with nearly every single thing you just said. # DOCTYPE Properly Declared A) Doctypes exist for a reason. If you don’t understand, perhaps you should google. # Body IDed A) Explained in an above comment of mine. It’s for control over, and hooks into, your markup. # Semantically Clean Menu A) Everything should be semantically clean. This isn’t just about your “made for IE 6” site looking alright or your Frontpage/Dreamweaver code working… it’s about longevity, stability, and compatibility of markup. # Main DIV for all Page Content A) This is, currently, the only way to solve certain design problems like fixed-width, centered websites. # Important Content First A) Again, previous comment. Accessibility. Screenreader users shouldn’t have to listen to your whole blogroll or list of probably unrelated corporate services just to get to an article. And, why NOT put the important content first? I’ve yet to hear a valid reason why you would purposefully not do this, other than developer laziness. # Proper Ending Tags A) An image tag is, by definition, a self closing tag. Like link tag and meta tags. Not sure why you’re arguing with the rules. It’s like saying, yeah, but I hate using if/else code in programming… i’d rather use turtle/shell o_0 # Hierarchy of Header Tags A) People who blindly read and follow SEO crap are their own worst enemies. Valid, semantic code, and useful, relevant content are the only two things you ever need to worry about. The rest is all diminishing returns, especially if you’re paying some “SEO expert” to muck up your site. # P vs BR is more of a style choice. A) No, actually it’s not. It’s a semantic and structural choice. If something is a paragraph, you use the P tag, if you just want to break a line of text, use the br tag. Pretty simple, really. # putting too much of your formatting in css will just get in your way and increase code complexity A) No, just plain wrong. Using table for something that’s not tabular data is just ignorant. Using CSS doesn’t complicate things, at all — it greatly simplifies things. Ever hear of DRY? Reusable code? Modularity? Ease of maintenance? # put the CSS into a style attribute right where it gets used… reeks of religion not fact A) That’s potentially the dumbest thing I’ve ever heard a supposedly smart person say. Second, I lost all hope in have meaningful debate with you on the topic once I saw this: “wasting my money on “proper” formatting that only served to complicate and obfuscate the code ” Proper formatting obfuscates the code? Really? I wouldn’t mind being fired by you for that reason, but it also makes me really, REALLY thankful I don’t work for someone like you. @Robert You’re kidding, right?! Wow. I can’t believe you took the time to write that. What a waste of energy. You couldn’t be more wrong. I think Robert was just joking, right? I mean, everything he said was clearly way, way off. Then I went and looked at his website. I’m still laughing my arse off. HAHAHA ? Thomas, don’t loose your energy with Robert, we obviously don’t live in the same world ;-) At best it’s a troll, at worst he really believe what he is saying… He’s precisely the kind of client that every freelancer should fear… He thinks he’s smart enough to not only do your job, but do your job better. In short: Dangerous. ’re not going to use the characters in it? Just put the © symbol directly in your code, a practice that has it show up properly as long as you’ve. If you are not able to save your files utf-8 then you shouldn’t be web devoloping (or learn harder). If you know what your editor does (and your web server) then there’s no point in writing encoded entities. Amen to that. Why should one use these horrible encoding strings instead of saving your files UTF-8 (no BOM) encoded. Saves you a lot of trouble. You’d want to include the content before, say, your sidebar, using a typical blog as an example. The main reason is Accessibility. A person with a screenreader (which reads linearly down the page) should not have to skip through your 50-item list of categories and blogroll just to get to your article. Hey, if you use Haml for all your HTML needs, you’re already half way there!. What?! Jpeg is a supported web image! Thanks for letting all the dummies out Chris, lmao! Very good article! I also had never thought of using an id on the body :) Clean code isn’t a hobby, chore or job, it’s a way of life. I can’t stand it when I see a horribly coded website that looks good, it just ruins my admiration for the developer who created it. By the way, I view sourced your site as well because I am a web developer myself and do exactly the same thing. Your site has some pretty clean code my friend, good job. If only every would be web developer read this post.. You could simplify your IE CSS a lot more by using conditional comments to render divs with ID of the IE version and a class of IE. This way you can keep all of your styles together in one stylesheet. Makes it a lot easier to manage. Look at the source code of newsweek.com to see what I mean. I’m assuming you’re using a reset stylesheet in your main.css file. This is fine for a small site, but bigger sites need stylesheets broken out more for flexibility. With HTML5 you do not need to declare a “type” attribute. Does HTML5 is now supporting php type tags? :) Anyway, good work :) Why are JS files declared at the bottom of the page? They should really be in the head. Thanks, This is common practice so that loading JS won’t potentially slow down the rendering of the whole site. As stated to prevent blocking (slow down), that said you can use parallel loading of your JS files, whilst still in the head of your document to achieve the same thing. I personally use parallel loading, in the bottom of my document, not only ensuring faster loading of content, but faster rendering and zero blocking. In fact my scripts appends my scripts into the head of the document, which means it still looks correct in things like firebug. x The only thing I would be inclined to do differently, is the location of your H1 tag. I normally put it around the site logo or title, if it has one. In this case it would come in the line that says: That way, if your page is viewed or printed without styles, you will still have a nice big bold H1 site title at the top of your page. The remaining headings can then start with H2, H3, etc. It may not be strictly semantically correct, but isn’t H1 the closest thing we have (apart from TITLE, which does not belong in the body) for semantically describing the site title? I thought it might also be useful for SEO purposes, but I could be wrong, if the title tag has the same content and takes higher precedence for SEO. Anyone know the answer to this? I used to be 100% on board with that but now I disagree. I think logos should just be plain old anchor links and <h1>’s should be reserved for whatever the main heading for that page is. Just a personal preference. I want call out the #1 most important thing on the page being the content on that page, not the logo. I would have to agree with Chris, as content is king and logo is just secondary. H1 tags should definitely be used on content titles not logos…. If anything, I’d put a blog’s tagline in an h1, and leave the name and logo in regular ‘a’ and img tags. Even then, you’re wasting an h1 tag, so be careful where you choose to allot it. Your H1 is possibly the most important unique content element from an SEO standpoint. Page Title helps as well, but more from a user viewing SERPs standpoint than a a crawler looking at your content and determining its relevance. Anchor the logo, title it and leave it. “Introduce” your page clearly with the H1 and you’re set. In my opinion to increase the relevancy of your keywords, your title and main heading should be almost same. so there will not be the issue that where we should put H1 as title and H1 serve the same purpose i.e. describe the content of the page Enjoy the example. Small typo in the title bar of the _image_ caught my eye, when everything else is so carefully presented. “beatiful-code” doesn’t matter, but fyi. Chris, Just a question about indentation. Do you use tabs or spaces to do it? As far as the final product, as much as I would like to keep my code nicely organized, there is always a piece of content (i.e. CMS or some server-side process) that will mess it up. Cheers I don’t think the end-result needs to be perfectly formatted as compared to the templates you’re using. Machines don’t care how it’s formatted. You, on the other hand, need to be able to read and maintain the code. I, too, have formatting issues when I include certain things, but my view templates are totally clean, so that’s the perfect solution for me. You use UTF-8 and encode special characters.. why is that ? I was about to ask the same. Surely if you are writing your pages in UTF-8 then it makes sense to write the raw character rather than the character code? It makes sense from two points of view: you aren’t inflating the file size with stuff you don’t need; and you can read what you’ve written without having to parse character codes in your head! If you’re going to write all the character codes out, then there’s really no point in using UTF-8. I’d love to use the extensible tags like section</code and asidebut I though that IE6 wouldn't pick them up. Can anyone confirm or deny this? IE6 won’t see them without a JavaScript shiv -> IE 6 doesn’t do many things correctly, and it sure doesn’t support HTML5 without some external help from JS. Awesome! I need to learn HTML5. The great thing about that is that it isn’t much to learn, HTML5 is quite similar to XHTML 1.0 – 1.1 :) Hi, I like your description of beautiful HTML code, but I’d like to point out one addition to HTML5 which makes this even prettier. Defining the charset in HTML5 is possible like this: An explanation for why this works can be found in this public-html mailinglist message (via Mark Pilgrim). Yikes, that didn’t work at all—code didn’t show up. Let me try again (and feel free to merge this comment with the previous one): <meta charset="utf-8"> Ah, and secondly, there is no need to escape any character except for < and &. Code looks much better with only the necessary characters encoded. Thanks for updating this, Chris! It’s replacing the old one on my wall. Great stuff, Chris. I have one question that might be my own oversight. I noticed you added a class to the footer tag which implies it should be styled, rather than a div called footer. But without a javascript createElement shiv, IE7 and IE8 don’t recognize the footer tag. Are the conditional IE CSS calls in the head meant to accomodate for that? I went back and forth on the shiv… I chose not to include it because it’s kind of a temporary thing and this article isn’t really about that… About the class though, it doesn’t need that class in order to be uniquely styled. Notice that class name is shared with many other elements throughout the page (“container”). That’s the class name I use for the clearfix. This is nice for pure HTML. However, when there is web scripting involved (i.e. PHP), there tends to be output that is not properly indented. You can still fix that by proper indenting within the PHP code. It’s harder, and more time-consuming, but it’s still possible. :( i wish my html looked like this. it’s getting better, but right now its kind of messy. I don’t have my title first, and my javascript is in the head. I’ll porbably move it to the footer eventually, but ow much of a deal is it? I have document.ready, so how much different does it make? I’m not trying to contentious or anything, I’m genuinely curious. Hey Chris, one more thought: It’s unprintable! Even on 11×17 tabloid paper, the descriptive text is about 4pt. I hacked your PSD and significantly cropped the code sample window and then pushed the right column to the left one full column-width. Now, on 11×17, the type is legible. Thanks again for making this available! Art Just wondering why you would “encode special characters” if you are already specifying utf-8? I don’t buy the “hard to tell what your editor is using” argument because… well… it’s not hard to tell what your editor is using. Being able to copy and paste provided content into code without replacing anything is pretty nice… For the most part I like your HTML layout. I would say that the head and body elements should be indented inside the html element though to be consistent. You are probably right, it’s just for some reason over the years I’ve always left those hard-left. Really no idea why but anything else looks funny to me now. I don’t know if this has been mentioned but if you’re using html5, and you want it to wwork in IE, then you need to include this to style the elements This addresses a question I had: What problems will I run into with browser support if I start using these tips. I am assuming that the new tags might not be supported in older browsers. I would love to start implementing all of this if I knew what kind of browser support was available. Any one care to enlighten me? excellent tips article! I was aware of several..but definitely not about all of them! beautiful! thanks! Script tags belong in the section, not strewn about wildly in the page body. If you want code to run after page load or domready, then use the correct event handlers. Those script tags aren’t “stewn about wildly,” they’re specifically put at the bottom of the page so they don’t block other parts of the page from downloading, thus decreasing the amount of time it takes for the page to render. Above comment should say <head> section, apparently HTML is not escaped here…. hrmmm alert(‘Vulnerable to script injection’) Why are you doing three conditionals for the stylesheet? I usually do a normal include, which IE6 sees as well, and then do a conditional for only IE6 that fixes broken elements. Good one Chris. Good one. Just sent this to a friend. For accessibility shouldn’t tags have a title attribute? That I’m aware, title attributes are never a requirement, for validation, accessibility, or otherwise. Alt attributes on img tags, sure. But title tags are purely optional. Title tags are certainly not a requirement, but they can be extremely useful for accessibility purposes, especially when dealing with links. Screen readers are capable of skipping from link-to-link. Therefore, if the text of your link does not make sense all by itself, then you need a title tag to help add context to it. That said, I’ve been told it’s worse to add titles if they are identical to the link text. Therefore, something like: You can <a href="" title="View an article about clean HTML">read the article</a> if you feel like it. is good, but something like: You can <a href="" title="View an article about clean HTML">View an article about clean HTML</a> is bad. Title tags should only be used to add context where it is missing. See, to me, using classes instead of ID’s (where ID was intended to be a single call), is a sign of sloppy code. I wouldn’t approve of my developers doing the class on the section and footer, when they should be styling the footer directly, and only using an ID to undo the styling in unique scenarios, nested footers inside other elements, or using advanced CSS selectors to get around it. Maybe I’ve been ruined by seeing horrible front end templating code (cough, zen cart, os commerce, cube cart, most cms templates), but classes overall are a last resort for me, and my department. In the second paragraph you should’ve said (the) instead of (they). This article is outstanding. I’m only sad that I didn’t run across it earlier in my coding career. I’m actually stunned that anyone could have the nerve to challenge this. Have you ever updated a “dirty” site authored by someone else??? Apparently not.. i dont know. i thing, the footer element is not valid when using the headings (like h4) Using h4 headings is completely valid, this case included. Code is art: P.S. your form requires JS. Please refer to Cool, I love the code formatting, it looks really great! But.. this is very useless and a waste of our bandwidth. HTML == output, not art. If you stop indenting, your page will be less smaller in filesize and load faster. LOL I really think there are some people who believe search-engines would rank a site higher if it is perfectly indented. Minified and compressed HTML is much more valuable: less data, loads faster. So if your site has a lots of visitors, it has to be delivered Minified/compressed. Also, you are using UTF-8, so you don’t need to use HTML-entities, isnt it? I also think it is very evil to put the script-tags at the end of the body; it should be loaded inside the head-tags. I see you are using jQuery, so you just should use the ready event. I usually include scripts inside head, why end of the body? <aside><h3>…</h3></aside>is far from beautiful if you read the HTML5 spec. Inside sectioning elements, reset your headers and start anew using h1. By the way, this site’s comment escaper fails: Why do you ask for unescaped code when you really mean escaped code? You can paste in unescaped code (the kind you can read easily), and it gets escaped by the browser if it’s in a ‘code’ tag. To everyone nagging about the JavaScript not being referenced in the head: Great discussion! Here’s my two pennies worth: Script calls at the bottom. I know YSlow is big on this, but I’ve found that some scripts don’t work if placed at the end, so I tend to leave them in the head and make sure the .js docs are minified. Anyone else with me on this? Umm.. Read this and had to comment… You appear to be saying how clean your code is, but there are still pretty bad habits in there. * Your CSS hacks in the header for IE are NOT W3C standard – even though they would pass a W3C check they are still a hack. * Your paths are mostly ABSOLUTE, rather than relative. You are only using relative paths in your PHP includes. * Your use of utf encoding for apostrophes is not necessary. * They are PHP includes, NOT Server Side Includes – there is a big difference. * If you turn the php parser on for every html page your server will slow down, unless you’re saying that your page is PHP… In which case it isn’t beautiful HTML, it’s ‘beautiful’ PHP. * Google analytics is an unnecessary php include – the tracking code is pretty standard and to anyone who has done a website or two it’s pretty clear what it is. * Your saying ‘col’ is a good, clear, name for a css class? “…though they would pass a W3C check they are still a hack.” Conditional comments are not a hack. They are exactly that comments. IE just interprets them as something else. The start hack … now that is a a hack because it messes with a bug in the rendering engine. Conditional comments are not a bug. They were planned that way. If you want to say that conditional comments are a hack, then you must say that the Webkit CSS animation/transition stuff is a hack, since it is not in the spec yet. Taking advantage of a legitimate browser feature is never a hack. “They are PHP includes, NOT Server Side Includes – there is a big difference.” There is not a BIG difference between php includes and server side includes. They are basically the same things and honestly your are nitpicking. It is a commonly accepted thing to call them server side includes because they are included on the server rather than the client. “In which case it isn’t beautiful HTML, it’s ‘beautiful’ PHP.” No it is still HTML. PHP is only the stuff within the PHP tags. Everything else is still HTML. The PHP parser doesn’t mess with or bother with anything outside of the PHP tags. Google analytics is an unnecessary php include – the tracking code is pretty standard and to anyone who has done a website or two it’s pretty clear what it is. Not necessarily. Depends on your reason for putting it into an include. What if you want to have some code in the include that checks the page name and selectively puts in the tracking code. So, you could have the code show on some pages but not others. That is just one example, I could think of others. “Your saying ‘col’ is a good, clear, name for a css class?” Sure, if it is a column and you want all your columns to look the same. Actually “col” is still presentational. The HTML is telling me this should be a column. On a phone it probably shouldn’t be a column. On the other hand I’m not sure we can 100% get away from it. Something has to tell the browser that its going into tab, dialog or column. I’m playing around with a site I’m trying to make 0% presentational and what I ended up doing was giving each block (section since I’m using HTML5) an ID, then using jQuery to .addClass(“.dialog”) to those sections that are intended to be dialogs. In the iPhone javascript, I don’t insert the class since it has no meaning there (the styles for .dialog are for browsers only). Putting scripts at the end of the body should ensure that the html gets loaded before javascript. When javascript files are being downloaded and executed, they can often result in other downloads stalling. So when you put them in the head you are running the risk of displaying a blank page while your javascript loads. Putting them at the end of the body eliminates this problem. This ensures that content isn’t waiting on the presentation layer. That being said, I do put js in the head when the content relies on it, such as loading a SWF file or launching a mapping application. If “code is poetry”, this is Neruda. Well done Chris. I’m curious though, A List Apart’s example here: Has within and you’ve done the opposite. While I agree with your example simply because it seems semantically logical to have a section then article(s). Is there any difference? Has within and you’ve done the opposite. Was supposed to be: Has section within article and you’ve done the opposite. I would assume the above example is not intended to be a hard and fast rule of where to place elements semantically on a page. Rather it is an example of a clean and easy to read layout. I doubt the author is saying he stumbled onto the best way to create a layout. Rather I would think he is just trying to make the case that clean code makes a pretty site and an easy to read one as well. Where you put a section element is really going to depend upon your site. If you stick it in side an article elements you may shoot yourself in the foot CSS wise. Each site needs to be evaluated for what you are trying to do with the site, but we do need to stay somewhat within the realm of accepted practice. But there is some fudge room. For instance I never put a wrapper class around everything within the body tag. But I see sites do this all the time. I’m not going to cry foul. I think it is unnecessary, since you already have a wrapper … it’s called the body element. But many people do it … fudge room. Nice Article. But when i can use it. In 10 Years when all old browser are dead? Web developer must seek it possible to develop all browsers You can use it right now… that’s the point of this article. Serve something that works for older browsers, but give users of modern browsers an enhanced experience. Hi,? Thanks, Jack? This is a good question I would like to see addressed. I think that was a great view of what HTML should look like About time someone made note that you should only have one stylesheet. I would go one further and say you don’t need conditional ones anymore. If you play around with a site for long enough, you can get around most things these days, even for IE6. You shouldn’t have to hack IE7, in my experience. Semantic Classes were also another great thing you pointed out, people don’t think about the next man trying to edit their code sometimes! Which would be my only other gripe – multiple classes on one element. This can be a complete ballache if you’re trying to work on someone else’s code and you have to look in 10 different places to figure what all the classes are doing. Great article, thanks for posting up. It’s worth noting that, if code is indeed poetry, then this should be taken as one person’s specific vision, and not necessarily one that everyone should hold themselves to. I’m not arguing that the suggestions are invalid, but that beauty is in the eye of the beholder! *** I work with another language extensively, so I can attest to the fact that it’s very important to encode special characters. It doesn’t matter how you declare your document or how you save it in your editor; someone will find a way to view the page in an unintended encoding. Encoding special characters bypasses the whole mess that can result (weird characters in the best case scenario, and broken pages in the worst). The one challenge for me sometimes is to make the code look beautiful. Sure it’s easy when you are coding from scratch or have full control over the output functions. Some existing functions, like wplistpages, “unfortunately” echo the list in one single line. I’m totally for having clean code as well and tend to go nuts when my php adds unwanted spacing… But I followed a therapy group to get over it :p now as long as my code is clean in my editor and almost as clean in my rendered output I’m happy and not losing as much time just trying to make it look good for clean code freaks as myself :) One thing you missed, internationally friendly, such as not using dates like “11/9/2009” which confuse your non-Ammerican readers. The US is not the only country that uses mm/dd/yyyy. So, by that definition it is international. I don’t think we need to get that picky in web design. If we start down that road then maybe we should standardize on US or British English as well. I think the rule is use the formatting and rules that apply to your constituents. If you have international constituents then you might need to provide a multi-lingual site that also uses the date format of that particular country. Very many (most European?) countries use dd/mm/yyyy or some variation with DD before MM, so Peter has a very good point, although Americans aren’t the only ones using the MM first. Even though I am American (living in Europe) I must concur his point. UNLESS your site is for a specific region where you know your visitors will understand the date format, a more standard and easy to understand format should be used, i.e. 12 November 2009 or 12 Nov 2009. Avoids any confusion, even if it does take up a bit more space. @Davin Studer – Actually, the only other country to use mm/dd/yyyy besides the US is Belize. See Date format by country. This simplest solution to the date format problem is to use the international standard ISO 8601, recognizable everywhere without ambiguity: YYYY-MM-DD Example: 2013-10-29 Does HTML5 cross browser ? I request, nay, demand that a large wall poster of this be made available for purchase. It would look excellent next to my Sins of a Solar Empire tech tree posters. :D I can’t see any benefit to using absolute paths for your images – Can you please clear up what you mean by ‘assuming content is syndicated’? Do you mean if your images are on a separate domain? That’s the only reason I could see for using absolute paths, but goes without saying really. Or are you advocating putting your media assets on a separate server to increase download speed (by getting round the two HTTP connection per domain limit)? Regards Pete Interesting, although the “jquery best js librairy ever” comment is pretty useless (but that’s really a tiny detail). One idea, putting a ‘back to top’ small button in the bottom of the pages chris, cos sometimes the scroll is shuge (like here). Great article, clean code is always easier to look at and update! Question for the experts – should your PHP includes be absolute URLS to allow for use in different folders? And if so, is the recommended practice to just use the $_SERVER[‘DOCUMENT_ROOT’] variable in front? Interesting post – I am not that into CSS but somehow I would add a few meta tags as description and keywords. I think rating sites still suggest to add these. Also from a search engine point of view I don’t know how the navigation as php will look on the final html onsite – so will the links then be visible… the final html site here would be great to see the result. cheers Back when I hand coded all my sites and the sites of the companies I’ve worked for I was always quite strict about html standards. But now that I use WordPress thats kind of went out the window, not because I don’t think that quality code is key but because WordPress seems to enjoy rewriting my code when ever it likes. I like the benefit of using WordPress more than I hate it rewriting my code. Great article, I’m an addict for clean code! Code is executed by machines, but machines don’t craft code, humans do. Ugly code makes it harder for someone else to work with your code, a likely situation if you ever into into a professional development “team.” I’m loving your poster sample of clean code. Very nice! :) A couple questions: I have been using H1 as a sort of holder for the site name, and css-ing the text away to replace it with a logo. I read that was SEO friendly. Is it better to reserve them for actual content? (I’ve been starting with H2 as main content headline, and down from there.) How practical is it to encode special characters on a large site where content is dynamically inserted via CMS by non-tech-savvy peeps – often journalists? Is the CMS software supposed to translate & to & and < to < Ø to Ø etc? Or is that only for static pages? How soon do you foresee being able to use HTML5? I love it too, and have used it in iPhone web apps I have made, but for real pages? Thanks! :) Rich Darn it, I wrote my list using OL and LI instead of numbering it myself (1. 2. 3.) but that didn’t work. I also used PRE for my special character encoding example, but that seems to be ignored. :-/ Anyways, I hope you get the idea. :) Btw Chris: what does “a little long in the tooth” mean? Sounds cool but I’ve never heard it before. Vampire? :) I think you could shorted it a bit by using Dan Cederholm’s trick <!--[if gte IE 7]><!--> <link rel="stylesheet" type="text/css" media="screen, projection" href="screen.css" /> <!--<![endif]--> Have you checked how does it look on IE6? not pretty nice. -> for testing. IE6 does really screw up things. :S The does not pass the W3C validator. What happen to keeping it valid? Excellent Tip Chris :) i’ve always wondered what ‘good code’ looked like.. this’ll definitely help me in a multitude of ways (in terms of coding of course ) Keep it up. :) Excelente!!! Muy buen avance!! Greetings from Spain!! Even thought HTML5 is good than 4 and xhtml but only new ver browser support right? so i dont think people who are stupid in computer will upgrade their browser like IE6 user Love it! Why do you need to specify height and width on an image. That seems old school to me. Doesn’t anyone here know? It has to do with how the browser renders the page. Adding width and height lets the browser block out the right space for the image. Otherwise it figures out the dimensions from the image metadata as it is transferred from the server to the client. Bottom line is it causes less judder as the page loads. Cool! Thank you Russell. I’m all for clean coding. Great article. I will look into the scripts at the bottom, though sometimes they need to be up top, but interesting idea. Great example Chris, thanks for sharing the psd. Can’t wait to actually use HTML5 ! really useful article Chris, my own website is coded in XHTML but I am doing a redesign and might code this in HTML5 :) thanks ok, so i’m doing it k, cos my html looks a lot like this I think you can make your code even more beautiful by leaving out the .col class. You can address the ul’s by “footer ul”, can’t you? the footer lists could be inside tags Hi Chirs To be honest, the code is just so so, especially you used such trivial example for demo purpose =) I know there are many folks are complementing, but we should be able to express our opinion equally. Hopeful my comment wont get deleted. Cheers, I’m very much inspired with the coding styles. Thanks for the article. Cheers. I recently created a php function to clean and indent xhtml which can be found at: snipplr.com and which may be a good addition to your tips. Small typo in the second paragraph: Let’s take a look at some markup written they way markup should be written and see how beautiful it can be. “File Paths – Site resources use relative file paths for efficiency” That idea breaks down when you want to serve your assets from different domains or even subdomains. For instance if you have a high traffic site and want to spread the load. Use absolute paths to point at the various content locations on your content delivery network. e.g static2.example.com/css/main.css “Use one css file” This is a good idea speed wise as the browser only has to make one connection to download your file. For modularisation it’s a complete PITA. The best solution is to write modular CSS files and bung them together into one file at runtime using a server side handler. That way you can server up all your CSS parts as one file that can also be cached and compressed (and cache it so its only hit once). Try reading the YSlow documentation for more info. “Indenting” – for sure your code needs to be readable especially in view-source. Using gzip or deflate compression will reduce the need for taking whitespace out of your code. All my brain was thinking of when reviewing this was paying per MB of data downloaded on my mobile phone and having to download the entire main.css file, then letting the phone work out which parts of the CSS to use, rather than having a smaller mobile specific file. Great read thanks! Great one, thank you for share Great article. I think my markup looks pretty much like your outline already, but I can probably improve some areas. I might even have to print that image! Cool. It’s hard explaining good code and is good to see it on one large image. This needs to be framed. It’s a work of art! Hi Chris, thank you so much for providing the PSD download. However, where can I find the FunctionLH Heavy font? You use it for your section titles and it looks quite nice. The best code explanation i’ve seen. Easy, simple, clean language, nothing deprecated :) like W3C validator. You have a very informative blog that offer a lot of great ideas. the following code worked for me: <body id="”> it gets the file name of the script file currently loaded, and strips the extension Great update Chris, thanks. I think this is relevant to what @paul was getting at: I like to use page variables to declare many things, including the body id. So you can declare: $activepage = 'about-us'; Then you can echo the value to the body id as well as use it in other places like inside of includes, etc. I think that markup looks like blog structure Good, but inline PHP ruins everything. I’ve read that with UTF-8 encoding and HTML5 you don’t use “©”, you should use the real character ©. Can you provide sources or w3c link that sais © is better than © in HTML5 ?
https://css-tricks.com/what-beautiful-html-code-looks-like/
CC-MAIN-2021-10
refinedweb
8,408
73.07
First, make sure you have installed wxPython Then drop this code into a file called browser.py : import sysYou launch the browser from the command line, passing the URL to go to as argument, like so: import wx import wx.webkit theApp = wx.PySimpleApp(0) theFrame = wx.Frame(None, -1, "", size=(640,480)) w = wx.webkit.WebKitCtrl(theFrame, -1) w.LoadURL(sys.argv[1]) theFrame.Show() theApp.MainLoop() python_32 browser.py last bit needed is the script python_32, which ensures python will run in 32 bit mode, because the 64 bit mode is broken unless you have installed the wxPython Cocoa libraries. Place these lines in the file 'python_32' #!/bin/bashand place that file in the same directory as 'browser.py'. Now, also make sure that this script is executable export VERSIONER_PYTHON_PREFER_32_BIT=yes /usr/bin/python "$@" chmod a+x python_32Nitpickers take note. One might say that this is no different from calling a browser from the command line directly, which, according to the line counting above, would give you a browser in 0 lines of code: open, I would not call that 'your own browser' in the sense that you cannot wrap your own controls around it and interact with the browser contents as you can with the script above.
http://henkpostma.blogspot.com/2012/
CC-MAIN-2017-51
refinedweb
210
72.66
In this article, I will explain how you can load dynamic content in bootstrap tabs by clicking on it using AJAX and Partial View in ASP.NET MVC. So to begin with we need to create a new project in our Visual Studio, by navigating to File-> New -> Project ->Select "ASP.NET (Left pane)" and "ASP.NET web application (right-pane)"-> Provide a name your application and Click "OK", Select "MVC" from the template and Click "OK" to get basic layout of MVC generated automatically. Now, In your Views-> Home->Index.cshtml , create the Bootstrap tabs using HTML as give below <!-- Tab Buttons --> <ul id="tabstrip" class="nav nav-tabs" role="tablist"> <li class="active"> <a href="#_FirstTab" role="tab" data-Submission</a> </li> <li> <a href="#_SecondTab" role="tab" data-Search</a> </li> </ul> <!-- Tab Content Containers --> <div class="tab-content"> <div class="tab-pane fade in active" id="_FirstTab"> <!-- Call partial view to load initial page load data --> @Html.Partial("_FirstTab") </div> <div class="tab-pane fade" id="_SecondTab"> </div> </div> As you can see we have two tab's with name "_FirstTab" and another "_SecondTab", inside first tab we are fetching partial view initially on page load, so let's create the partial view for _FirstTab, and ActionMethod to call it. Navigate to your HomeController.cs and create the two ActionMethod for "_FirstTab" and "_SecondTab" public ActionResult FirstTab() { return PartialView("_FirstTab"); } public ActionResult SecondTab() { return PartialView("_SecondTab"); } So your Complete code for HomeController.cs would look like using System.Web.Mvc; namespace BootstrapDynamicTab.Controllers { public class HomeController : Controller { public ActionResult Index() { return View(); } public ActionResult FirstTab() { return PartialView("_FirstTab"); } public ActionResult SecondTab() { return PartialView("_SecondTab"); } } } We also have to create two patial View's inside Views->Home folder, "_FirstTab.cshtml" with demo HTML <p>hello from first tab, write your dynamic content</p> and "_SecondTab.cshtml" with demo HTML code as below <p>Hello from the Second tab</p> Now, the final step is to load Tabs dynamically on button click so let's create on Tab button click AJAX request in our Index.cshtml page. So, here is our jQuery code, which is called when "a"(anchor tag) is clicked inside "#tabstrip" <script> $('#tabstrip a').click(function (e) { e.preventDefault() var tabID = $(this).attr("href").substr(1); $(".tab-pane").each(function () { console.log("clearing " + $(this).attr("id") + " tab"); $(this).empty(); }); $.ajax({ url: "/@ViewContext.RouteData.Values["controller"]/" + tabID, cache: false, type: "get", dataType: "html", success: function (result) { $("#" + tabID).html(result); } }) $(this).tab('show') }); </script> In the above code, the $.ajax() request retrieves the partial view. The cache parameter. That's it we are done with building part, let's build and runit in browser, you will get output as below Now each time you click on a tab it will load dynamic content from a partial view. Keeping content separate can keep code cleaner and more organized.
https://qawithexperts.com/article/asp-net/bootstrap-tabs-with-dynamic-content-loading-in-aspnet-mvc/176
CC-MAIN-2019-39
refinedweb
480
56.76
WCSTOK(3) BSD Programmer's Manual WCSTOK(3) wcstok - split wide-character string into tokens #include <wchar.h> wchar_t * wcstok(wchar_t * restrict str, const wchar_t * restrict sep, wchar_t ** restrict last); The wcstok() function is used to isolate sequential tokens in a NUL-. The wcstok() function returns a pointer to the beginning of each subse- quent token in the string, after replacing the token itself with a NUL wide character (L'\0'). When no more tokens remain, a null pointer is re- turned.); strtok(3), wcschr(3), wcscspn(3), wcspbrk(3), wcsrchr(3), wcsspn(3) The wcstok() function conforms to ISO/IEC 9899:1999 ("ISO C99"). Some early implementations of wcstok() omit the context pointer argument, last, and maintain state across calls in a static variable like strtok() does. MirOS BSD #10-current October.
http://mirbsd.mirsolutions.de/htman/sparc/man3/wcstok.htm
crawl-003
refinedweb
133
54.52
Hey does anyone have a simple base project for eclipse for SDL_2.0/OPENGL ES 1.1/NDK Base? that complies and works? I’m getting a huge headache, just trying to get OpenGL enabled using SDL 1.2. Just spend 3 hours trying to get OpenGL going with no success. #include <GLES/gl.h> works but I still get this. stuff like this: Description Resource Path Location Type ’GL_QUADS’ was not declared in this scope main.cpp /SDLActivity/jni/src line 42 C/C++ Problem?? I’ve been advised to go SDL 2.0, but I also need OpenGL working and the NDK C++. But I’m have massive trouble just getting a working base.
https://discourse.libsdl.org/t/andriod-sdl-2-0-opengl-es-1-1-ndk-base-for-eclipse/19556
CC-MAIN-2022-21
refinedweb
116
87.52
Dropping Boxes with JavaFX If you want to work for DropBox, they have an interesting programming test which solution must be submitted together with the CV. I’m not considering a position at DropBox, but their test was too fun to ignore: an interesting challenge in algorithms, and another opportunity to exercise JavaFX as any geometric problem surely deserves some GUI. (Don’t read this blog if you actually plan to apply for a job at DropBox. I don’t think the company would use this problem as its single method of recruitment; this is more valuable for the candidate.) The problem is to find the ideal packing of some number of arbitrary “boxes” (rectangular objects), so the total area is minimized. The boxes can be rotated if necessary, and the area of the smaller rectangular area that contains all packed boxes is the solution. Like most geometric problems, this is easy to solve by sketching some empiric solution and pondering how to translate it to an algorithm. The first, somewhat obvious idea: - Sorting the boxes: First place those with bigger areas, to avoid inefficient placement that could happen when trying to place large boxes on a very fragmented free space caused by smallish boxes. I don’t have a formal justification for this heuristic... it's just a reasonable analogy with similar fragmentation problems, from memory allocation to disk filesystems. Partitioning by size is usually a good idea. Let's start the implementation with the sorting function and some random utilities: function area (c: Point2D) { c.x * c.y } function area (r: Rectangle2D) { r.width * r.height } class RectComparator extends java.util.Comparator { override public function compare (r1: Object, r2: Object) : Integer { (area(r2 as Rectangle2D) - area(r1 as Rectangle2D)) as Integer; } } function sort (rects: Rectangle2D[]) { Sequences.sort(rects, RectComparator{}) as Rectangle2D[]; } I was surprised that I needed to write a Comparator class like that. I can't just pass an equivalent JavaFX Script function to the sort() method, because the language doesn't support SAM conversion (like proposed for Java 7). I wonder if some kind of support for Java's generic types would also be needed in that case, so a function (r1: Rectangle2D, r2: Rectangle2D) {...} would be type-safely converted to a Comparator Also, I had to write helper functions for areas. I noticed that javafx.geometry is much simpler than the equivalent AWT package. JavaFX APIs generally have a minimalist design, due to footprint especially on its lower profiles; but I wonder if we could have some extra power here, at least for the Desktop profile. Sophisticated manipulation of geometric models is something I see a significant number of JavaFX applications doing. The javafx.scene.shape is much more complete, but its classes are UI components and not adequate for anything else. The second idea came easily too: - Candidate positions: Have a list of possible positions for box placement. In the initial state, the only candidate position is the origin (0, 0): the top-left corner of a virtual bounding box that grows to contain all placed boxes. This list will be updated as each box is positioned. The figure below shows the virtual bounding box (thick lines); the initial candidate position (red disk); the next box to place (dashed line); and the ideal position for that box (blue arrow). This ideal position is calculated by our brain, not by the algorithm: we're still working on the problem of finding some algorithm that will select this position! Placing the first box is trivial because there’s only one candidate position, but this trivial special case doesn't help much to further reveal the algorithm. Let's write down the data structure that will represent each iteration, or step, of the solution: class State { public-init var output: Rectangle2D[] = []; public-init var candidates: Point2D[] = Point2D { x: 0 y: 0 }; public-init var limits: Point2D = Point2D { x: 0 y: 0 }; } In this class State, I have a sequence of output rectangles (the already-placed boxes), a sequence of candidate positions, and the limits of the virtual bounding box of all placed rectangles. This class will be immutable; functions that operate on its data will be member functions (methods) of State. I consumed the original placing position, and this reveals two new interesting candidate positions: the northeast (top-right) and southwest (bottom-left) corners of the placed box. The southeast (bottom-right) corner, or other positions (e.g. at the middle of some edge), don't make sense. All boxes must be tightly packed, so each new box should go as much north and west as possible, always touching other boxes' edges (or the origin axes). This looks like a promising solution, at least a good start. In fact, this solution is already good enough for the simple 3-box problem from DropBox’s challenge. Now I can envision the core algorithms for box placement: - Placing a box: For each new box, try placing it in all candidate positions. Try this twice per box - with its original shape, and rotated (inverting its height and width). Discard placements that create intersection with any existing box. Calculate the total area for each attempt, and choose the option that delivers the smaller bounding area. - New candidate positions: Remove the position consumed by the placed box. Add the NE and SW corners of the placed box. Here is the code for the box-placing part: function flip (r: Rectangle2D) { if (r.height == r.width) null else Rectangle2D { width: r.height height: r.width } } function intersects (placed: Rectangle2D) { for (o in output) if (o.intersects(placed)) return true; false } function pack (next: Rectangle2D) { var bestLimit = null; def currArea = area(limits); var bestArea = Float.MAX_VALUE; var bestRect: Rectangle2D = null; var bestCand: Point2D = null; for (n in [ next, flip(next) ], c in candidates) { def newLimit = Point2D { x: max(c.x + n.width, limits.x), y: max(c.y + n.height, limits.y) } def newArea = area(newLimit); if (newArea < bestArea) { var placed = Rectangle2D { minX: c.x minY: c.y width: n.width height: n.height }; if (not intersects(placed)) { bestLimit = newLimit; bestArea = area(newLimit); bestRect = placed; bestCand = c; if (newArea == currArea) break; } } } State { output: [ output, bestRect ], candidates: nextCandidates(bestCand, bestRect), limits: bestLimit } } Function Scene.pack() is basically half the solution; it depends on a nextCandidates() function that we'll see later. It also uses two new helper functions, flip() and intersect(). Notice that flip() returns null if the rectangle is square - in that case we don't try the same placement twice! The use of this null value is a JavaFX Script trick. The loop for (n in [ next, flip(next) ]) iterates a sequence that contains the "next box" and also its flipped version. But if flip(next) returns null, that sequence will contain only next, because sequences cannot contain nulls; sequence construction silently ignores nulls. One important decision is the handling of multiple candidate placements that don't increase the current bounding area - an extremely frequent event, for smaller boxes that fit in some existing "hole". My first instinct was trying a best-fit decision, scanning all candidates and having a bias for placements closer to the top-left corner or some other heuristics. But the performance cost of this decision was big, because it implies in a full Cartesian product of boxes X candidate positions; and the result in better overall placement was virtually zero. So in the end I opted for a mix of best/first-fit strategy: keep searching candidates, but abort that search when I find the first placement that is "good enough" - doesn't increase the bounding area - the critical variable that the algorithm must optimize. Back to our boxes... We placed the second box; so far so good, but look at the next box... this will cause us some trouble. Now the rule for adding new candidate position has failed; the new set of candidate positions is clearly not sufficient. The ideal position for the next box is just under the last placed box, touching its bottom edge; but with the X coordinate more to the left, touching the right edge of the bigger box. Our original rule was too simple, it ignores the gaps caused by placements that don't preserve the neat, ladder-like arrangement of the first three figures. At this point in the problem, there are some possible solutions: - Brute-force. Add to the candidate list the full Cartesian product of the right and bottom coordinates of each placed box X all existing boxes. This will produce a large number of candidate positions, most of them useless, like this: This solution works; the problem is scalability. The number of candidate positions will be roughly O(N2) on N = number of boxes, even with some rules to discard a few positions (repeated positions produced by different combinations of boxes; positions already used by some placed box; SE position of any placed box). Positions that land in the left or top edge of any box are especially useless, can't place boxes of any non-zero area. You could trim such useless positions, but that would be expensive, requiring intersection tests with all existing boxes. - 2. Rocket-science. We could design a smarter placement algorithm that uses the candidate positions, but is able to "shift" blocks to the left (or to the top) until they touch another box. The picture shows this idea: the green arrow is the shift-left that will adjust the placement from the initial candidate position. How to implement this? I considered mapping the free space, starting with the bounding rectangle for all previously-placed boxes, then subtracting all these boxes to produce a shape containing all the free space... then partition this shape into a list of rectangular free-slots... so I can finally make intersection tests between my next rectangle and any free slots immediately to its top or left... this would be a potentially incremental process, stopping when there are no more free neighbors or when I intersect some other box. This solution will also work; the problem is, it's complex. I am basically implementing a small physics engine - just add gravity, momentum, mass centers, some Newton's formulas... and my head is starting to hurt! Remember that I will have to do this smart-ass stuff for all candidate positions X all "next" boxes! The preprocessing of the initial state (like creating a map of free space) can be performed incrementally from each state to the next; but the remaining effort per iteration is still significant, and the code would be quite complex. Overall, it doesn't look much better than the brute-force approach. - 3. Think, Iterate, Refine. I have actually considered both previous solutions before I found the ideal solution - and as you will see, it can be considered a refinement of the previous idea. I have changed the algorithm to create the new candidate positions after placed boxes. Now it reads like this: -. You will notice that this simple scan of intersections is a watered-down version of the "rocket-science" solution: this effectively finds the positions that would be created by shifting boxes to the left or top. The next picture illustrates the result; the green arrow shows the SW-originated candidate position that was shifted left. At this point I considered the algorithm complete and proceeded to code the GUI, write this blog, analyze the results. But I eventually found a small problem. What happens if, in the state above, the next box to place has a different shape so its ideal position is that inner corner (red circle right above the one pointer by the arrow)? In this case, the position that we just shifted to the left will be unusable because it will be in the middle of the new box's left edge. That position is not even removed from the candidates list, because it was not used as a placement position. This last figure shows what should happen: while placing the central square box as indicated, we must find any existing candidate positions that are captured by its left edge, and move these positions to the right (green arrow), so they coincide with the right edge of that box, at the same vertical coordinate. This trick will keep the candidate position useful, for example for the smaller square box as indicated by the figure. Similar handling will be applied to candidate positions captured by the top edge of a placed box. The final algorithm: -. Move any position captured by the left edge of the placed box to its right edge. Move any position captured by the top edge of the placed box to its bottom edge. Avoid any duplicates. So now we can code it... function findIntersections (ne: Point2D, sw: Point2D) { var bestX = 0.0; var bestY = 0.0; for (o in output) { if (o.maxX < sw.x and o.maxX > bestX and o.minY < sw.y and o.maxY > sw.y) bestX = o.maxX; if (o.maxY < ne.y and o.maxY > bestY and o.minX < ne.x and o.maxX > ne.x) bestY = o.maxY; } Point2D { x: bestX y: bestY } } function nextCandidates (usedCand: Integer, next: Rectangle2D) { def ne = Point2D { x: next.maxX y: next.minY }; def sw = Point2D { x: next.minX y: next.maxY }; def inters = findIntersections(ne, sw); [ for (c in candidates) { if (indexof c == usedCand) null else if (c.y == next.minY and c.x >= next.minX and c.x < next.maxX) Point2D { x: c.x y: next.maxY } else if (c.x == next.minX and c.y >= next.minY and c.y < next.maxY) Point2D { x: next.maxX y: c.y } else c } ne, sw, if (inters.y == ne.y) null else Point2D { y: inters.y x: ne.x } if (inters.x == sw.x) null else Point2D { x: inters.x y: sw.y } ] } I ignore intersections that coincide with the original candidate positions - a common case, that happens when no shift is possible because that position is already touching a placed box or the axes. I consider the code above "self-documenting" - it reads almost identical to the English statement of the algorithm. In a real project, I'd only add comments to explain the reasoning of the algorithm (why these intersections and moves must be performed, etc.). This is my litmus test for readable code: you don't need comments to explain what it is doing. Higher-level language syntax makes this obviously much easier to achieve. That's it - our solution is almost complete. It's only missing the "driver" function that packs all boxes from an input sequence: function pack (input: Rectangle2D[]) { var state = State {}; for (next in sort(input)) state = state.pack(next); state } Now let's write some test code. I want to create a sequence of random boxes. def rnd = new java.util.Random(77); function randomInput (size: Integer) { def skew = if (size <= 10) 5 else 10; for (i in [1 .. size]) Rectangle2D { width: round(pow(2, (rnd.nextFloat() * skew))) height: round(pow(2, (rnd.nextFloat() * skew))) } } An exponential random distribution produces more interesting input data. Also, I'm forcing the random seed to a fixed value so I can run reproducible benchmarks. User Interface Finally, let's make some GUI - a JavaFX program wouldn't be complete without that :-) def SIZE = 512; var solution: State = null; var solutionDebug = false; function solve (panel: Panel, count: Integer) { panel.parent.scene.stage.title = "Boxes: {count}..."; def input = randomInput(count); def start = DateTime {}; solution = pack(input); solutionDebug = false; def time = (DateTime {}).instant - start.instant; var totalArea = 0; for (r in solution.output) totalArea = totalArea + area(r) as Integer; var limitArea = area(solution.limits); def scale = SIZE / max(solution.limits.x, solution.limits.y); panel.parent.scene.stage.title = "Boxes: {count} Usage: {100.0 * totalArea / limitArea}% " "Time: {time}ms ({(time as Float) / count}ms/box)"; panel.content = [ Rectangle { fill: null stroke: Color.YELLOW width: solution.limits.x * scale height: solution.limits.y * scale } for (r in solution.output) Rectangle { x: r.minX * scale y: r.minY * scale width: r.width * scale height: r.height * scale strokeWidth: 0.5 stroke: Color.BLACK fill: Color.rgb(0, (r.minX * scale * indexof r) mod 256, (r.minY * scale * indexof r) mod 256) } ] } function showDebug (panel: Panel) { if (solutionDebug) return; solutionDebug = true; def scale = SIZE / max(solution.limits.x, solution.limits.y); def fill = Color.rgb(255, 0, 0, 0.25); insert for (c in solution.candidates) Circle { centerX: c.x * scale centerY: c.y * scale radius: 8 fill: fill } into panel.content; } def stage = Stage { width: SIZE height: SIZE title: "DropBox JavaFX - Press SPACE, 1..0, ENTER" scene: Scene { content: Panel { layoutInfo: LayoutInfo { width: SIZE height: SIZE } onKeyTyped: function (e) { def panel = e.node as Panel; if (e.char.compareTo('1') >= 0 and e.char.compareTo('9') <= 0) solve(panel, 100 * Integer.valueOf(e.char)) else if (e.char.equals('0')) solve(panel, 1000) else if (e.char.equals(' ')) solve(panel, 10) else if (e.char.equals('\n')) showDebug(panel); }}} }; stage.scene.content[0].requestFocus(); stage The GUI is quite simple. If you press any key, the program creates a new input dataset, solves the box placing problem, and creates Rectangle components that show the solution ("placed" boxes). Most of the code is concerned with secondary stuff, like measuring execution time and computing and formatting some interesting statistics. There's also some scaling logic to fit the result neatly in the scene, and some pseudo-random color computing for a clear and interesting display. The dataset will have 10 boxes if you press SPACE, or 100 ... 1.000 boxes for '1 ... '0'. You may notice the awkward code that maps keys to box counts. I can't do arithmetic like e.char - '0' because e.char is a String; the language has no Character type. Writing e.char.charAt(0) is no help, that returns the same single-char string (this is bad enough to make this Java API useless, so that's an important interop bug). I can't typecast e.char to any integral type. In short, no easy way to manipulate a character as a numeric value. On top of that, e.code is useless for my purposes, because the KeyCode type doesn't expose an ordinal number, so no luck writing e.code - KeyCode.VK_0. Even a range check like e.code.compareTo(KeyCode.VK_0) doesn't work: this compiles, but returns bogus results!. (The language has no enum construct.) Finally, JavaFX Script has no switch/case, and it doesn't support a native map type construct that could allow me to build a KeyCode->Function mapping; programmers depend completely on the if-else-if... syntax for multiple-choice decisions. Now add this to the difficulty of making range checks on typed keys (or any enum-like API), and the result is awkward. I love JavaFX Script, but it does show a few traits of an Ivory Tower design: a language that didn't yet mature in the trenches of real projects, as it fails in such a common "pragmatic" idiom like mapping a range of keys to numeric values through simple arithmetic. Yes, it's trivial to call a small Java class to compensate for these limitations; but this facility should not make the language designers complacent, sitting on top of basic deficiencies. Even the DSL status is no excuse - my code needs to handle some keyboard input and this certainly belongs to the scope of a GUI DSL; if the language can't do that in a simple and elegant way, it has to be fixed. The picture above - click it to launch a JNLP app - would make Piet Mondrian proud! Notice the semi-transparent red circles showing the full list of candidate positions at the final state: these circles don't appear by default, you must press ENTER after the solution. The efficiency of the algorithm looks pretty good: packing densities are usually in the 96%-98% range for 100 boxes, and never below 99,4% for 1.000 boxes, for my input data. My naked-eye analysis of a couple dozen solutions didn't show any obvious failure of optimal placement (well, not after all algorithm iterations summarized above). But this is by no means a formal proof; I doubt the algorithm is optimal. I didn't make any research to find my solution, but I know this is an important and well-researched problem. For one thing, my algorithm uses a mix of first-fit decisions (for performance) and best-fit heuristics (for good placement). Also, my code does local optimization (finding the best placements for each box independently), instead of global optimization (considering all boxes at once). An ideal solution should be both purely best-fit, and globally-optimizing (perhaps through backtracking, or maybe smarter preprocessing like pre-grouping boxes with edges of similar sizes). The ordering or bigger boxes first seems to be a very good heuristic, but it's not sufficient for global optimization. Performance I'm basically done with algorithm work, so now it's the right time to think about code efficiency. I fired up the NetBeans profiler and looked at one run for 1.000 boxes. On HotSpot Server: The biggest offender is intersects(). I expected this function to be the most called, but I didn't expect it to use so much time! The problem of course, is that each of the N boxes have at least one candidate position, and each position must be tested for intersection against many previously-placed (up to N-1 at worst-case) boxes. I added some code to display the final number of candidate positions, and total number of invocations to intersects() and intersection tests inside that method: - Iterate rotation, then candidates: 2.740 candidates, 438.019 calls, 54.142.832 tests - Iterate candidates, then rotation: 2.719 candidates, 652.823 calls, 84.139.905 tests If you asked why pack() does have the iteration c in candidates in the innermost loop and n in [ next, flip(next) ] in the outer loop, that's why: it's more efficient to scan the candidates in the inner loop, which really makes sense. Anyway the number calls to intersects() is roughly O(N2), while the number of intersection tests is roughly O(N3). (The latter seems to be closer to O(N3 / 20), but in Big-O analysis this dividend is basically irrelevant.) One obvious solution is sorting or indexing. If the candidate positions were sorted, I could narrow the search and only perform intersection tests against a relatively small number of boxes. But any single-dimensional ordering, e.g. by distance from origin, would at best allow me to cut intersection tests by a linear factor; moving from O(N3 / 20) to (say) O(N3 / 100) doesn't seem worth the extra code and effort. I clearly need something that cuts time quadratically, so the total intersection number becomes O(N2). Some ideas are Binary Space Partitioning, Quadtrees, or maybe just a simple grid (e.g., I split the space in regions of 10 x 10 units of length, and I keep track of which boxes and/or candidate positions belong to each region). But I have to finish this blog some day, so this improvement is left as an exercise for the reader. ;-) How fast is the code in real-world units? I benchmarked with the following procedure: start the program; run a single untimed 1.000-box test for warm-up; then run timed runs for 100 ... 2.000 boxes. HotSpot Server is ~2,35X faster than Client, but both degrade performance in a curve of similar shape. A linear regression of the most regular section of the series (for Server) results in the formula 0,0002 * N1,5, better than expected, but our data series is limited and the cost of per-box intersection tests extremely small. At higher box counts (many thousands and up) it's likely that the curve would reveal a higher exponent. My code is not tuned for performance; for one thing, I could easily get rid of some object allocations - including the full recreation of the output and candidates sequences at each step of the solution. JavaFX Script is not Lisp - its sequences are not linked lists. It's not Clojure either - its sequences are "persistent" but their structure is not optimized for frequent updates of a few items. Anyway, allocation and GC are not a problem here. A 1.000-box solution causes 73 Mb of data to be allocated and collected, in 17 GC events, total time 13ms (0,83% of the total execution time for HotSpot Client), average pause time 0,7ms, heap occupation min = 3,8 Mb / max = 8,4 Mb, and back to 2,2 Mb when the solution is complete (even with the JavaFX GUI up). These numbers look excellent. But I am a stubborn, low-level, optimization-obsessed hacker, so I had to make the experience of changing some code to avoid the perceived language inefficiencies. I did these changes (MainJ.java in the project): - Using a java.util.LinkedList (instead of a sequence) for candidates and output; - Abandoning all functional purism: these sequences, and the State object, are updated in-place at each step; - Defer some object allocations (newLimit and placed in the pack() function), even at the inconvenience of writing code like intersects(c.x, c.y, n.width, n.height); As I expected, these changes saved a lot of memory churn. Now the program burns only 13,3 Mb for the 1.000-box test; this is more than 5X better, a really amazing feat so I was all slapping on my own back... and performance (for a 2.000-box test) improved ~5% for HotSpot Client. But it degraded ~20% for Server, which was shocking. JavaFX Script's are really very efficient, but remarkably when they benefit from a superior JIT compiler. Even when you consider that the VM that's available for RIA clients is just HotSpot Client, the 5% speedup is probably not worthy of the much uglier, "optimized" code. The exercise revealed a little undocumented secret of JavaFX Script: its for loop can iterate Java collections, so I could declare output: LinkedList, and the code for (o in output) would still compile and run perfectly! The major inconvenience was adding typecasts, because generic types don't carry over to the JavaFX side. Inspection of javafxc-generated bytecode shows a lot of useless overhead (e.g. binding support, missing sequence optimizations that I've already blogged over). This may explain the very big advantage of HotSpot Server: it does a much better job of removing redundancy (this is what advanced code optimization is all about). I expect the Server x Client gap to decrease with time, as javafxc evolves to produce more efficient bytecode. More Exercises for the Reader In the final algorithm for new candidate positions, the positions created by intersection with left/top edges are added together with the base SW/NE positions. Why can't we drop these base positions and only add the ones created by the intersections? At least in this figure, the base SW position seems redundant. For large box counts like 1.000, you'll notice that most runs produce a solution with very skewed shape. Either the total width is equal or little bigger than that of the widest box, or the total height is equal or a little higher than that of the tallest box. Can you tune the algorithm to have a bias for a more "square" result, or for a specific shape? (The easiest way of course is using a fixed-size bounding box, which is an important real-world scenario; but I don't want that). Evolve the algorithm and the code (including the GUI) to a 3D version, e.g. for filling shipping containers. :-) - Printer-friendly version - opinali's blog - 4368 reads typo? by ronaldtm - 2010-09-03 13:51"but their test was very fun to ignore", or "but their test was too fun to ignore"? oops! by opinali - 2010-09-03 16:45 That was a mistake; I fixed it... at least in the main article, because the blog platform doesn't let me change the blurb.
https://weblogs.java.net/node/476219/atom/feed
CC-MAIN-2015-22
refinedweb
4,721
65.01
#include <stdint.h> #include <rte_os.h> Go to the source code of this file. RTE SWX Table Table interface. Definition in file rte_swx_table.h. Table memory footprint get Definition at line 130 of file rte_swx_table.h. Table mailbox size get The mailbox is used to store the context of a lookup operation that is in progress and it is passed as a parameter to the lookup operation. This allows for multiple concurrent lookup operations into the same table. Definition at line 145 of file rte_swx_table.h. Table create Definition at line 162 of file rte_swx_table.h. Table entry add Definition at line 180 of file rte_swx_table.h. Table entry delete Definition at line 197 of file rte_swx_table.h. Table lookup The table lookup operation searches a given key in the table and upon its completion it returns an indication of whether the key is found in the table (lookup hit) or not (lookup miss). In case of lookup hit, the action_id and the action_data associated with the key are also returned. Multiple invocations of this function may be required in order to complete a single table lookup operation for a given table and a given lookup key. The completion of the table lookup operation is flagged by a return value of 1; in case of a return value of 0, the function must be invoked again with exactly the same arguments. The mailbox argument is used to store the context of an on-going table lookup operation. The mailbox mechanism allows for multiple concurrent table lookup operations into the same table. The typical reason an implementation may choose to split the table lookup operation into multiple steps is to hide the latency of the inherent memory read operations: before a read operation with the source data likely not in the CPU cache, the source data prefetch is issued and the table lookup operation is postponed in favor of some other unrelated work, which the CPU executes in parallel with the source data being fetched into the CPU cache; later on, the table lookup operation is resumed, this time with the source data likely to be read from the CPU cache with no CPU pipeline stall, which significantly improves the table lookup performance. Definition at line 250 of file rte_swx_table.h. Table free Definition at line 264 of file rte_swx_table.h. Match type. Definition at line 23 of file rte_swx_table.h. List of table entries.
https://doc.dpdk.org/api-22.07/rte__swx__table_8h.html
CC-MAIN-2022-40
refinedweb
404
53.41
Last updated on December 21st, 2017 | offline persistence for its web SDK. A few weeks before writing this I emailed my subscribers asking what excited them the most, and without a doubt, offline persistence and chained queries were tied neck to neck. Today will learn how to integrate Cloud Firestore into our Ionic projects using AngularFire2 and will cover how to CRUD data from it. Integrate Cloud Firestore To integrate Cloud Firestore into your Ionic app, the first thing you need to do is to make sure you’re using at least version 4.5.0 of the Firebase JS SDK and version 5.0.0-rc.1 of AngularFire2. $ npm install firebase@latest angularfire2@latest Once that’s installed, go into your app.module.ts folder and import the AF2 packages we’ll use: import { AngularFireModule } from 'angularfire2'; import { AngularFireAuthModule } from 'angularfire2/auth'; import { AngularFirestoreModule } from 'angularfire2/firestore'; And then add it to your imports array inside the @NgModule: @NgModule({ ... imports: [ BrowserModule, IonicModule.forRoot(MyApp), AngularFireModule.initializeApp(firebaseConfig), AngularFireAuthModule, AngularFirestoreModule ], ... }) NOTE: The only thing you need to do to enable offline persistence is to add it to AngularFirestoreModule. AngularFirestoreModule.enablePersistence() CRUD with Cloud Firestore Once everything is “up and running,” we’ll create a few examples of how to manipulate data. First, remember what I said in the beginning, Cloud Firestore is a Document based NoSQL database, if you need to learn more about that, this quick video from the Firebase team will help. Reading Data from the DB To read a list of data, you have to create a reference to it first, in this case, we’ll imagine that we have a collection called groceryList in our database. Inside that collection, we’ll have different documents for different groceries. Let’s create that reference: const groceryListRef = this.fireStore.collection<Grocery>(`/groceryList`); That line is creating a reference to the groceryList collection in our database, and now we can play with it. If we want to fetch the grocery list to display on our page, then our code would look something like this: this.groceryList = groceryListRef.valueChanges(); That’s it, and now you can display it in your HTML like this: <ion-list> <ion-list-header> My Home's Groceries </ion-list-header> <ion-item * <h2> {{ grocery.name }} </h2> <h3> There are <strong>{{ grocery.quantity }} </strong> in stock. </h3> </ion-item> </ion-list> Since .valueChanges() returns an Observable for us to use, we can use the async pipe to tell our HTML that the data is coming in asynchronously. Push data to the DB Writing Cloud Firestore feels weird, so going to keep it just as Firestore To add a new grocery to that list, Firestore gives us the .add() method, very similar to the .push() method we use in the RTDB. groceryListRef.add({ name: 'Apple', quantity: 5, inShoppingList: false }); That will create a new document located at groceryList/<new_grocery_id>/ Updating data from the DB To update a value from a document we can use the .update function groceryListRef.doc(new_grocery_id).update({ quantity: 8, }); That will go into the document and change the quantity from 5 to 8. Deleting data from the DB. And lastly, to delete an item from that list you’d need to call .delete() in the document: groceryListRef.doc(new_grocery_id).delete(); Next Steps Did you find this post helpful? Jump start your development using Master Firestore for Ionic Framework, it covers Async/Await, User Authentication, CRUD, Firestore Transactions, Cloud Functions triggers for Firestore, Security Rules, and Offline Persistence among other things.
https://javebratt.com/cloud-firestore-intro/
CC-MAIN-2018-05
refinedweb
587
55.54
A module that allows you to build your project, resolving dependencies based on the @uses annotation. Before I dive into the technical specifics, I'll explain what this module is all about. In short, it allows you to annotate your files with the @uses annotation to specify your dependencies; which is convenient for the developer reading your code as he or she now knows what dependencies a file has. It looks like this: /*** My file** Some info about My file** @author RWOverdijk* @version 0.1.0* @license MIT** @uses ./my-dependency.js* @uses ./my/other/dependency.js*/// Code here... It's also convenient because this module will bundle all dependencies together for you. If you'd like a more detailed explanation of this module and its benefits, you can read about it in this blog post. This module allows you to: You can install useuses using npm: Save as dependency: npm install useuses --save Global (for the cli): npm install -g useuses This module can be used in a programmatic manner, or via the command line. This example assumes you have useuses installed globally. If that's not the case, simply replace useuses with ./node_modules/useuses/bin/useuses.js. useuses -i example/main.js -o example/dist/built.js -w All available options can be found further down this document. Below is an example on how to use Useuses. var Useuses =useusesoptions;options =in : 'example/main.js'out : 'example/dist/built.js'wrap : trueverbose: truealiases: foo: 'bar/baz/bat'search : 'bower_components';useuses = options;useuses; All available options can be found further down this document. The following options are currently available for useuses. Use this option to tell useuses where the main project file is located. Using this option you can tell useuses where to write the built file to. When supplied, useuses will output the files written to the build. When supplied, useuses will not stop on missing dependencies. When supplied, useuses won't write the actual build. In stead, it will output a list of files that would have been written if this weren't a dry-run. Note: Programmatically, the key for this option is dryRun. With this option you can set up aliases for your dependencies. This is particularly useful with external resources or vendor (lib) files. Example: -a angular= Now you can just use @uses angular to specify you're using angular, and it will be downloaded and added to the build. Using aliases can also be useful to specify alias paths. For instance, creating alias -a namespace/core=library/namespace/src/core would allow you to get rid of the lengthy @uses. You can now just specify namespace/core/array-utilities.js as a dependency. You can supply multiple -a options, or an array separated string of assignments. Example: -a vendor=vendor/bower_components,angular=library/angular/angular.js Note: Programmatically, the key for this option is aliases. An object where the key is the alias, and the value is what the alias links to. This option allows you to specify custom search paths; places for the module to look for your dependencies. Example: useuses -i simple/main.js -o examples/simple/dist/built.js -s examples -w Will now find simple/main.js in examples/simple/main.js and will also use the path examples for nested dependencies. Note: Programmatically, the value for this option should be an array of paths. Setting this to true, will instruct useuses to wrap the built code in a self-invoking function. The advantage here, is that your code will not pollute the global scope; but will still run. For example, this: var name = 'World';console; Would become this: {var name = 'World';console;}; If you have any questions / suggestions feel free to use one of the following resources:
https://www.npmjs.com/package/useuses
CC-MAIN-2017-47
refinedweb
629
59.09
Every now and then we have a client that needs to, on a regular basis, import data from one system to another and a lot of the time we go for the good old csv file. In the past i have always just written a simple function that reads each line of the file, splits it into cells and import it into the database. But this simple code never fully handles csv’s correctly. I kept thinking “there has to be a simpler way“. So i got thinking… Csv files are just like data tables, data tables can be queried… surely there must be a way to just treat this file as a datasource. After a bit of playing around and reading i can up with the following solution. It requires that the csv data be saved to a file and then we can just use an OleDataAdapter to perform almost any simple SQL statment against it! using System; using System.Data; using System.Data.OleDb; using System.IO; public static DataTable ReadCSVFileIntoDataTable(string pFilePath) { string fullPath = Path.GetFullPath(pFilePath); string file = Path.GetFileName(fullPath); string dir = Path.GetDirectoryName(fullPath); string connString = "Provider=Microsoft.Jet.OLEDB.4.0;" + "Data Source=\"" + dir + "\\\";" + "Extended Properties=\"text;HDR=No;FMT=Delimited\""; string query = "SELECT * FROM " + file; DataTable dt = new System.Data.DataTable(); OleDbDataAdapter da = new OleDbDataAdapter(query, connString); try { da.Fill(dt ); } catch (InvalidOperationException /*e*/) { } dAdapter.Dispose(); return dt; } Related posts: This is a very simple and effective way to deal not only with csv files, but e.g. with MS Excel documents as well – we can read and update them using OLE DB. There are lots of 3rd party OLE DB providers out there which may be used for the purpose of reading and writing to formatted files from c#.
http://www.revium.com.au/articles/sandbox/csv-parsing-the-easy-way/comment-page-1/
CC-MAIN-2014-52
refinedweb
298
66.23
Question 1 : Will the program compile successfully? #include < stdio.h > int main() { char a[] = "India"; char *p = "PARINAM"; a = "PARINAM"; p = "India"; printf("%s %s\n", a, p); return 0; } Because we can assign a new string to a pointer but not to an array a. Question 2 : For the following statements will arr[3] and ptr[3] fetch the same character? char arr[] = "IndiaPARINAM"; char *ptr = "IndiaPARINAM"; Yes, both the statements prints the same character 'i'. Question 3 : Is there any difference between the two statements? char *ch = "IndiaPARINAM"; char ch[] = "IndiaPARINAM"; In first statement the character pointer ch stores the address of the string "IndiaPARINAM". The second statement specifies the space for 7 characters be allocated and that the name of location is ch.
http://www.indiaparinam.com/c-programming-language-question-answer-strings/yes-no-questions
CC-MAIN-2019-22
refinedweb
126
53.92
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources. This chapter begins with a quick look at the basics of ADO.NET and then provides an overview of basic ADO.NET capabilities, namespaces, and classes. It also reviews how to work with the Connection, Command, DataAdapter, DataSet, and DataReader objects. Before jumping into some of the new features of ADO.NET 2.0, step back and make sure that you understand some of the common tasks you might perform programmatically within ADO.NET. This next section looks at selecting, inserting, updating, and deleting data.
http://my.safaribooksonline.com/book/web-development/microsoft-aspdotnet/9780470041789/data-management-with-adodotnet/basic_ado.net_features
CC-MAIN-2014-15
refinedweb
102
59.3
- Sitecore has a very rich configuration surface, and it's good to be able to see what a setting does before you change it - Things go wrong, and it's helpful to be able to figure out why. - Sitecore code provides models for how to do things, both on the back-end, and in the client Go to type (Control-T)Control-T is the soul of ReSharper. With a couple of keystrokes, you can go to any class in your solution. Add to this nice bells and whistles like camel case interpretation (type "pfu" to jump to a class named PrintFormatUtility) and namespace interpretation (have a YourCompany.Library and a YourCompany.Web version of PrintFormatUtilty? Type "web pfu" or "li pfu" to disambiguate). In short, ReSharper lets you navigate your solution at the speed of thought. These features become all the more powerful when applied to an external library you are supporting. I discovered this a while back while doing research for a talk on Clones. Control-T "Sitecore clones" gave me a quick view of all classes that have "Clone" in the name: Go to symbol (Alt-Shift-T)Go to symbol does for properties and method names what Control-T does for classes, and the tool has the nice feature that you can switch from the one to the other without losing what is in your textbox. Here are some of the properties named Database in the Sitecore API, all a single click away. Go to definition (F12)F12 ships with Visual Studio, but with ReSharper, this will take you from a reference in your project to the implementation in a referenced library, so a reference to Sitecore.Data.Items.Item will take you to the implementation. Go to usages (Control-Alt-Shift-F12)Go to usages provides a breakdown of all uses of a class or member both in the library DLLs or in your solution. This one has a slightly more complex interface than the other commands, to allow you to fine tune your search, limiting it, for example, to library methods or your solution code: And here is the output:
http://www.dansolovay.com/2013/01/resharper-shortcuts-every-sitecore.html
CC-MAIN-2018-34
refinedweb
356
67.18
A Taste of Recursion the process of a function calling itself. For example: void funct(int x) { funct(x); } In this chunk of code, you see a terrible example of a recursive function, but it serves illustrative purposes here: The funct() function calls itself. That's recursion. Now what happens in this example is basically an endless loop, and, thanks to a technical something-or-other, called the stack pointer, the computer eventually crashes. But it's just an illustration. For recursion to work, the function must have a bailout condition, just like a loop. Therefore, either the value passed to the recursive function or its return value must be tested. Here's a better example of a recursive function: void recursion(int x) { if(x==0) return; else { puts("Boop!"); recursion(--x); } } The recursion() function accepts the value x. If x is equal to zero, the function bails. Otherwise, the function is called again, but the value of x is reduced. The decrement prefix operator is used so that the value of x is reduced before the call is made. The sample recursion() function basically spits out the text Boop! a given number of times. So if recursion() is called with the value 10, you see that text displayed ten times. The insane part about recursion is that the function continues calling itself, wrapping itself tighter and tighter, as though it's in a spiral. In the preceding example, the condition x==1 finally unwinds that twisty mess, increasingly pulling back until the function is done. The following code shows a full program using the sample recursion() function. #include <stdio.h> void recursion(int x); int main() { recursion(10); return(0); } void recursion(int x) { if(x==0) return; else { puts("Boop!"); recursion(--x); } } A common demonstration of recursion is a factorial function. The factorial is the result of multiplying a value by each of its positive integers. For example: 4! = 4 × 3 × 2 × 1 The result of this factorial is 24. The computer can also make this calculation, by either implementing a loop or creating a recursive function. Here's such a function: int factorial(int x) { if(x==1) return(x); else return(x*factorial(x-1)); } As with the other recursive functions, the factorial() function contains an exit condition: x==1. Otherwise, the function is called again with one less than the current value of x. But all the action takes place with the return values.
http://www.dummies.com/how-to/content/a-taste-of-recursion.html
CC-MAIN-2015-18
refinedweb
410
56.76
. Prerequisites: You will need to reference Newtonsoft.Json in your project (or in all your projects if using the shared project model). Here is the method that can be used to ‘get’ a List of objects from the server: using Newtonsoft.Json; using System; using System.Collections.Generic; using System.Net.Http; using System.Threading.Tasks; namespace Jankcat.Helpers { public static class RestHelper { public static async Task<List<T>> GetObjects<T>(Uri uri) { var client = new HttpClient(); client.MaxResponseContentBufferSize = int.MaxValue; var returnobj = new List<T>(); try { var response = await client.GetAsync(uri); if (response.IsSuccessStatusCode) { var content = await response.Content.ReadAsStringAsync(); returnobj = JsonConvert.DeserializeObject<List<T>>(content); } } catch { return new List<T>(); } return returnobj; } } } To call upon this method, simply pass in a URL and the type you are expecting from the server! var url = new Uri("") var myObject = await RestClient.GetObjects<MyType>(url); This will return a list of whatever class you are expecting, or an empty list if the call fails. You could instead catch the exception and throw another, adding your own details, but this would be more than what I needed in this case. Feel free to use this as you want! Later I may add another method for Posting or Putting to the API, but currently I don’t have the need… so, sorry.
https://jankcat.com/2016/03/27/net-simple-http-client-helper-class/
CC-MAIN-2020-29
refinedweb
220
60.72
. This page describes how the sbt loader helps you use different versions of Scala and sbt to build your projects. For details on how it can be used to launch other applications besides sbt in a similar manner, see GeneralizedLauncher. sbt versions before 0.3.8 were compiled against a specific version of Scala and test libraries (ScalaCheck, ScalaTest, and specs). You could only use this version of Scala to build your project unless you built sbt from source. You could also only use the version of a test library that was binary compatible with the version of Scala that sbt was compiled with. This has changed with sbt 0.3.8. The distributed jar is no longer the main sbt jar. It is instead a loader that has the classes it needs from Ivy and Scala included in the same jar (after shrinking with Proguard). This loader reads the versions of Scala and sbt to use to build the project from the project/build.properties file. If this is the first time the project has been built with those versions, the loader downloads the appropriate versions to your project/boot directory. The sbt loader will then load the right version of the main sbt builder using the right version of Scala. When a new version of sbt is released, you only have to do: > set sbt.version 0.7.7 > reload and the sbt loader will download the new version of sbt to build your project. Note that new features occasionally require updating the loader. The loader enables building against multiple versions of Scala. To use this feature, list multiple Scala versions in your build.scala.versions property. Then, prefix the action to run with +. $ sbt > set build.scala.versions 2.7.7 2.8.1 2.7.2 > reload > +compile or $ sbt "+compile" outputPath and managedDependencyPath have the version of Scala being used for the build appended to their path. By default, this means: target -> target/scala_2.7.2 lib_managed -> lib_managed/scala_2.7.2 If you use custom output directories or you redefine paths, you will want to wrap them in crossPath, which will append this component. For example, outputPath is defined as: def outputPath = crossPath(outputRootPath) def outputRootPath = path("target") java -Xmx512M -jar sbt-launcher.jar "$@" This means Scala doesn't have to be installed already. The sbt loader requires an internet connection to download the selected versions of the Scala and main sbt builder jars. Once the jars are downloaded, the loader does not need a connection to build projects on that machine unless you use a different version of Scala or sbt. Cleaning the Ivy cache, such as with the clean-cache action, will require downloading the jars again, however. The loader downloads the jars to the project/boot directory in a project. This directory can be copied between projects. Therefore, once you have a project/boot directory for the desired versions of sbt and Scala, you can copy this directory to other projects, including ones on other machines.~
http://code.google.com/p/simple-build-tool/wiki/Loader
crawl-003
refinedweb
504
64.51
Chapter 4In this chapter: Class Relationships and Class Features New Classes and Inheritance Casting Overriding Class Interfaces Events and New Events The Class Hierarchy Global Members Advanced Class Features Example Classes Chapter 3 outlined the REALbasic object model, explaining what classes and instances are, and how you work with them in REALbasic. We saw that in the IDE you edit classes; then the running program generates instances and sends messages between them. We talked about the message-sending mechanism, how instances are generated, and how your code can refer to the instance it wants to send a message to. Now we're going to talk about the notion of classes in more depth, and in particular about relationships between classes, other class features, and how to do things with classes. Also discussed are modules, which provide methods, properties, and constants that are available from anywhere. The chapter ends by describing a few useful example classes that you can make in the comfort and safety of your own home. Most object-oriented programming languages, along with a mechanism for making classes, also provide a means of expressing relationships among classes. There are three such relationships in REALbasic: - A class may be a type of some other class. When two classes are related in this way, the first is said to be a subclass of the second, and the second is said to be the superclass of the first. For example, if you have a Triangle class, you might also have an Isosceles class, where Isosceles is a type of Triangle. Isosceles is then a subclass of Triangle, and Triangle is the superclass of Isosceles. A class can have many subclasses (for example, Triangle might also have a Scalene subclass), but every subclass has exactly one immediate superclass. The subclass-superclass relationship is sometimes called "Is-A," because every instance of the subclass also is an instance of the superclass; for example, every Isosceles is a Triangle. This sort of relationship also brings with it the notion of inheritance, meaning that a subclass is everything the superclass is, and then some. For example, a Triangle has three sides; so does an Isosceles, but it adds a rule that two of the sides are equal. - A class may be declared to qualify as also being some other class, without inheritance, and without the other class having any real existence; the other class is just a name. The first class is said to implement the second, and the second class is not a real class at all, but a class interface. For example, we might have reason to want a Triangle, a Face, and a StopSign to be DrawableThings, as a matter of nomenclature, but to have nothing else in common. A class can implement multiple class interfaces, and can implement class interfaces even if it is a subclass, so this is also a way of getting around the rule that says a subclass can't have more than one immediate superclass. We may think of this as an "Also Is-A" relationship. - A class may have a property whose datatype is some other class. For example, there might be a Point class and a Line class; you could define the Line class as having two Point properties (because two points determine a line). This relationship, especially when the bond between the class and the properties is felt to be particularly strong, is sometimes called "Has-A." So here, a Line has two Points and just wouldn't be a Line without them, and perhaps our program doesn't use Points except as features of a Line; that's a good solid Has-A relationship. Another kind of "Has-A" relationship is where a class has a property that is of a different class, and also provides all the methods for working with that property; this is called a wrapper. An example appears later in this chapter. What classes you create for your project, and in particular how you set up the relationships among them, is a matter of design--object-oriented design. Your project ends up with an internal architecture of classes. There will be a class hierarchy: A and AA are subclasses of B, B and BB are subclasses of C, and so forth. Cutting across this hierarchy you might have some class interfaces, giving AA and BB some special extra commonality. D might operate only as a property of A, encapsulating functionality or building a data structure. The principles of object-oriented design are a mixture of science, art, philosophy, and expediency; it's a big subject, too big for this book. But the first step is to understand REALbasic's object model, how classes can relate and what you can do with these relationships; that's what this chapter is about. Relationships among classes in REALbasic, especially the class hierarchy, are not merely a convenience of design; they are crucial. REALbasic's application framework provides a hierarchy of built-in classes before you write any code at all; and adding to that hierarchy is how you take advantage of the built-in functionality of those classes. For example, you might want a class that acts just like a built-in PushButton but does a few things in addition. To get it, you'd make a subclass of PushButton. That's what the next section is about. New Classes and Inheritance To create a new class, choose File New Class. A listing for the new class will appear in the Project Window. If you immediately hit the Tab key, you'll be transported to the Properties Window, ready to give the new class a meaningful name.New Class. A listing for the new class will appear in the Project Window. If you immediately hit the Tab key, you'll be transported to the Properties Window, ready to give the new class a meaningful name. If the new class you want to create is a window class, you choose File New Window instead. Window classes can't be subclassed, so they are not the sort of thing this chapter is primarily about.New Window instead. Window classes can't be subclassed, so they are not the sort of thing this chapter is primarily about. When you create a new class, you may declare it to be a subclass of some other class, by setting, in the Properties Window, the new class's Super. To do so, you choose from a popup menu which lists all subclassable built-in classes and all classes you've added to this project.[1] This specifies your new class's superclass, and thus makes your new class a subclass of that other class. It is also possible to specify that a class is to be a subclass of no other class; to do this, choose "<none>" in the Super popup (this is the default when you create a new class). As we've already said, a subclass relates to its superclass through the medium of inheritance. Simply put, this means that an instance of the subclass also is an instance of the superclass ("Is-A"). More formally, the subclass has all the same methods and properties as the superclass, and an instance of the subclass can be sent all the same messages as an instance of the superclass. Furthermore, in the case of one of REALbasic's built-in classes that receives events as part of the application framework, a subclass receives those events as well. Subclassing is thus a quick and easy way to take advantage of an existing class's functionality. And most of REALbasic's built-in classes are part of its application framework, meaning that their functionality includes powerful stuff like displaying interface items on the screen or communicating over the Internet; so the ability to subclass such classes means that you can make some very powerful subclasses. WARNING: Unfortunately, aside from the Super listing in the Properties Window, the nature of the relationship between subclass and superclass is not in any way reflected in the IDE. Looking at a subclass's Code Editor, you are not shown what methods and properties it inherits from the superclass; to find out, you have to look at the superclass (or its documentation). Nor is there any way to learn what are the subclasses of a given class. In short, the REALbasic IDE doesn't make inheritance easy to use. This is one of the worst aspects of the IDE's interface. If a subclass is like its superclass, why make a new class at all? What makes a subclass a different class from its superclass? Basically, it's that you can add to a subclass members that its superclass lacks. A subclass is its superclass and then some. In the case of REALbasic's built-in classes, the ability to add to the subclass is crucial, because you can't modify the built-in classes; the way you take advantage of the functionality of one of REALbasic's built-in classes while customizing its structure and functionality is to make a subclass of it and customize that. For example, recall the arcade game described in Chapter 3. There, we imagined a ScoreBox class, which would have a Score property, and would know how to display its value in a window. Also, the ScoreBox class would have an Increase method, which would increment the value of the Score property. Let's actually implement the ScoreBox class. To do so, we'll take advantage of a built-in Control class called a StaticText, which already has the following useful functionality: it displays in its containing window the value of its own Text property. We can't modify the StaticText class, so we make a subclass of it: we create a new class, name it ScoreBox, and designate StaticText as its Super. Then, in the ScoreBox class's Code Editor, we give it a Score property which is an integer, and an Increase method, which goes like this: Sub increase() self.score = self.score + 1 // increment the score self.showScore // display the score End Sub We have postponed the question of how the ScoreBox will actually display its score in its containing window, by giving that job to an unwritten subroutine. Let's write it. It will be a method handler, as we know. A StaticText always displays its own Text property, so it suffices to set the Text property to the value of the Score property. The Text property is a string, while the Score property is an integer, so we must also convert. Here is ScoreBox's ShowScore method handler: Sub showScore() self.text = str(self.score) End Sub Finally, let's think about what happens when a window containing a ScoreBox instance first opens. That instance's score is autoinitialized to 0, which seems acceptable. But we also want to make sure it is displayed. Among the Events listed in ScoreBox's Code Editor is the Open event. An Open event handler, if we choose to write one, is automatically called by REALbasic when the control is instantiated. That's an appropriate moment to start displaying the score. Here is ScoreBox's Open event handler: self.showScore That's all there is to it! Let's try it out. Drag the ScoreBox listing from the Project Window into a Window Editor. A new control appears in the Window Editor; select it. Looking at the Properties Window, you can see that although this control is named StaticText1 by default, its Super listing says that it is indeed a ScoreBox instance. So StaticText1 should know how to accept the Increase message. Let's see if it does. Drag a PushButton from the Tools Window into the Window Editor; double-click it in the Window Editor to access its Action event handler in the window's Code Editor, and give it this code: staticText1.increase Run the project in the IDE. There's a number in the window! It's zero! And every time you press the button, the number increases! It's simple, almost trivial; yet, for the reasons explained in Chapter 3, it's tremendously powerful. The score is now maintained, appropriately, by the object that primarily operates on it; other objects can call a ScoreBox's Increase method without worrying about what this does or what the score is; the ScoreBox, for its part, doesn't care who is calling it; and a project can contain multiple ScoreBox instances, each maintaining a separate score, yet all behaving identically. We've seen that the code for the ScoreBox class's behavior doesn't live in the window's Code Editor; it lives in the ScoreBox class's own Code Editor, which you access by double-clicking the ScoreBox class's listing in the Project Window. All ScoreBox instances will look to this code for their behavior, and if you change this code, all ScoreBox instances will henceforward display the new behavior. Let's prove to ourselves that this is true. (If the project is running in the IDE, you'll have to kill it first, so that you can modify the project.) Start by dragging the ScoreBox listing from the Project Window into the Window Editor again. Now the window has two ScoreBox instances, StaticText1 and StaticText2. Change the PushButton's Action event handler to read: staticText1.increase staticText2.increase Run the project and push the button repeatedly; both ScoreBox instances behave identically (and they increment together, since they both started at zero and both receive the Increase message when we push the button; I remind you, however, that they maintain independent scores). Now, in the ScoreBox class's Increase method, change this line: self.score = self.score + 1 // increment the score to this: self.score = self.score + 2 // increment the score Immediately run the project again, and push the button. Sure enough, now the number increases by two every time, in both ScoreBox instances. Notice that you didn't have to do anything horrible and clumsy like delete the ScoreBox control instances from the window and replace them with new ones; an instance takes on the altered class behavior immediately. Behind the simplicity of this example lurks the power of inheritance. The ScoreBox instance can receive the Score message because we gave the ScoreBox class a Score property; but it can receive the Text message because the ScoreBox class's superclass, StaticText, has a Text property. The ScoreBox instance can receive the Increase message because we gave the ScoreBox class an Increase method handler; but it has an Open event handler, which is called automatically by REALbasic, because that's how REALbasic has defined its superclass, StaticText. Most impressive of all, we have created a working interface element without knowing anything about how to drive the Macintosh Toolbox, simply by subclassing a built-in class that does know how. Casting The existence of subclasses and inheritance complicates the question of what class a given instance is an instance of. Is a ScoreBox instance a ScoreBox, or is it a StaticText? This depends on what the meaning of "is" is.[2] At bottom, a ScoreBox instance is clearly a ScoreBox; ScoreBox is its final class.[3] Still, since a ScoreBox is a kind of StaticText (for that is what subclassing means), a ScoreBox instance is a StaticText instance as well; an instance of a subclass is also an instance of the superclass. And you can carry this on up the hierarchy of classes. StaticText is a subclass of RectControl, and RectControl is a subclass of Control. So a ScoreBox instance is also a RectControl instance and a Control instance. This fact must often be taken into account when manipulating instances and references to instances. That's what this section is about. We begin with this important fact: A subclass instance is acceptable in a context where an instance of the superclass is expected. We've already seen that this is true; a ScoreBox can talk about self.textbecause Self, a ScoreBox instance, is acceptable as a recipient of the Text message, where a StaticText instance is expected. Similarly, if we have a ScoreBox called TheScoreBox in our window, REALbasic will not complain if we say this: dim s as staticText s = theScoreBox The reason this works is that, by the terms of its declaration, assignment to sis legal if the value being assigned is a StaticText; and TheScoreBox is a StaticText, by virtue of the fact that it is a ScoreBox (because ScoreBox is a subclass of StaticText). But the reverse is not true; and for the selfsame reason. If our window contains a control called TheStaticText whose final class is StaticText, you cannot say this: dim s as scoreBox s = theStaticText // error: Type mismatch Similarly, you cannot send the Increase message to an instance whose final class is StaticText; a StaticText knows nothing of the Increase message. The reason is simple: a StaticText is not a ScoreBox. The criterion that REALbasic uses to determine what you can do with a reference is the declared type of the reference--not the class of the instance to which the reference really points. Consider the following: dim s as staticText s = theScoreBox // fine theScoreBox.increase // fine s.increase // error: Unknown identifier Even though sis pointing to a ScoreBox instance, TheScoreBox, and even though it is fine to send the Increase message to TheScoreBox under its own name ( theScoreBox.increase), yet the reference shas been declared as a StaticText, so it cannot accept messages that a StaticText doesn't know about. In general terms, we're allowed to treat a subclass instance as a superclass instance, but a reference declared as the superclass cannot be treated as if it refers to an instance of the subclass, even if it does refer to an instance of the subclass. (In technical terms, REALbasic does its type-checking at compile time.) This situation is illustrated in Figure 4-1. In that case, what's the good of being allowed to treat a subclass instance as a superclass instance? Why would I ever say: dim s as staticText s = theScoreBox if I cannot then treat sas a ScoreBox? Part of the answer is that it can be useful and powerful to have a reference accept a value of more than one datatype indifferently. For instance, we might have an array that we intend to populate with StaticTexts and ScoreBoxes. We cannot declare the array as of type ScoreBox, because then no element can accept a StaticText value. But if we declare the array as of type StaticText, then an element can be a StaticText or a ScoreBox (or any other subclass of StaticText). This raises the question of how we would know, once our array is populated, what class any given element actually is. The datatype of the reference we know already, since we declared it; but is there any way to tear this mask away and see the class of the instance? Unfortunately, REALbasic provides no way to ask an instance pointblank what its final class is. But it does provide a way to ask an instance whether it is (in any sense) an instance of a particular class. This is done by using the IsAoperator, which takes a reference and a class name, and returns a boolean reporting whether the instance pointed to by the reference is in fact an instance of the proposed class. Some examples will show both the syntax and the result: dim theTruth as boolean dim s as staticText s = theScoreBox theTruth = (theScoreBox isA scoreBox) // true theTruth = (theScoreBox isA staticText) // true too theTruth = (s isA staticText) // true theTruth = (s isA scoreBox) // true too theTruth = (s isA theScoreBox) // legal and true! To sum up: The IsAoperator ignores the declared type of the reference; it looks only at the actual instance pointed to, and returns trueif the proposed class is that instance's final class or one of that class's superclasses. In the case of controls only, IsAalso returns trueif the proposed class is the control's global name, even though that name is not what one normally thinks of as a class name (this is essentially the same syntax used with Newto clone a control, as described in Chapter 3). So, if a reference declared as the superclass is pointing to an instance of the subclass, we can learn that it is the subclass. What's more, we can treat it as the subclass--provided we explicitly inform REALbasic that that's how we want it treated. This is done through the use of casting, which is essentially a means of disguising one class as another so that REALbasic will see a message as appropriate. A reference to an instance is cast as a different class by using the class name as a function operating on the reference: dim s as staticText s = theScoreBox s.increase // error, remember? scoreBox(s).increase // fine! As we know, the third line doesn't work, because REALbasic refuses to look behind the mask of the reference's declared datatype; shas been declared as a StaticText, and REALbasic's attitude is, "That's all I know, and all I need to know." But we are more adventurous than REALbasic is, so we do look behind the mask! We use casting to tell REALbasic what we saw. By casting, we compel REALbasic to disbelieve the reference's declared datatype, and to treat the value of sas a ScoreBox, willy-nilly; since a ScoreBox can receive the Increase message, REALbasic doesn't complain. What we're doing here is nullifying REALbasic's strict compile-time type-checking; we're saying, "I know I declared sas a StaticText, but trust me, when the code actually runs, swill be pointing to a ScoreBox." In particular, we are casting down: we reveal to REALbasic that a reference declared as a superclass is actually pointing further down the hierarchy, at a subclass. But be warned: REALbasic does trust you when you cast; it obviously has no way to check in advance whether the instance really will be what you claim it will, so it simply throws all caution to the winds. So don't betray REALbasic's trust! You can get away with saying this: dim s as staticText dim c as string s = theScoreBox c = pushButton(s).caption // liar! error: IllegalCastException but your application will terminate prematurely when the last line actually executes and it turns out that sis not a PushButton after all. A common way to prevent this sort of mistake is to wrap casts in an IsAtest: dim s as staticText dim c as string s = theScoreBox if s isA pushButton then // nothing can go wrong now c = pushButton(s).caption end Notice that casting doesn't perform any coercion; in other words, it does not transform an instance of one datatype into an instance of another datatype. In fact, it has no effect whatever upon any instances. Casting doesn't turn a sow's ear into a silk purse; it just makes a sow's ear look like a silk purse, and even then this works only because the sow's ear really is a silk purse to start with. Overriding In the example from "New Classes and Inheritance," earlier in this chapter, we defined ScoreBox, a subclass of StaticText. Every message that ScoreBox can receive is defined either in itself or in its superclass (or in the superclass of that, and so on). The superclass defines some members; the subclass inherits these and adds some more. But in real life we may wish a subclass to replace the functionality of a member defined in its superclass. This is called overriding. To do this, one simply defines in the subclass a member with the same name as a message that the superclass can receive. Now the message is defined in both the subclass and the superclass. In making an example, there's no point using ScoreBox to override any of StaticText's members. This is basically the same issue discussed in See Resolution of Names: where a built-in class is concerned, to override a property would hide the built-in functionality associated with that property, which is undesirable, and to override a method is forbidden. So we need a superclass and a subclass both of which we ourselves have defined. Let's subclass ScoreBox to make a DoubleScoreBox class. This will be a class that differs from ScoreBox only in that when it receives the Increase message, it increments its score by 2. Create the DoubleScoreBox class and make it a subclass of ScoreBox. Clear the Window Editor of controls. Drag in a ScoreBox, select it, and give it a more meaningful name--TheScoreBox. Drag in a DoubleScoreBox and name it TheDoubleScoreBox. Drag in a PushButton and have its Action event handler go like this: theScoreBox.increase theDoubleScoreBox.increase Now it's time to define DoubleScoreBox's Increase method, to override ScoreBox's Increase method. Open DoubleScoreBox's Code Editor and define a method handler Increase, with this code: Sub increase() self.score = self.score + 2 // increment the score self.showScore // display the score End Sub Run the project and press the button repeatedly. It works. Let's talk about what's happening. It's true that our instance TheDoubleScoreBox is a ScoreBox, which defines an Increase method, to increment by 1; but its final class is DoubleScoreBox, which also defines an Increase method, to increment by 2, and that's the one that gets called when we send the Increase message to TheDoubleScoreBox. On the other hand, when TheDoubleScoreBox's Increase method calls self.showScore, DoubleScoreBox has no ShowScore method; but the message is acceptable, because a DoubleScoreBox is also a ScoreBox, and a ScoreBox does define a ShowScore method, which is what gets called. We may summarize by saying that message names are resolved upward through the instance's class and its superclasses: first we look in the final class of the instance to see if it accepts the message; only if not do we look at its superclass, and so on. This is illustrated in Figure 4-2. Our use of the phrase "the instance" may seem surprising, because in the previous section the important thing was the declared datatype of the reference. To understand what's happening, separate the message-sending process into two distinct stages. First, REALbasic decides whether the reference can be sent the specified message at all; this is done by resolving the message name upward from the reference's declared datatype. If it can, then REALbasic decides which class should actually receive the message; this is done by resolving the message name upward from the instance's actual final class. For example: dim s as scorebox s = theScoreBox s.increase s = theDoubleScoreBox s.increase This increases TheScoreBox's score by 1and TheDoubleScoreBox's score by 2, even though the reference both times is s, which is declared as a ScoreBox. Do you see why? The fact that the reference sis declared as a ScoreBox means that it can accept the Increase message, so this code is legal--and that's all it means. Now we come to the question of what this code will actually do; and that depends upon the instance that spoints to. When spoints to an instance of the ScoreBox class, it is ScoreBox's Increase method that is called. When spoints to an instance of the DoubleScoreBox class, it is DoubleScoreBox's Increase method that is called. The principle here is that all programmer-defined methods are virtual methods. That's just technical talk for the very thing we've just been saying: the class of a reference may make it a legal recipient of a message, because that class can handle that message; but that fact is no guide as to what class will actually handle the message, because the instance may be of a class that overrides it. Let's look at the matter from a different perspective. Instead of a linear architecture, imagine a situation where one class has two subclasses. We'll take advantage of overriding a virtual method so that it becomes, in effect, a decision-making mechanism. Suppose we're writing a Tic-Tac-Toe game. Every game piece is either an X or an O. But all game pieces have some common behavior. For example, a game piece should know how to draw itself in a given square on the board. Clearly we're going to have a GamePiece class and a Square class. We envision a routine that takes a piece and a square and tells that piece to draw itself into that square, like this: Sub drawPieceIntoSquare(p as gamePiece, s as square) p.drawYourselfInto s End Sub The exact behavior of pis going to depend on whether it is an X or an O. What will it mean, to be an X or an O? Let's make it mean that there is an X class and an O class! Obviously, these are both subclasses of GamePiece. Presume that pis either an X instance or an O instance. If the X class has a DrawYourselfInto method and the O class has a DrawYourselfInto method, then when we send pthe DrawYourselfInto message, the right thing will happen automatically. If pis an X instance, X's DrawYourselfInto will execute; if pis an O instance, O's DrawYourselfInto will execute. What handler will execute depends upon what class this particular instance is during this particular call to our routine. So the class structure becomes a decision-making mechanism! The fact that we can send a message to one higher class as a way of choosing among several lower classes is called polymorphism. However, we've left out a small piece of the puzzle. We define GamePiece. We define X as a subclass of GamePiece, and we give it a DrawYourselfInto method. We define Y as a subclass of GamePiece, and we give it a DrawYourselfInto method. We try to execute the previous code, and we get an error. What's happened? The answer is that we forgot the first part of the message-sending process. We know that the DrawYourselfInto message will get routed to the right class, the class of the actual instance that ppoints to. But we also have to make it legal to send the DrawYourselfInto message to pin the first place! This means that GamePiece must also have a DrawYourselfInto method handler. In other words, in order to take advantage of a virtual method, there has to be a virtual method. You can't override something that isn't there in the first place. Very well; but what should GamePiece's DrawYourselfInto handler do? It's perfectly possible that it will do nothing at all; it might be completely empty! It could be that the whole of X's drawing functionality is contained in its DrawYourselfInto handler, and the whole of O's drawing functionality is contained in its DrawYourselfInto handler. And every GamePiece is going to be an X or an O. Our routine is never going to receive a value for pwhose final class is GamePiece; it will always be an X or an O. So GamePiece's DrawYourselfInto handler does nothing, because it will never be called. Is this silly? Not at all. In fact, it's such a common technique that it has a name: GamePiece's DrawYourselfInto handler is said to be abstract. It exists only so that X and O can override it, so that a reference declared as GamePiece can be sent the DrawYourselfInto message, taking advantage of polymorphism. In fact, we can go further. It may be that no instance whose final class is GamePiece will ever be generated throughout the entirety of our program. Not just GamePiece's DrawYourselfInto handler, but the whole GamePiece class, might exist only so that X and O can subclass it and take advantage of polymorphism. Again, this is quite common, and GamePiece is then said to be an abstract class. (See the for a Tic-Tac-Toe game that employs this sort of architecture.) Class Interfaces A subclass and its superclass represent one way in which two classes can relate. Both are classes, and the subclass inherits the fullness of the superclass's power, which may be considerable. Compared to this, a class interface seems evanescent, almost intangible. A class interface is abstract, and its methods, if any, are abstract. A class interface has no code; it has no properties; you can't make an instance whose final class is a class interface! A class interface is barely a class at all: it's a mere appearance of a class, a skeleton of a class. Yet it adds great power and flexibility to REALbasic's model of object orientation, because it breaks out of the structure of the subclass-superclass hierarchy. To make a class interface, you choose. A class cannot have a class interface as its Super; a class does not inherit from a class interface. Instead, to form a relationship between a class and a class interface, you go into the class's Properties Window and edit its Interfaces listing. Here, you simply type the name of the class interface. If you like, the Interfaces listing can include the name of more than one class interface; separate the names with a comma. A class is said to implement the class interfaces you list among the Interfaces in its Properties Window. What does it mean for a class to implement a class interface? We already know part of the answer. A class can have only one Super; but a class can implement more than one class interface, and this is entirely independent of what Super it may have. So implementing a class interface is a separate relationship from inheritance, and it is a relationship that permits a kind of multiple parentage. But what kind of parentage? The answer lies in what the parent (the class interface) bequeaths to the child (any class that implements it). It bequeaths two things: - The name - As far as names are concerned, a class interface is just like a superclass. You know that a subclass instance is acceptable where an instance of the superclass is expected. In the same way, an instance of a class that implements a class interface passes the IsAtest for the class interface, and is acceptable where an instance of the class interface is expected (which is good, because you can't otherwise make an instance of the class interface). - The method declarations - A class interface bequeaths to a class that implements it the requirement that that class should declare the same methods that it itself declares. A method declaration in a class interface is thus effectively a rule that a class must obey if it wants the privilege of inheriting the name; it's an abstract virtual method with the power to force an implementing class to override it. Actually declaring the method in the implementing class's Code Editor is up to you (though no law says you have to give it any code). What are class interfaces for? It was pointed out in "Casting," earlier in this chapter, that the datatype of a reference is like a mask; regardless of the true class of the instance pointed to, the reference can accept only messages appropriate to its datatype. Class interfaces allow a reference to wear an arbitrary mask. You will typically use the power of this arbitrariness to pass instances around in ways that would normally be impossible, because you're cutting across the hierarchy of subclasses and superclasses. For example, in earlier sections of this chapter we implemented ScoreBox as a subclass of StaticText. But there are other built-in classes that can display text in a window, and these would be equally good candidates as a way of implementing a scorebox type of functionality. Consider an EditField; it automatically displays its Text property, just as a StaticText does. Let's implement a class just like ScoreBox, but based on EditField. Make a new class, call it EFScoreBox, make it an EditField subclass, give it a Score property which is an integer, and give it code exactly like the ScoreBox's code: the same Open event handler, the same Increase method handler, the same ShowScore method handler. Clear the Window Editor and drag one ScoreBox and one EFScoreBox into it; these will be called StaticText1 and EditField1 by default. Give the window a PushButton and give its Action event handler this code: staticText1.increase editField1.increase Run the project and push the button several times. It works; both the StaticText and the EditField are incrementing and displaying their score. Now let's think a moment. From the point of view of the code in the PushButton, there's something silly about this situation. A ScoreBox can accept the Increase message; an EFScoreBox can accept the Increase message. They should, in fact, be the same kind of entity: "things I can send the Increase message to." This sameness should be expressed as a class. We could call it Increaser, and we imagine being able to say something like this: dim i as increaser // ... set ito point to either a ScoreBox or an EFScoreBox ... i.increase Under the rules of the class hierarchy, that's impossible. For where might Increaser fit into the hierarchy? Both StaticText and EditField have RectControl as their superclass, and there's nothing we can do about that; those are all built-in classes and we can't access them. We could make Increaser a subclass of StaticText and make ScoreBox a subclass of Increaser; but then EFScoreBox can't be a subclass of Increaser. This impasse is schematized in Figure 4-3. The solution is to implement Increaser as a class interface, because class interfaces can cut across the hierarchy of subclasses and superclasses. Let's do it. Make a new class interface and call it Increaser. Go into Increaser's Code Editor and declare a method called Increase. Now select ScoreBox in the Project Window and type Increaser into its Interfaces listing in the Properties Window; do the same for EFScoreBox. Now make the PushButton's Action event handler go like this, and run the project: dim i as increaser i = statictext1 i.increase i = editfield1 i.increase It works: we've subsumed both our ScoreBox and our EFScoreBox under a single class umbrella. Of course, we're not doing anything very powerful with this capability; but we could. For example, let's turn our PushButton into a broadcaster. A broadcaster is an object that sends out a message without caring who receives it: "Whoever has ears to hear, let him hear." As a simplified implementation, we'll make it the case that our PushButton broadcasts to any Increasers in the neighborhood (that is, in the same window as itself). Here is the PushButton's Action event handler now: dim i as increaser dim c as integer for c = 0 to self.controlCount-1 if self.control(c) isA increaser then increaser(self.control(c)).increase end That's quite a remarkable piece of code. We've done the impossible: we've united two utterly distinct classes under a single class heading, and we've used the power of polymorphism to treat them indifferently under that heading, sending instances of each of them the Increase message without knowing or caring what they really are.[4] I should pause to describe the more usual implementation of a broadcaster, which is roughly as follows. Start with two class interfaces: let's call them Broadcaster and Recipient. Each defines a communication method; let's say that Broadcaster defines TellRecipient, and Recipient defines HearBroadcaster. The idea is that when the Broadcaster wants to broadcast, its TellRecipient method sends the HearBroadcaster message to a Recipient. There is also some implication that a class implementing Broadcaster has a property whose datatype is Recipient, or perhaps more than one such property, or perhaps a property that is an array of Recipients (I call this an implication because a class interface has no way to require or express it, but the presence of the TellRecipient method helps remind us); this is how it knows who its available recipients are. It then remains to point these Recipient properties at actual Recipient instances, thus hooking together the particular Broadcaster and its particular Recipients; this must be done in code, and when and how to do it is up to the programmer. The power of this architecture is well illustrated by an example I've constructed, too elaborate to describe here, but which you can download from my web site, where either a Scrollbar or a Slider (indifferently) can broadcast to either a StaticText or a ProgressBar (indifferently). Let's now turn to an example that views class interfaces from a different perspective. Imagine a class that implements a data structure, and provides some methods for manipulating this data structure. There might be several such classes, unrelated and implementing completely different data structures. It ought to be possible to pass an instance of any of these classes to a subroutine, which can then use the methods to manipulate the data structure, without knowing anything about what the nature of the data structure actually is. Class interfaces make this possible. A powerful example is a Sort routine (a routine that arranges data in order from smallest to largest). If a routine knows how to sort, it knows how to sort anything; it doesn't matter what the type of data is, or how the data are stored. The routine needs just three things: - It must be possible to refer to the data by index number, and the routine must be told explicitly the lower and upper bounds on this index number. - It must be possible to learn, given two index numbers, how the corresponding data compare: is one item smaller than, larger than, or identical to the other? - It must be possible to command that the data for two index numbers be swapped. The first requirement can be met by the way our routine is called. The other two requirements can be met through a class interface. Suppose we have a class interface, Sortable, which declares a method Compare and a method Swap. Now our routine can be handed as its parameters a lower bound index, an upper bound index, and an instance of the Sortable class. Our routine is secure in the knowledge that it can call this instance's Compare and Swap handlers; and that's all it needs to know! In reality, the Sortable can be an array wrapper, a delimited string wrapper, a ListBox subclass, and so forth; the data to be sorted can be integers, strings, dates, colors, anything at all--if you can decide on a way to compare them, our routine can sort them.[5] Events and New Events Events are the basis of a REALbasic application's ability to interact with the user and to behave in the live, responsive manner that befits a GUI-based application. An event is a message sent only by REALbasic, not by the programmer's code. It is triggered because some predefined occurrence has come to pass--typically some user action, or some action on the part of your code that resembles a user action. It is sent to an instance of a built-in class (or a subclass of such a class) that is predefined to be sent this type of event. When an instance receives an event, its event handler for that event is called. If you, the programmer, want that instance to respond to a predefined occurrence, you write code in the corresponding event handler. For example, when the user clicks the mouse on an instance of a PushButton (or a PushButton subclass), the instance is sent an Action event. To say what this instance should do when the user clicks the mouse on it, you write code in its Action event handler. If you don't, the button won't respond to being clicked. Although the programmer writes the event handler code, the event handlers themselves, and the events that call them, belong entirely to REALbasic. The programmer cannot add or remove an event handler. Nor can the programmer create an event; for example, you might wish that REALbasic should send an event to a certain instance every time the user changes the computer's speaker volume, or blows his nose, but that isn't going to happen. You simply have to hope and trust that REALbasic comes equipped with a set of events sufficient to allow your program to implement the kind of responsive functionality you desire for it. For the most part, it probably does; and where it doesn't, you'll have to understand and accept what it cannot do, and implement a workaround or modify your desires accordingly. Knowing what the predefined occurrences are, and in what order the corresponding events will be sent to what instances, is a crucial part of learning to program effectively with REALbasic, so naturally this book provides lots of details. An overview appears in the section called , and then the discussion of each built-in class in and (along with Chapter 6 on menus) provides a complete list of the class's events and the occurrences that trigger them. Where is the event handler where the programmer is to write code? The situation was simple before we knew about subclasses: drag a PushButton from the Tools Window into a Window Editor, and double-click it to see the PushButton's event handlers listed in the Controls section of the window's Code Editor. Now things have become confusing. We make a subclass of StaticText, called ScoreBox; open ScoreBox's Code Editor and its event handlers are there. Drag a ScoreBox from the Project Window into the Window Editor and double-click it; we're back among the event handlers in the window's Code Editor. Does the event handler code for a ScoreBox go in the class or in the window? Now bring ScoreBox's subclass, DoubleScoreBox, into the story. DoubleScoreBox's Code Editor has event handlers too! Should code for a DoubleScoreBox event handler go into ScoreBox, DoubleScoreBox, or the window? How do events relate to the class hierarchy? The answer is simple, once you know how an event travels as it is triggered and sent. For purposes of events, pretend that an instance in the window is part of the class hierarchy--in particular, that it's at the bottom of the hierarchy. So, for example, when there's a DoubleScoreBox instance called TheDoubleScoreBox in a window, the hierarchy from top to bottom runs Control, RectControl, StaticText, ScoreBox, DoubleScoreBox, TheDoubleScoreBox. An event is directed ultimately at an instance. But since it is the instance's built-in superclass that is responsible for the fact that the event is being sent to this instance at all, it is the superclass that receives the event first; the event then travels down the class hierarchy toward the instance at which it is actually directed, looking for a class that handles the event. At that point, the process comes to an end. By "handles the event," I mean, "contains an event handler bearing the event's name, which has code in it." In a Code Editor, REALbasic signals that an event handler handles an event by making its browser listing bold. The implication is that event handlers cannot be overridden. If ScoreBox contains code in its Open event handler, the Open event handler in DoubleScoreBox will never be executed, nor will the Open event handler belonging to a DoubleScoreBox instance in a window, because the Open event travels down the class hierarchy, finds that ScoreBox handles it, and stops. In fact, the IDE enforces this rule by physically preventing you from overriding an event handler. If you put code into any class's event handler, REALbasic actually deletes the corresponding event handler from the Code Editors of its subclasses and superclasses. So, because you wrote code in the Open event handler in the ScoreBox class, you'll find there is no Open event handler in the DoubleScoreBox class or in any ScoreBox or DoubleScoreBox instance in a window! Just the other way, if you write code in the Close event handler of any DoubleScoreBox instance in a window, the Close event handlers in DoubleScoreBox and ScoreBox will vanish. This means you have to be a little careful about the order in which you put code into event handlers! In the example just given, if we changed our minds and decided we wanted ScoreBox to handle the Close event, we would first have to remove the code from the DoubleScoreBox instance's Close event handler, in order to get ScoreBox even to have a Close event handler. This can get very painful when you've got dozens of instances in several different windows, and have to hunt through them all to find the one with code that's preventing you from writing an event handler in a class. The fact that event handlers can't be overridden seems unfair, because manifestly you might need instances of different classes, or individual controls derived from the same class, to be able to respond differently to an event. For example, both a ScoreBox and a DoubleScoreBox presently permit their Score to be autoinitialized to 0. But what if we wanted a ScoreBox's Score initialized to 1and a DoubleScoreBox's Score initialized to 2? Clearly the place to make this happen is in the Open event handler, before the score is first displayed. But only ScoreBox even has an Open event handler. We could make ScoreBox's Open event handler go like this: self.score = 1 self.showScore But what will we do about DoubleScoreBox? The best approach, I think, wherever it can be used, is to take advantage of virtual methods. Instead of initializing the Score property directly in an event handler, have the event handler call a method that initializes it. Methods are virtual, so the problem is solved. ScoreBox's Open event handler would then run as follows: self.initScore self.showScore It is now just a matter of writing InitScore methods in ScoreBox and DoubleScoreBox; obviously, ScoreBox's InitScore method says: self.score = 1 and DoubleScoreBox's InitScore method says: self.score = 2 But there are situations where this approach isn't so viable. What if we want one particular instance of DoubleScoreBox in one particular window to initialize its Score to 0? We can't put an InitScore method in the instance in the window, because instances in windows don't have method handlers. We could create a special class just for this instance, but that seems extreme. To help us out, REALbasic provides a second way of dealing with the matter--the New Event mechanism. The New Event mechanism enables code in an event handler to continue handing an event down the hierarchy toward the instance. You define a New Event in a class; its subclasses then inherit the right to handle that event. The only trick is that the class must explicitly trigger the New Event, so that it is sent on down the hierarchy. So, let's say that in ScoreBox we anticipate the possibility that some ScoreBox subclass or instance may wish to override our initialization of the Score property. To make this possible, we'll propagate a New Event down the hierarchy. In ScoreBox, choose Edit: me.score = 0 (You must use Me, because that's the DoubleScoreBox; Self is the window, remember?) However, this code has no effect, because the InitScoreOverride event is never triggered. We must take care of that. Go back to ScoreBox's Open event handler, and make it go like this: self.initScore initScoreOverride self.showScore The strategy is that we initialize the Score in the normal way, but then we send the InitScoreOverride event down the hierarchy just in case anybody wants to handle it. If nobody does, that's fine. In this particular case, one instance of DoubleScoreBox does handle it, changing the Score to suit its own taste before it is displayed for the first time. The only hard part, which wasn't particularly hard in this case, was deciding just when to trigger the InitScoreOverride event. You have to consider the order in which things are going to happen. Obviously it would have been wrong to do things in this order: initScoreOverride self.initScore self.showScore This would allow subclasses and instances to set the Score, but then we return it to its default setting, making InitScoreOverride useless. The New Event mechanism was of great importance in Version 1 of REALbasic, when virtual methods didn't exist; but now virtual methods do exist, and New Events have been relegated almost to the status of an unused appendix. Nevertheless they do have their uses. In particular, I still like to use New Events as my chief way of giving a control class a method whose implementation is to be unique to each particular instance in a window. That's because the alternative--creating a subclass for each instance, so that virtual methods can be used--is usually too painful to contemplate. Some final points about New Events. You will observe that in the ScoreBox Code Editor, an InitScoreOverride entry appears in the New Events category, but its code is blank and cannot be edited; its purpose is merely to register the fact that this is the class where this New Event is defined. A New Event can be triggered only from an event handler or method handler in the class in which it is defined (where it appears under New Events); no other class knows of its existence. Nor is it up to you where a New Event call is directed; it can only go on down the hierarchy toward the instance that was the target of the original event that started this chain. And so a New Event's name cannot be attached, with dot notation, to a reference (thus in our example we said initScoreOverride, not self.initScoreOverride). Finally, a New Event can have parameters, and can even return a value, like any subroutine. (But don't try passing a ByRefparameter to a New Event; there's a bug where a change in a ByRefparameter value in a New Event won't percolate back to the caller.)[6] The Class Hierarchy Some of REALbasic's built-in classes are subclasses of other built-in classes; indeed, with one exception, they all are, since a class that is a subclass of no class is considered (theoretically at least) a subclass of the Object class. They thus constitute a class hierarchy; to outline this hierarchy is the purpose of this section. Here are some notes about the outline. Some of the built-in classes cannot be subclassed by the programmer. Some of the built-in classes are abstract, meaning they cannot be instantiated but their subclasses can be. Some of the built-in classes are both: they cannot be instantiated and they cannot be subclassed by the programmer; instead, REALbasic has already provided subclasses of them, which you can instantiate (and possibly subclass). Window classes are a special case: programmer-defined subclasses of Window cannot be subclassed. Built-in instances, such as the Keyboard and System objects, are not listed. Object (abstract) Application (abstract; maximum one subclass per project) Window (abstract) Applet (abstract) Dialog (abstract) RuntimeException NilObjectException OutOfBoundsException StackOverflowException TypeMismatchException IllegalCastException Collection Date Screen (cannot be subclassed) Sound (cannot be subclassed) MouseCursor (cannot be subclassed) Clipboard DragItem (cannot be subclassed) Picture (cannot be subclassed) Movie (cannot be subclassed) EditableMovie QTEffect QTEffectSequence QTTrack QTVideoTrack QTUserData QTGraphicsExporter Graphics (cannot be subclassed) PrinterSetup StyledTextPrinter (cannot be subclassed) FolderItem TextInputStream (cannot be subclassed) TextOutputStream (cannot be subclassed) BinaryStream (cannot be subclassed) ResourceFork (cannot be subclassed) MemoryBlock (cannot be subclassed) AppleEvent (cannot be subclassed) AppleEventTemplate (cannot be subclassed) AppleEventTarget (cannot be subclassed) AppleEventDescList AppleEventObjectSpecifier (cannot be subclassed) AppleEventRecord Database (cannot be subclassed) DatabaseCursor (cannot be subclassed) DatabaseCursorField (cannot be subclassed) DatabaseRecord DatabaseQuery (cannot be subclassed) Thread (abstract) CriticalSection Semaphore Sprite Serial Socket Timer TextConverter (cannot be subclassed) TextEncoding (cannot be subclassed) Shell MenuItem QuitMenuItem (cannot be subclassed) Control (abstract; cannot be subclassed) ContextualMenu SpriteSurface NotePlayer Line RectControl (abstract; cannot be subclassed) Canvas ImageWell MoviePlayer Rectangle RoundRectangle Oval Separator PushButton CheckBox RadioButton BevelButton PopupMenu Placard PopupArrow LittleArrows DisclosureTriangle GroupBox StaticText EditField ListBox Scrollbar Slider ProgressBar ChasingArrows TabPanel The fact that the FolderItem class can be subclassed is probably a bug. The problem is that you can never assign an instance of the subclass any FolderItem functionality, such as the ability to point at a file on disk. The reason is that all FolderItem instances that have such functionality are generated by built-in functions, and these functions yield FolderItems, and a class is not its subclass. I regard the fact that the QuitMenuItem class cannot be subclassed as a bug. In REALbasic Version 1, it could be. ScrollableCursor, SoundConverter, and SoundFormat are listed in the Super popup, but these classes don't exist, so this is a bug. Global Members A thing is global if it is visible at all times to all code. Many of REALbasic's things are not global. For example, suppose a button's Action event handler generates a new Date instance: dim d as date d = new date This instance is not global; in fact, it's just the opposite--it's local. That's because the only reference to it is d, and dis a variable that exists only inside the button's Action event handler. The instance can't be seen from anywhere else. Under REALbasic's object model, where everything depends upon references to instances that come and go, it makes a certain sense to speak of degrees of globalness. A property is typically more global than a variable, insofar as the object to which it belongs is more persistent and more readily referred to; a window property is very global, because a window persists until explicitly killed, and because it is generally possible to obtain a reference to a window without maintaining a name for it. Still, windows can be killed, so even a window property is not truly global. Much of the discussion in See Referring to Instances was devoted to this matter, and it will be raised again under . On the other hand, REALbasic does come with quite a number of things that are truly global. It has many built-in methods belonging to no class; as pointed out in See Messages and Dot Notation, such methods can be regarded as belonging to a global object. And REALbasic has some built-in instances, such as the System object and the Keyboard object, that are global.[7] This section is about ways in which the programmer can create global methods and properties (and, by implication, global instances, since a property can be a reference to an instance). This isn't a terribly common thing to do, since if you make something global, you're not attaching it to any object, which means you've violated REALbasic's philosophy of object orientation. As a rule of thumb, if you think something needs to be global, think again; there is probably some object to which it properly belongs. But rules of thumb are made to be broken, and it is easy to imagine situations where something should indeed be global. For example, in Chapter 2 we created a function, Average, which returns the integer average of two integers. Let's imagine that this function is extremely useful, and that we find ourselves needing to call it quite a bit. The question is where to store it in our project. It pertains to no particular object or class; it is a mathematical function, intended in effect to supplement REALbasic's own built-in methods--we are doing in code something we think REALbasic should do intrinsically at the global level. Thus, Average could appropriately be global. Another example arises when information about an object needs to be maintained in the absence of that object. For example, let's say the user closes a window, and we want it to be the case that the next time a window of that class is instantiated, it should have the same position and dimensions as the window just closed. Where should the window's current position and dimensions be remembered? Not in the window; it's being killed, so the information will be lost. Nor in any object that might die. The information must be maintained no matter what, even if there are no windows or other instances present. Therefore it should be global. REALbasic provides you with two locations for global storage: modules and the Application subclass. Modules A module is a global container of methods and properties; these methods and properties belong to no class. To create a module, choose File. To access a module member from code, you just use the member's name, with no reference to the module. For example, a method named MyGlobalMethod, which is a procedure taking no parameters, stored in a module named MyModule, would be called by saying: myGlobalMethod // no mention of MyModule Since module member names are global, a question arises of how they are resolved with respect to the namespace as a whole. The main part of the answer was given in See Resolution of Names: the module namespace is consulted last, after REALbasic's built-in functions and after implicitly trying to supply Self. Care should be exercised not to overshadow a module member name with a local name accidentally. For example, if a module contains a property named Prop and you use the name propin a subroutine where prophas been declared with a Dimstatement as a local variable, it is the local variable that will be referred to. If a module contains a property named Prop and you use the name propin a class containing a Prop property (or a subclass of such a class), it is self.propthat will be referred to. In such situations, there is no way whatever to refer to the Prop in the module. A module method can refer to other members of the same module by dot notation through the Self function; and if Self is omitted, a module method's references to properties and methods with no dot notation will be resolved by looking in the same module before looking in other modules. This gives modules, if not object-orientation, at least some, er, modularity. Nonetheless, giving two different members in two different modules the same name is probably unwise, since in referring to such members from elsewhere you cannot specify which is meant, and you cannot be certain how the reference will be resolved. Modules can also contain global constants.[8] A constant is like a property that is initialized in the IDE; its value can be only a literal string, number, or boolean. Aside from the convenience of not having to initialize its value, a global constant's great advantage is that it can be localized for the different languages and platforms. This is done by pressing the Add button in the Edit Constant dialog. In Figure 4-4, for example, the constant yeshas been localized so as to have the value "Yes"on a U.S. Macintosh but the value "Oui"on a French Macintosh. These settings are consulted at the time you build the application: you specify the desired platform and language in the Build Application dialog. A constant's value can be accessed in code, by its name; for example: msgbox yes Constant values can also be assigned to properties initialized in the IDE, such as the caption of a button. To do this, the value supplied in the Properties Window must begin with #as a sign that it's the name of a constant and not a literal string. For example, if I initialize the Caption property of a PushButton in the IDE as #yes, the built application will display the button's caption in accordance with the value of the yesconstant localized for the platform and language of the build. (To make a string property value start with #, type # twice.) A constant can't have a calculated value. Since unprintable characters can't be expressed in REALbasic except by calculation (see ), this might seem to mean that a constant string can't contain an unprintable character. But pasting an unprintable character into the Edit Constant dialog does work. So a workaround is to use some other application (such as BBEdit) to create the string containing the unprintable character, then copy and paste it into the Edit Constant dialog. A constant value can be used to declare the size of a local array variable, but not an array property (I regard this as a bug). For example, given an integer constant HowMany, you can't declare an array property like this: myArray(howMany) as string The workaround is to declare the array with an arbitrary size (such as -1) and Redim it in code: redim myArray(howMany) The Application Subclass Modules are the only place where global localizable constants can live. Apart from this, the primary advantage of modules is that they can be exported and reused in other projects--like classes except that they don't need to be instantiated. If you don't need either feature, but simply want globally available methods and properties, consider instead subclassing the Application class. The Application subclass is automatically instantiated as your application starts up, and a reference to this instance is always available through the App function. Thus, if MyGlobalMethod lived in the Application subclass, it would be called by saying: app.myGlobalMethod The Application subclass offers several major advantages over a module as a repository of global members: - The Application subclass is sent an Open event, which provides an opportunity for its properties to be initialized. - There is no possibility that access to members of the Application subclass will be cut off by accidental use of identical local names, because all messages to the Application subclass are explicitly directed to app. - Modules defeat the purpose of object-orientation, whereas the Application subclass represents a definite "thing" and has a natural right to contain members. Advanced Class Features In this section we look at some miscellaneous further topics having to do with classes. Beginners, who may find the details presented here somewhat distracting, may wish to skim this section rather quickly at first, returning to it subsequently as examples arise later in the book or in the course of actual programming. What's My Window? Nothing in REALbasic is so important as obtaining a reference to a desired instance, and no instance is so vexing to get a reference to as the window containing a given control. There are three different ways for a control to get a reference to its containing window, depending on the context. Let's start with code inside a control instance's event handlers. I'm talking here about any code that appears in a window's Code Editor, within its Controls category. Such code can obtain a reference to its containing window by way of the Self function (Chapter 3). This is the only code where Self and Me yield different results; Me returns a reference to the instance itself. So a PushButton instance in a window can refer to the window's title as self.title(or simply title). Next, consider code inside the Code Editor of a programmer-defined subclass of the built-in Control class. (ScoreBox, from earlier in this chapter, is an example, because it is a subclass of StaticText, which is a subclass of RectControl, which is a subclass of Control.) Here, Self doesn't refer to the instance's containing window, but to the instance itself. However, the Control class has a Window property that returns a reference to the instance's containing window; the subclass inherits this. So, the ScoreBox class can refer to the window's title as self.window.title(or window.title). Finally, there is code inside the Code Editor of a programmer-defined class that is not a Control subclass. We presume, obviously, that the instance is being used as a control--that is, it is contained by a window. There is only one reliable way for such code to get a reference to the instance's containing window, and that is to use a New Event. The technique requires some preparation. You have dragged the class's listing from the Project Window into a Window Editor to make a control. You double-click the class's listing in the Project Window to access its Code Editor. Create a New Event: let's call it Owner, and let's have it be a function that returns a Window. Finally, in the Window Editor, double-click the instance to access its event handlers within the window's Code Editor. There is now an Owner event handler; give it this code: return self Code within the class can now refer to the instance's containing window as owner(). This device lacks elegance, since we must remember to code an Owner event handler individually for every instance in a window; but this is unavoidable. Both the Control class's Window property and our Owner New Event have a big drawback: they return an instance of the Window class. That's fine if you want to access a feature of the window common to every window, such as its title; but it's not so good if you want to access a feature particular to the window class containing the instance. For example, if a window contains a button called PushButton1, an instance in the window can refer to self.pushButton1, because Self is always the correct window class, and REALbasic knows that this kind of window has a PushButton1. But you can't say self.window.pushButton1or owner.pushButton1, because the Window class contains no controls. You have to cast to the correct window class. For example, if the type of window containing PushButton1 is a Window1, then you'd say window1(self.window).pushButton1. If you don't know what type of window contains the control instance, you will first have to test with IsAto find out. Constructors and Destructors If an instance is neither a window nor contained by a window, then if the programmer has added to its class a method handler whose name is the name of the class, that method will be called automatically when the Newoperator generates the instance. Such a method is called a constructor, and is commonly used to initialize the instance's properties. A constructor can have parameters, and if it does, the syntax of Newwhen instantiating that class must be modified to match: values for the parameters must follow the class name, in parentheses. For example, suppose DayInAugust is to be a Date subclass representing a day in August, 1954, and that it has a method DayInAugust, taking one parameter:[9] Sub dayInAugust(day as integer) self.day = day self.month = 8 self.year = 1954 End Sub That method is DayInAugust's constructor. Now when we generate a DayInAugust instance, we supply the value to initialize its Day property: dim d as dayInAugust d = new dayInAugust(10) // call constructor with parameter msgbox d.longdate // it worked A particularly elegant use of a constructor is as a copy constructor. This means that you hand Newan instance of the class and it returns another instance whose properties are initialized with the same values. This is the same effect achieved by the Clone method suggested on Sub clone(o as myClass), but implemented in reverse, and packaged in a neater syntax. So, for example, if our class is called MyClass: Sub myClass(o as myClass) self.itsProperty = o.itsProperty self.itsOtherProperty = o.itsOtherProperty End Sub Now, given an instance of MyClass, we can generate a new instance that copies it: dim oo as myClass // and o already exists somewhere oo = new myClass(o) It is also possible to make a destructor, which runs when the instance is destroyed. To make a destructor, give the class a method without parameters whose name is the name of the class preceded by ~(tilde). For example, a destructor for DayInAugust would be named ~DayInAugust. Destructors are automatically called sequentially through the class hierarchy, but constructors are not. Suppose B is a subclass of A and that both B and A have both a constructor and a destructor. Then when B is instantiated, B's constructor is called, but A's is not. When B is destroyed, B's destructor is called, and then A's destructor is called. Suppose now that B has neither constructor nor destructor, but A has both. When B is instantiated and destroyed, A's constructor and destructor are called. Thus, the rule is that destructors are called all the way up the class hierarchy; but in the case of constructors, REALbasic looks up the hierarchy until it finds a constructor, calls it, and stops. This does not mean that a superclass constructor can't be called when the subclass constructor is called; it just means that this won't happen automatically. The subclass constructor inherits the superclass constructor as a method and is perfectly free to call to the superclass constructor explicitly, by name. For example, if B wants A's constructor to be called before its own, the first line of B's constructor can say: A Those accustomed to other languages where superclass constructors are automatically called, such as C++, may be surprised at this behavior. It makes a certain sense, though, because it has the advantage of simplicity. Even in C++, after all, a superclass constructor that requires parameters must be called explicitly, so C++ involves a double treatment: sometimes superclass constructors are called automatically, sometimes they aren't. That's just the kind of complexity REALbasic's object model sensibly avoids. Let's go back to the proviso with which this section started: a constructor is called for an instance that is neither a window nor contained in a window. This implies that a constructor is not called for an instance that is a window or contained in a window. This is not as problematic as it sounds. A window or a Control subclass receives an Open event automatically when it is instantiated, so its Open event handler serves the same purpose as a constructor, and that's what you're supposed to use. Similarly, for a window or a Control subclass, use its Close event handler as a destructor. This ensures that you will get instantiation and destruction notification in an orderly manner. Nevertheless, there are two curious asymmetries. First, a destructor works for any instance, including a window, or an instance contained by a window; a constructor does not. Second, there is a category of class for which constructors don't work and that receives no Open event--namely, an instance in a window whose class is not a Control subclass; this is a major hole in REALbasic's functionality. The presence of a constructor in a class can make it impossible for code within that class to cast to that class (because the cast is interpreted as a direct call to the constructor). For example, if a class MyClass has a method MyClass, then it can't cast by saying myclass(mySuperclassInstance). One workaround, admittedly unpleasant, is to ask an instance of a different class to perform the cast. The presence of a constructor that expects parameters disables the "default" constructor with no parameters. For example, if a class MyClass has a method MyClass that expects a parameter, then if you say simply new myclass, you get an error ("New operator has incorrect number of parameters for constructor"). The solution is to add another constructor that takes no parameters, even if this does nothing. This is permissible because of overloading, which is the next topic. Overloading A class can have more than one method with the same name, but taking different types or numbers of parameters. Such methods are said to be overloaded. To be honest, the only thing that's overloaded here is the name; the methods are distinct in REALbasic's mind, because they are readily distinguishable. When you call the method, REALbasic looks at the parameters you supply, and decides on that basis which method you mean. For example, suppose you have two methods in MyClass, as follows: Sub greetMe(s as string) msgbox "Matt: " + s End Sub Sub greetMe(i as integer) dim c as integer for c = 1 to i beep End Sub REALbasic accepts these as distinct methods because their parameter lists are different: one takes a string, the other takes an integer. The following code calls the two methods in succession: dim c as myclass c = new myclass c.greetme("Hi") c.greetme(3) Notice the use of parentheses even though these are procedures taking one parameter. It appears that this is necessary when method overloading is involved. If you get a "Type mismatch" error, lack of parentheses may be the cause. For two parameter lists to be different, either they must contain a different number of parameters, or at least one corresponding pair of parameters must be of distinct types. A class and its subclass are not distinct types! For example, suppose we have these two methods: Sub myMethod(c as scoreBox) // ... End Sub Sub myMethod(c as doubleScoreBox) // ... (DoubleScoreBox is a subclass of ScoreBox) End Sub If you try to call MyMethod with a DoubleScoreBox parameter, REALbasic will complain: myMethod(theDoubleScoreBox) // error: ambiguous reference The problem is that since a DoubleScoreBox is also a ScoreBox, either version of MyMethod could accept this parameter, so the call is ambiguous. The situation is not untenable, however; this works: myMethod(doubleScoreBox(theDoubleScoreBox)) The explicit cast disambiguates the type of the parameter. Constructors can be overloaded, but only if the number of parameters differs; the type of parameters fails to distinguish the constructors. This is a major bug. If class A has a constructor with a single string parameter and another constructor with a single integer parameter, then REALbasic will pretend that one of them doesn't exist, and trying to generate a new A instance with it will fail: dim a as A a = new A(7) // error: Type mismatch The workaround, and it's a painful one, is to differentiate constructors by the number of parameters even if this means that some parameters go unused. Method overloading does not work across subclass/superclass boundaries. For example, make MySubClass a subclass of MyClass, and distribute the two GreetMe implementations across them: give MyClass the GreetMe with the integer parameter, and give MySubClass the GreetMe with the string parameter. Now try this: dim c as mysubclass c = new mysubclass c.greetme("Hi") c.greetme(3) // error: Type mismatch The rules of inheritance seem to be broken. MySubClass inherits from MyClass, and MyClass has a GreetMe that takes an integer; but MySubClass seems not to know about this. The reason is that method names are resolved upward, and resolution stops as soon as it succeeds. So if a class contains any method with a given name, even if it takes the wrong parameters, REALbasic won't even look to see if its superclass also contains a method with that name, and possibly with the correct parameters. However, there's an easy solution: your subclass has only to override the other superclass method.[10] In our example, we'd give MySubClass a GreetMe method that does take an integer parameter. Now the subclass contains a method whose parameters match those of the call, and everything is fine. Well, not everything. There's still a problem: the functionality you wanted is in the superclass! Luckily, there's a way for code in the subclass to call code in the superclass; that's the next topic. Class-Directed Messages Code in a class method can direct a method call at its superclass. This is called a class-directed message. The syntax is simple: in a dot notation expression, before the dot, instead of using an instance name, you use a class name. This must be a superclass of the class in which the call appears; in other words, class-directed messages can be directed only up the hierarchy from the class containing the code that's sending the message. This restriction makes sense; a subclass should have the ability to decide how it wants to take advantage of the storehouse of methods it inherits from its superclasses, but it shouldn't be possible to subvert the hierarchy in other ways. For example, let's go back to our ScoreBox and DoubleScoreBox implementations from earlier in this chapter. It may have occurred to you that there was something inelegant about our implementation of Increase. Here, you recall, is ScoreBox's Increase handler: self.score = self.score + 1 self.showscore Here is DoubleScoreBox's Increase handler: self.score = self.score + 2 self.showscore The routines are near duplicates of each other, and this duplication obscures the essential relationship between DoubleScoreBox and ScoreBox: a DoubleScoreBox is everything that a ScoreBox is, and then some. In particular, a DoubleScoreBox is a ScoreBox that increases its score one more than a ScoreBox does. To capture this fact, we can have DoubleScoreBox increase its score by 1(it increases its score one more than a ScoreBox does), and then call ScoreBox's Increase handler (it increases its score one more than a ScoreBox does): self.score = self.score + 1 scorebox.increase In the same way, you can see how class-directed methods solve the problem posed in the previous section. MyClass has one GreetMe method; MySubClass overrides that one and adds another. MySubClass's implementation of the overriding GreetMe can be as simple as calling the overridden GreetMe. Class-directed messages don't have to have the same name as the method that was originally called, as in these examples. And the method that is called doesn't have to live in the class to which you send the message; it can live in a superclass of that class. Perhaps the best way to envision class-directed messages is to think of them as directed at Me, the instance, but with a proviso as to where resolution of the name should begin. In such a scenario, this line: myClass.greetMe(i) actually means, " me.greetMe(i), but in resolving the name greetMe, skip upward past all classes until you reach MyClass; start resolving upward from there." Privacy Object-oriented programming is, in a sense, a way for the programmer to participate in the design of the programming language itself. When we create a class and populate it with members, we are telling other classes how they should speak when they want to communicate with this class. However, other classes may have the power to speak too freely. We may intend that every instance of the ScoreBox class, for example, should be the "owner and protector" of its own score; when any other class wants to increase a score, it should send the instance an Increase message. But other classes can subvert our intentions, by manipulating the instance's Score property directly: theScoreBox.score = theScoreBox.score + 1 It isn't enough to say, since we are writing the code ourselves, that therefore in actual fact no other class will speak this way, because we have a "rule" that this is forbidden. Where is this "rule"? If it is just in our own head, it might fall out of our head (we could forget); also, what's in our head is not in the head of some other programmer who might some day come to maintain the project, or to whom we might send an exported class. Self-restraint is no restraint. REALbasic alleviates such problems by permitting you to declare a member as private. This is done by ticking the Private checkbox in the method or property's declaration dialog. The question is then who is allowed to access a private member.[11] Once a class member is private, it is accessible only from the same class or its subclasses; one instance of a class (or its subclasses) can access a private member of the same instance or another instance of the same class (or its subclasses). This solves the difficulty with the Score property. If Score is private, then Increase is the only way for outsiders to do anything to the score; Increase is a setter for Score. Similarly, if we want outsiders to be able to learn the score, they can't read Score directly, so we must provide a getter: Function score() as integer return self.score End Function Experienced OOP programmers will observe that REALbasic's "private" corresponds to C++'s "protected"; total privacy, where not even a subclass can access a private member, is not available in REALbasic, and neither is "friendship," where a class may declare a limited list of other classes that can access its private members. There is no visual indication in the Code Editor that a method or property is private; you have to double-click its browser listing and inspect its declaration dialog. Class Properties and Class Methods A class property (sometimes called a static property) is a property whose value is shared among all instances of the class. A class method (sometimes called a static method) is a method that can be called without directing a message at any particular instance of the class (though it does still live in that class). In Chapter 3 it was pointed out that REALbasic lacks class properties and class methods. This section describes an architecture that helps to compensate for this lack. The idea is to take advantage of the fact that REALbasic permits global instances. We will store a special instance of the class as a property of the Application subclass. This instance will act as a "static" representative of the class; it will function as a master repository of information, and as an instance that can respond to messages in the absence of all other instances. As an example, let's posit a Widget class whose "static" representative is a Widget instance named WidgetMaster, a property of the Application subclass. We instantiate this property in the Application subclass's Open event, so that the WidgetMaster instance is absolutely global: self.widgetMaster = new widget Let's say that we wish to be able to ask any Widget how many Widget instances there are. That's like a class property: every Widget instance should give the same answer. The job of WidgetMaster will be to know this answer and to make it available to any Widget instance. Let Widget have a property MasterCount that is an integer. Only WidgetMaster's value for this property will be correct, so to prevent accidents we make MasterCount private. Widget's constructor increments the MasterCount: Sub widget() app.widgetMaster.masterCount = app.widgetMaster.masterCount + 1 exception End Sub The exceptionline isn't explained until Chapter 8, but the point is that the very first time this constructor is called, the WidgetMaster instance won't exist yet--because it's the instance that's being created! We don't mind this, because we don't want WidgetMaster included in the count anyway; but we have to prevent the application from terminating prematurely, and that's what the exceptionline does. Widget's destructor obviously just decrements the count: Sub ~widget() app.widgetMaster.masterCount = app.widgetMaster.masterCount - 1 End Sub Widget's Count method consults WidgetCount's MasterCount. Thus, Count acts as a mediator to MasterCount; nonwidgets can't get access to MasterCount except indirectly through Count, which acts as a getter: Function count() As integer return app.widgetMaster.masterCount End Function This works beautifully, as we can confirm through the following test: dim w1, w2 as widget w1 = new widget w2 = new widget msgbox str(w1.count) // 2, as expected Furthermore, we can learn the count globally, without a reference to a particular widget instance, by asking WidgetMaster: msgbox str(app.widgetMaster.count) The example could be much extended to demonstrate such powerful techniques as actual instance management being performed by the master instance (a factory architecture), but that would go beyond the scope of this book. Example Classes In this section we sketch a few ideas for programmer-defined classes that illustrate useful basic features of classes. Stack Class This is the Stack class promised in Chapter 3. A stack is a dynamic storage mechanism, where items are added or removed one at a time, and only the most recently added item is accessible (like a stack of plates). You understand that it's completely unnecessary to implement a Stack class from scratch in REALbasic, because REALbasic provides dynamic arrays! So, we could easily implement a stack by using an array, adding items with Append and removing them with Remove. However, the example provides a flexible model that can easily be modified to form other storage mechanisms such as queues, and illustrates the power of object references as pointers to form linked data structures. Our implementation, you recall, is that each item of the stack will contain a value and a pointer back to the previously added item of the stack; the earliest added item will point to nil. In Chapter 3 these were called ItsValue and ItsPointer, so let's keep those names. We'll make it a stack of strings; so ItsValue is declared as string, and ItsPointer is declared as Stack. The chief actions on a stack are to add (push) or remove (pop) an item at the front; it's also useful to know whether the stack is empty. The basic implementation, shown in Example 4-1, is a simple matter of manipulating pointers. Example 4-1: Stack class // stack.push: Sub push(s as string) dim newItem as stack newItem = new stack newItem.itsPointer = self.itsPointer newItem.itsValue = s self.itsPointer = newItem End Sub // stack.pop: Function pop() As string dim value as string value = self.itsPointer.itsValue self.itsPointer = self.itsPointer.itsPointer return value End Function // stack.isEmpty: Function isEmpty() As boolean return (self.itsPointer = nil) End Function We may test the Stack class as follows: dim st as stack dim s as string st = new stack st.push "hey" st.push "ho" s = st.pop() // throw away, just testing st.push "hey" st.push "nonny" st.push "no" while not st.isEmpty // pop and show whole stack msgBox st.pop() // expect "no", "nonny", "hey", "hey" wend The onus is on the caller not to pop an empty stack; in real life, Pop would probably raise an exception if the stack is empty, as described in Chapter 8. The example is remarkably economical. You might think we'd need two classes, one for the items of the stack and one to represent the stack as whole; instead, the reference that points at the stack is itself just like the items of the stack, but it doesn't store anything in ItsValue. Array Class We would like to extend the power of arrays. We cannot subclass an array, because an array is not a class. But we can make a class that has an array as a member and provides an extended interface to it. Such a class is called a wrapper. This section presents a class CArray that is a wrapper for a one-dimensional array of strings. There are several advantages to this class over a normal array. We will be able to define new array operations that ordinary arrays lack: for instance, a Swap function will cause the values of two items to be interchanged, a Reverse function will allow us to sort either ascending or descending, a CopyTo function will copy all items from one array into another, and a Clone method will create a new instance and CopyTo that. Also, it will be easy to subclass CArray to provide for even more specialized types of arrays; imagine, for example, an array that is constantly kept sorted by means of a binary search. Most important, an array wrapper class is a real class, as opposed to an ordinary array in REALbasic, which is neither a class nor a datatype, neither object nor scalar; an array wrapper class is a full citizen and can do things that an ordinary array cannot. For example, you cannot declare an array of array, but you can declare an array of a class that wraps an array; and a class that wraps a multidimensional array can be passed as a parameter to a subroutine (unlike an ordinary multidimensional array). Example 4-2 gives a sample implementation. We provide wrapper functions for all the standard array operations, using the standard names (except that we cannot name a method Redim because that's a reserved word, so MyRedim is used), plus Set and Get (because we can't overload the assignment operator), plus our various extensions. The private array property is called TheArray; it is declared as being of size -1initially, but a constructor is provided that lets us create a new CArray instance and Redim it to the desired size in a single command. Example 4-2: cArray class // cArray.cArray: Sub cArray() // constructor with no parameters // do nothing, accept declaration to -1 End Sub // cArray.cArray: Sub cArray(size as integer) // constructor with one parameter self.myRedim(size) End Sub // cArray.myRedim: Sub myRedim(size as integer) redim self.theArray(size) End Sub // cArray.set: Sub set(index as integer, s as string) self.theArray(index) = s End Sub // cArray.get: Function get(index as integer) As string return self.theArray(index) End Function // cArray.insert: Sub insert(index as integer, s as string) self.theArray.insert index, s End Sub // cArray.append: Sub append(s as string) self.theArray.append s End Sub // cArray.remove: Sub remove(index as integer) self.theArray.remove index End Sub // cArray.ubound: Function ubound() As integer return ubound(self.theArray) End Function // cArray.sort: Sub sort() self.theArray.sort End Sub // cArray.swap: Sub swap(index1 as integer, index2 as integer) dim s as string s = self.get(index1) self.set(index1, self.get(index2)) self.set(index2, s) End Sub // cArray.reverse: Sub reverse() dim i, u, u2 as integer u = self.ubound() u2 = u\2 for i = 0 to u2 self.swap(i, u-i) End Sub // cArray.copyTo: Sub copyTo(c as cArray) dim i, u as integer u = min(self.ubound(), c.ubound()) for i = 0 to u c.set(i, self.get(i)) End Sub // cArray.clone: Function clone() As cArray dim c as cArray c = new cArray(self.ubound()) self.copyto(c) return c End Function // cArray.display: Sub display() // for debugging dim i, u as integer u = self.ubound() for i = 0 to u msgbox self.get(i) End Sub Here is a brief test routine: dim a, b, c as cArray a = new cArray(4) b = new cArray(2) a.set(0,"hey") a.set(1,"ho") a.set(2,"hey") a.set(3,"nonny") a.set(4,"no") msgbox "Here is a:" a.display // expect "hey", "ho", "hey", "nonny", "no" a.copyTo b msgbox "Here is b:" b.display // expect "hey", "ho", "hey" c = a.clone() c.sort c.reverse c.set(0, "yo") c.myRedim(2) msgbox "Here is c:" c.display // expect "yo", "no", "ho" Hash Class A hash is a storage structure that's like a pigeonhole desk for storing index cards: a glance at a card tells us instantly which pigeonhole it goes into, but inside each pigeonhole the cards are just a jumble. The means used to identify the correct pigeonhole is called the hash function. For example, if the cards contain text, the first letter of the text on a card to be stored could act as a hash function, resulting in 26 pigeonholes. This wouldn't be a very good hash function if all the texts started with the same few letters, or if there were many index cards, because some pigeonholes would end up with a jumble of many cards. But when the hash function and the nature and size of the data are such that we are likely to end up with an even distribution where just a very few items end up in each pigeonhole, hashing is a very fast and efficient mode of storage. Hashing is typically a two-stage process: first use the hash function to choose the correct pigeonhole; then manipulate the jumble which that pigeonhole represents. So apart from the hash function, we must also decide how to implement the jumble. We'll use a CArray, the string array wrapper class developed in the previous example. Let's say, then, that the problem is to store every individual word of a text, without storing any duplicates. We propose to implement a hash of strings as an array of CArray--except that to prevent duplicates we will need to be able to ask each CArray whether it already contains a word, so we will use a subclass of CArray called MyCArray which adds a Contains method handler: Function contains(s as string) As boolean dim i, u as integer u = self.ubound for i = 0 to u if self.get(i) = s then return true end return false End Function For the sake of generality and flexibility, we will factor out the hash function from the class that does the hashing; that is, we will tell our hashing class instance in code what hash function to use, rather than hardcoding this information into the class itself. A different hash function can thus be substituted without modifying the class that does the hashing. To accomplish this goal, the hash function will be expressed by a class that implements a class interface. Call this class interface HashFuncPtr, and give it a method called HashFunc which accepts a string and returns an integer. This means that any class that has a HashFunc method can act as a HashFuncPtr, and can be used by our hashing class to derive the index for any string. (This technique is REALbasic's equivalent of pointer-to-function.) For this example, we will use a very fast but extraordinarily primitive and unrealistic hash function: we will sum the numeric equivalents of the first and last letters of the string (these numeric equivalents are discussed in Chapter 5). Create a class PrimitiveHashFunc which implements HashFuncPtr; its HashFunc method goes like this: Function hashFunc(s as string) As integer return ascb (leftb(s, 1)) + ascb (rightb(s, 1)) End Function We come at last to the class that does the hashing, which we call Hasher. It has two properties, ItsArray (the array of MyCArray, initially declared to size -1) and ItsHashFunction (the HashFuncPtr containing the hash function). The code, displayed in Example 4-3, is minimal: we can store strings, but we cannot retrieve them except for debugging purposes. The instance is initialized through its constructor, which needs to be told how big the array should be and what hash function to use. Storage is then performed through the InsertIfAbsent method. The GetHashIndex method just abstracts the code for calling the hash function. Example 4-3: Hasher class // hasher.hasher: Sub hasher(size as integer, f as hashFuncPtr) dim i as integer redim itsArray(size) for i = 0 to size self.itsarray(i) = new myCArray(-1) itsHashFunction = f End Sub // hasher.getHashIndex: Function getHashIndex(s as string) As integer return self.itsHashFunction.hashFunc(s) End Function // hasher.insertIfAbsent: Sub insertIfAbsent(s as string) dim i as integer i = self.getHashIndex(s) if self.itsArray(i).contains(s) then // nothing to do return end self.itsArray(i).append s End Sub // hasher.displayDump: Function displayDump() As string // for debugging dim result as string dim i, j, u, uu as integer uu = ubound(itsArray) for i = 0 to uu u = self.itsArray(i).ubound for j = 0 to u result = result + "," + self.itsArray(i).get(j) return result End Function Here is some code to test Hasher. Since the maximum numeric equivalent of a letter is 255, the largest hash value possible is 510. dim h as hasher h = new hasher(510, new primitiveHashFunc) // simple test just to show that it really is hashing h.insertIfAbsent("this") h.insertIfAbsent("is") h.insertIfAbsent("a") h.insertIfAbsent("test") h.insertIfAbsent("of") h.insertIfAbsent("this") h.insertIfAbsent("and") h.insertIfAbsent("this") h.insertIfAbsent("is") h.insertIfAbsent("not") msgbox h.displayDump // expect to see each distinct word once Map Class A map is a collection of name-value pairs, a way of storing a value keyed by an arbitrary name (as opposed to an array, which stores a value keyed by an index number). Given a name, you can add it and a corresponding value to the collection, or, if the name is already defined in the collection, you can retrieve or modify its value. REALbasic provides a class which behaves this way (the Collection class, discussed in Chapter 5), but it's inefficient, and besides, it's good exercise to create one for ourselves. Let's pose a problem like that of the previous section: we wish to store every individual word of a text, without storing any duplicates; but this time we will attach to each word a count of how many times it occurs in the text. We are thus pairing a name (the word) with an integer (the count). So we create a class, Pair, to express this pairing, with two properties, Key and Value. Key, obviously, is a string. You might think Value would be an integer, but for reasons that will be clear later on, we choose to make it an IntPtr, which is a class consisting of a single integer property named V (for "value"). Pair will have a constructor: Sub pair(s as string) self.key = s self.value = new intPtr End Sub This initializes the Key to the string value passed in, and the Value to an IntPtr whose V will be autoinitialized to zero without any help from us. The pairs will be maintained as an array. Since arrays aren't classes, we will use a CArray, the array wrapper class developed earlier. Unfortunately, CArray is a string array wrapper; now we need it to be a Pair array wrapper. This means we'll have to rewrite CArray, running through every method and property declaration and all the code, making the appropriate changes. (We will also have to disable the Sort method, because we presently have no way to sort an array of objects.) The full implementation is left as an exercise for the reader, but as an example, here's CArray's revised Set method: Sub set(index as integer, s as pair) self.theArray(index) = s End Sub Are we really going to have to go through this maddening exercise with CArray every time we change the datatype of the array? No, thank heavens; in Chapter 5 we'll meet the variant datatype, and with its help we will rewrite CArray one last time, so that it wraps an array of anything without further rewriting. The sorting problem is easily solved with the help of Thomas Tempelmann's Sort class, mentioned earlier in this chapter. We next face the problem of how the CArray will be maintained for efficient access. Given a key, we need to know quickly whether the CArray contains a Pair with that key. Clearly the worst solution is to keep the pairs in any old order; they should be kept sorted. This sounds like a job for binary search, the algorithm for rapid access to a sorted array described under . For the sake of clarity and flexibility, we'll package up the binary search algorithm as a class, which we'll call BinarySearch. Give it one property, a CArray which we'll just call A. Things are a little trickier than they were back in Chapter 2 because a search now needs to hand back two pieces of information: First, was the desired key found? If so, at what index? If not, at what index should it be inserted so as to maintain the array in sorted order? So we'd like BinarySearch to have a Find method that will return two values, Index (an integer) and Found (a boolean). But a subroutine can't return two values, so we'll package up this returned value as a class called IndexAndFound, consisting of two properties, Index (an integer) and Found (a boolean), and with a constructor that sets them: Sub indexAndFound(index as integer, found as boolean) self.index = index self.found = found End Sub The BinarySearch class is shown as Example 4-4. The constructor points A at the correct CArray. FindBinaryRecursive is identically the algorithm from Chapter 2, except that we are now looking at strings, not integers, and we must change slightly the way we speak in order to access an element of the array because it is now a CArray of Pairs. The public interface to BinarySearch is the Find method: it wraps the call to FindBinaryRecursive in a test for boundary conditions and in an adjustment of the Index in case we don't find the desired key (in that case, remember, the Index must say where to insert a new Pair), and hands back the result as an IndexAndFound. Example 4-4: BinarySearch class // binarySearch.binarySearch: Sub binarySearch(a as cArray) self.a = a End Sub // binarySearch.find: Function find(whatStr as string) As indexAndFound dim index as integer dim found as boolean dim foundStr as string if a.ubound < 0 or whatStr < a.get(0).key then return new indexAndFound(0, false) end index = findBinaryRecursive(0, a.ubound, whatStr) foundStr = a.get(index).key found = (foundStr = whatStr) if not found then if whatStr > foundStr then index = index + 1 end end return new indexAndFound(index, found) End Function // binarySearch.findBinaryRecursive: Function findBinaryRecursive(low as integer, hi as integer, whatStr as string) As integer dim i as integer dim s as string i = (low + hi) / 2 s = a.get(i).key if s = whatStr then return i end if hi <= low then return hi end if s > whatStr then return findBinaryRecursive(low, i-1, whatStr) else return findBinaryRecursive(i+1, hi, whatStr) end End Function We come at last to the Map class itself. It has two properties. One is Pairs, the CArray in which the name-value pairs will be kept. The other is B, the BinarySearch instance that will be used to access the binary search algorithm. Both properties exemplify typical uses of the "Has-A" relationship between classes. Map has a CArray because that's its data. Map has a BinarySearch because the binary search algorithm is Map's servant and no one else's, and because this particular BinarySearch instance is to be pointed only at this particular CArray. The Map class is shown as Example 4-5. A constructor initializes the Pairs CArray, and creates the BinarySearch instance, pointing the latter's CArray pointer at the former; the two properties are thus correctly hooked up at the outset. The Get and Size methods are very simple and are thrown in mostly for debugging purposes. The Refer method is the workhorse of the class; given a key, it either returns its corresponding Value, or else (if no Pair with that key exists) it creates a new Pair with that key at the appropriate index and returns its corresponding Value. You should now see why a Pair's Value is an IntPtr, not a mere integer. It isn't Map's job to understand how it is being used; it's the job of whoever creates and manipulates the Map instance. Therefore, we don't wish merely to tell the caller of Refer what the value is; we wish to give the caller complete access to that value, so that the caller can modify it as desired. Ideally we'd like to return an integer ByRef, but a function can't return a ByRefresult; so we return an instance of a class that wraps the integer and behaves as a pointer to it. This is a REALbasic example of what C++ would call "returning a reference." Example 4-5: Map class // map.map: Sub map() pairs = new cArray b = new binarySearch(pairs) End Sub // map.refer: Function refer(s as string) As intptr dim result as indexAndFound result = b.find(s) if not result.found then if result.index > pairs.ubound then pairs.append new pair(s) else pairs.insert result.index, new pair(s) end end return pairs.get(result.index).value End Function // map.get: Function get(i as integer) As intptr return pairs.get(i).value End Function // map.size: Function size() As integer return pairs.ubound End Function To test Map for the purposes of our original problem, we make a utility subroutine Increment which accepts an IntPtr and increments the integer it points to: Sub increment(ip as intptr) ip.v = ip.v + 1 End Sub Here's a test routine: dim m as map dim i,u as integer m = new map increment(m.refer("this")) increment(m.refer("is")) increment(m.refer("a")) increment(m.refer("test")) increment(m.refer("of")) increment(m.refer("this")) increment(m.refer("and")) increment(m.refer("this")) increment(m.refer("is")) increment(m.refer("not")) u = m.size for i = 0 to u msgbox m.get(i).key + ":" + str(m.get(i).value.v) Sure enough, we are counting the word occurrences; we get a:1, and:1, is:2, and so forth, as expected. Aside from exemplifying many powerful REALbasic class techniques, the Map class is of great practical utility; I really did write it originally to handle the task of counting the occurrences of distinct words in a document, and on my machine it can accumulate the data for a 4500-word document, containing about 1400 distinct words, in slightly over a second. 1. For a list of what built-in classes can be subclassed, see "The Class Hierarchy" later in this chapter. 2. Insert your own Bill Clinton joke here. 3. I have coined this term based on a suggestion by Quinn. 4. But if you try this with a DoubleScoreBox in the window, the DoubleScoreBox will misbehave: it will execute ScoreBox's Increase handler, not DoubleScoreBox's Increase handler. That's because ScoreBox is declared as an Increaser and DoubleScoreBox isn't. It's as if class interfaces had the power to disable the virtual method mechanism between superclass and subclass. The workaround is to declare DoubleScoreBox, too, as implementing Increaser; but I regard the fact that this is necessary as a bug. 5. This example should help immigrants from the C++ world, seeking the REALbasic equivalent of such devices as multiple inheritance, class templates and pointer-to-function. The example is lifted entirely from Thomas Tempelmann's Sort class,, which is well worth downloading and studying; it certainly opened my eyes as to what class interfaces could do. 6. I owe this observation to Steve Fyfe. 7. The built-in objects are variously treated in the chapters devoted to their functionality. For example, for the Keyboard object, see Chapter 19. 8. For local constants, see Chapter 2. 9. Dates are discussed in Chapter 5. 10. It happens that in this example there's another workaround, namely, to cast the instance as a MyClass. This hides from REALbasic the existence of the method that takes the wrong parameters. So, if cis the MySubClass instance, you can say myClass(c).greetMe(3). However, this use of casting up, effectively delivering a class-directed message from outside the instance, is probably to be deprecated (though it was quite standard in Version 1 of REALbasic). 11. Insert your own Monica Lewinski joke here. Back to: Sample Chapter Index Back to: REALbasic: The Definitive Guide, 2nd Edition © 2001, O'Reilly & Associates, Inc. webmaster@oreilly.com
http://oreilly.com/catalog/realbasic2/chapter/ch04.html
crawl-002
refinedweb
18,759
60.24
Are you sure? This action might not be possible to undo. Are you sure you want to continue? There is neither an industry wide definition nor a universal meaning for “Hedge Fund”. For general purposes, hedge funds, including fund of funds, can be considered as unregistered private investment partnerships, funds or pools that may invest and trade in many different markets, strategies and instruments (including securities, nonsecurities and derivatives) and are not subject to the same regulatory requirements as mutual funds. The first hedge fund was set up by Alfred W. Jones in 1949, though the term did not gain popularity until 1960s. Jones wanted to eliminate a part of the market risk involved in holding long stock positions by short-selling other stocks. He thereby shifted most of his exposure from market timing to stock picking. Jones was the first to use short sales, leverage and incentive fees. During the 10-year period from 1955-1965 Jones' fund returned 670 percent. Apparently, the fund had outperformed all the mutual funds of its time, even after accounting for a hefty 20% incentive fee. The first rush into hedge funds followed and the number of hedge funds increased from a handful to over a hundred within a few years. Thus has started the hedge fund industry, which though initially faced a lot of problems with inexperienced newcomers gradually picked up its admiration and is now considered to be the fastest growing segment of the financial markets. The industry with an average fund size of US $ 125 mm and growing at an astounding growth rate not just while reaching out to new customer segments – institutions, pension 1 funds and endowments – has now attracted even the retail investor through its fund of funds. But there are also claims that with the increase in the craze for the hedge funds, a lot of new managers have come in with insufficient experience and are misusing the term ‘hedge funds’. How true is this? Is the outstanding return generating ability of the hedge funds really gone down? Is it the trend that everyone needs to catch up to or the beginning of the end of a legend? There arises the necessity to analyse the industry performance and to understand how the industry had performed and is performing in the current times. The project employs sophisticated techniques, the results of a lot of academic works, to analyse the performance of the hedge funds industry in an efficient manner, escaping any underlying biases, under the constraints of limited resources of time, information and of course the ability of comprehension. The next section deals with understanding the growth of the hedge fund industry and the probable reasons for the same. Section 3 explains the process and path behind the performance analysis. Section 4 details the performance analysis using different observations. Section 5 concludes the report talking of the worries about the future. Four appendixes attached help understand some background information about the hedge funds, the indexes involved in the analysis and the mathematics behind skewness and kurtosis. 2 2. GROWTH OF HEDGE FUNDS The term “hedge funds” first came into use in the 1950s to describe any investment fund that used incentive fees, short selling, and leverage. Since then, the industry of hedge funds kept growing. Over time, hedge funds began to diversify their investment portfolios to include other financial instruments and engage in a wider variety of investment strategies. Today, in addition to trading equities, hedge funds may trade fixed income securities, convertible securities, currencies, exchange – traded futures, over the counter derivatives, futures contracts, commodity options and other non-securities investments. However, hedge funds today may or may not utilize the hedging and arbitrage strategies that hedge funds historically employed, and many engage in relatively traditional, long only equity strategies. This must make it clear as to why most of the discussions about hedge funds, including this, start with the statement ‘There is no exact definition to the term “Hedge Fund”’*. 2.1. Number of Hedge Funds and Total Assets As the hedge remains it to is fund nonvery $900.00 $800.00 $700.00 $600.00 $500.00 $592.00 $546.00 $408.00 $400.00 $300.00 $200.00 $100.00 $0.00 $35.00 $0.50 $1.00 $3.00 $2.00 $20.00 $76.00 $324.00 $221.00 $130.00 $795.00 industry regulated, difficult Hedge Fund Assets (US $Billions) quantify exactly the size of the industry. But there are a few industry information sources market that help the the estimate statistics of the industry. One such source helps understand the evolution. Jan-50 Jan-60 Jan-71 Jan-74 Jan-87 Jan-92 Jan-95 Jan-97 Jan-99 Jan-00 Jan-01 Jan-02 Jan-03 Jan-04 Figure 2.1: Growth in the Hedge Fund Assets from Jan 50 to Jan 04 * For more details about Hedge Funds refer Appendix A. 3 Very clearly, from the above graph, hedge funds have attracted significant capital over the last decade, apparently triggered by successful track records. The global hedge funds volume has increased at an astonishing growth rate, from US $20 billion in 1987 to US $795 billion in 2004. Estimates of new assets flowing into hedge funds exceed US $25 billion on average for the last few years. By 2008, hedge fund assets have been predicted to reach the US $ 2 trillion mark. This makes hedge funds as the sector showing the most growth in the entire financial services arena. Number of Hedge Funds The global hedge fund volume accounts for about 1% of the combined global equity and bond market. The number of hedge funds has also increased rapidly from 100 to about 7000 between 1987 and 2004. 8000 7000 7000 6000 5000 4000 3000 2080 2000 1000 0 1 30 140 30 100 880 3000 4000 3500 4800 5500 5700 Jan-50 Jan-60 Jan-71 Jan-74 Jan-87 Jan-92 Jan-95 Jan-97 Jan-99 Jan-00 Jan-01 Jan-02 Jan-03 Jan-04 Figure 2.2: Growth in the Number of Hedge Funds from Jan 50 to Jan 04 4 2.2. Prospective Markets for More Growth In Europe the overall hedge fund volume is still small with about US $ 80 billion in 2003, which accounts for about 11% of the global hedge fund volume. The number of hedge funds in Europe is about 600. Within Europe, hedge funds have become particularly popular in France and Switzerland where already 35% and 30% of all institutional investors have allocated funds into hedge funds. In 2003, Italy’s hedge fund industry nearly tripled in size as assets grew from Euro 2.2 billion to Euro 6.2 billion. Germany is at the lower end with only 7% of the institutional investors using hedge funds. But the Investment Modernization Act, may well trigger rising interest from German investors. Overall, hedge fund assets are estimated to increase ten fold in Europe over the next 10 years. The acceptance of hedge funds seems to be growing through out Europe, as investors have sought alternatives that are perceived as less risky during the last three years equity bear market. This trend is also evident in Asia, where hedge funds are starting to take off. According to AsiaHedge magazine, some 150 hedge funds operate in Asia, till year 2002 which together managed assets estimated at around US $ 15 billion. In Japan, too hedge funds are becoming the focus of more attention. Recently, Japan’s Government Pension Fund one of the world’s largest pension fund with US $ 300 billion has announced plans to start allocating money to hedge funds. Industry participants believe that Asia could be the next region of growth for the hedge fund industry. The potential of Asian hedge funds is well supported by fundamentals. From an investment perspective, the volatility in the Asian markets in recent years has allowed long-short and other strategic players to outperform regional indices. The relative inefficiency of the regional markets also presents arbitrage opportunities from a demand stand point US and European investors are expected to turn to alternatives in Asia as capacity in their home markets diminish. Further, the improving economic climate in South East Asia should help foreign fund managers and investors to refocus their attention on the region. Overall, hedge funds look set to play a larger role in Asia. 5 2.3. Probable Reasons for Growth There are a number of factors behind the rising demand for hedge funds. While high net worth individuals remain the main source of capital, hedge funds are becoming more popular among. The unprecedented bull run in the US equity markets during the 1990s swelled investment portfolios this lead both fund managers and investors to become more keenly aware of the need for diversification. Hedge funds are seen as a natural “hedge” for controlling downside risk because they employ exotic investments strategies believed to generate returns that are uncorrelated to asset classes. Until recently, the bursting of the technology and telecommunications bubbles, the wave of scandals that hit corporate America and the uncertainties in the US economy have lead to a general decline in the stock markets worldwide. This in turn provided fresh impetus for hedge funds as investors searched for absolute returns. The growing demand for hedge fund products has brought changes on the supply side of the market. The prospect of untold riches has spurred on many former fund managers and proprietary trades to strike out on their own and set up new hedge funds. With hedge funds entering the main stream and becoming ‘respectable’, an increasing number of banks, insurance companies, pension funds, are investing in them. There is also a clear desire among this investor base to be more focused on absolute-return strategies rather than relative return. Given the current level of allocations most of these large long-term investors have moved towards alternative investments, and their professed long-term target allocation, the flow of funds to these asset classes will remain strong. 6 3. BEHIND THE PERFORMANCE ANALYSIS 3.1.Data One of the most common ways to comprehend and appreciate the performance of something is to consider its historical data. As our subject of concentration here is the hedge fund industry as a whole, it would be brilliant if the historical data involves all the hedge funds in the industry. But unfortunately that is not possible because of lack of easy availability of all the data. Since hedge funds do not register with SEC their actual data cannot be independently followed and thus the only way one can expect the data is through self-reporting by the hedge fund which would further lead to non-uniformity in the database. These hedge funds though, for varied reasons, report their performance to many information sources, which disseminate the same to the market, helping a big lot of investors and researchers. Most of these sources also calculate indexes to indicate the movement of the industry as a whole. Thus, for our purpose, these hedge fund indexes are chosen as the most logical way to understand the industry performance. Seven such hedge fund indexes have been obtained from their respective websites. They include Altvest, CSFB/Tremont, EACM-100 Onshore, HFR, Hennessee, MSCI Hedge and VAN*. But the very fact that there are so many hedge fund indexes leaves us with the problem of ‘which to use’. Added to that, many research reports indicate that these hedge fund indexes have biases, making them, individually, unreliable. * For more details about Hedge Fund Indexes refer Appendix B. 7 3.2.Biases in Hedge Fund Indexes The hedge fund indexes have been set up to provide the rigorous data and analytics that both managers and investors increasingly demand for measuring performance and risk in this rapidly growing asset class. However, there are inherent problems in compiling a benchmark for the hedge fund industry, specifically including the presence of various biases in the databases. There are three main sources of difference between the performance of hedge funds in the database and the performance of hedge funds in the population (see Fung and Hsieh (2001a)). Survivorship bias: This occurs when unsuccessful managers leave the industry, and their successful counterparts remain, leading to successful managers only being counted in the database. The inherent problem is that a database overestimates the true returns in a strategy, because it only contains the returns of those that were successful, or at least of those that are currently in existence. Selection bias: This occurs if the hedge funds in the database are not representative of those in the universe. Information on hedge funds is not easily available. This is because hedge funds are often offered as a means of private placement, and no obligation of disclosure is imposed in the US. As a result, database vendors collect information on those hedge fund managers who cooperate only. Besides, when a hedge fund enters a vendor database, the fund history is generally backfilled. This gives rise to an instant history bias (Park (1995)). Since we expect hedge funds with good records to report their performance to data vendors, this may result in upwardly biased estimates of returns for newly introduced funds Researches indicate that these biases are not insignificant and cannot be neglected. Fung and Hsieh (2000), using the TASS database, find that the surviving portfolio had an average return of 13.2 % from 1994 to 1998, while the observable portfolio had an average return of 10.2 % during this time, from which, there is a 3% survivorship bias per year for hedge funds (a similar number is obtained in Park et al. (1999)). The attrition rate, defined as the percentage of dead funds in the total number of funds has been reported by Agarwal and Naik (2000b) as 3.62%, 2.10% and 2.22% using quarterly, half 8 yearly and yearly returns, which is consistent with an average annual attrition rate of 2.17% in the HFR database reported by Liang (1999) for 1993-97. These attrition rates are much lower than the annual attrition rate of about 14% for offshore hedge funds in 1987-96 reported by Brown, Goetzmann and Ibbotson (1999) and 8.3% in the TASS database in 1994-98 as reported by Liang (1999). Overall, it is probably a safe assumption to consider that these biases account for a total approaching at least 4.5% annually (see Park, Brown and Goetzmann (1999) and Fung and Hsieh (2000)) Figure 3.1: Survivorship, Selection Biases in Hedge Fund Returns 9 3.3.Pure Style Index As the above researches indicate, in the presence of many different competing indexes, one may be at a loss to decide which one to use for benchmarking the performance of active or passive managers. There are essentially two possible approaches to the problem. One approach involves carefully studying the methods and data used by each index provider, and coming up with a qualitative assessment of which is doing the best job. The problem is that there is no clear and definitive judgement that one can make on the subject. All existing indexes have both advantages and drawbacks. Second approach. Given that it is impossible to come up with an objective judgement on what is the best existing index, a natural idea consists of using some combination of competing indexes, a pure style index, to reach a better understanding of what the common information would be. In other words, searching for some notion of “intersection” of competing indexes. One straightforward method for obtaining a composite index based on various competing indexes would involve computing an equally weighted portfolio of all competing indexes. This would obviously provide investors with a convenient one-dimensional summary of the contrasted information contained in competing indexes. In particular, because competing hedge fund indexes are based on different sets of hedge funds, the resulting portfolio of indexes would be more exhaustive than any of the competing indexes it is extracted from. For our purposes, this method has been considered efficient. The pure style index thus calculated then requires some base index to understand and compare the performance with. As hedge funds are considered alternative investments (non-traditional investment with potential economic value), it has been considered logical to compare the index with a more traditional investment – equity. In the process, it can also be understood if the high fees charged by the hedge funds are worth it. The equity market provides the investor base once again with a range of indexes, creating the problem of ‘which to use’. But unlike earlier, we do not calculate an index that 10 represents all the market indexes. Instead the pure style average index is compared to the different market indexes individually. The equity market indexes thus chosen and what they specifically represent are shown in the table below. Equity Market Index Dow Jones Industrial Average (DJIA) MSCI Europe, Australasia and Far East Standard & Poor’s (S&P) 500 Wilshire 5000 Russell Midcap Russell 2000 Russell Microcap Represents 30 large frequently traded stocks 21 developed markets outside North America Top 500 US corporations by market capitalisation Entire US stock market – all public companies Mid-cap segment of US equity market Small-cap segment of US equity market Microcap segment of US equity market Table 3.1: Market Indexes considered in the Analysis and their respective segments As can be seen from the above table, the market indexes* have been carefully chosen to help compare the hedge fund universe with different segments from large-cap to microcap and ultimately the major equity markets. * For more details about Market Indexes refer Appendix C. 11 3.4.Methodology For the hedge fund indexes, the problem of ‘which to use’ was given a solution of calculating a pure style index to rectify the biases discussed earlier. The pure style index has been created in its simplest form – average. Six different hedge fund indexes – Altvest, CSFB/Tremont, EACM-100 Onshore, HFR, Hennessee and VAN – involving monthly returns from January 1996 to December 2005 have been used to calculate the pure style average index. Sharpe Ratio, the most widely used traditional risk-adjusted performance measure, has been considered as to compare the performance of the pure style index against the broad equity market indexes. The Sharpe ratio can be defined as a risk-adjusted measure developed by William F. Sharpe, calculated using standard deviation and excess return to determine reward per unit of risk. The higher the Sharpe ratio, the better the fund's historical risk-adjusted performance. The above formula gives monthly Sharpe ratio, which can be annualized by multiplying the result with square root of 12. The risk-free return for our purposes has been considered as the return of 90 day T-bill return. This Sharpe ratio has many desirable properties. However it is not flawless. It is leverage invariant; it does not account for correlations; nor can it handle iceberg risks lurking in the higher moments. Worse yet, it can be ‘gamed’ by truncating the right tail of the returns distribution at the expense of a fat left tail (the periodic crashes). It has also been researched and proved that high Sharpe ratios in hedge funds often represent a trade-off for higher moment risk. 12 One line of thought is to salvage the Sharpe ratio’s relevance while retaining the familiar form by replacing standard deviation in the denominator with an enhanced risk measure such as VaR. This is parametric VaR at 99% confidence level, which assumes normality of the distribution. To remove this assumption that constrains to normality, a modification of the above, Cornish Fisher VaR, can be used. This modified VaR includes the impact of the skewness and kurtosis. Where Where (1-α) is the confidence level, z(α) the critical value under normality, S is skewness, and K is excess kurtosis*. The above VaR has been used in the denominator to calculate the enhanced Sharpe Ratio. This enhanced Sharpe ratio has been used as the final risk adjusted performance measure to compare the pure style index against the equity market indexes. * For more details about Skewness and Kurtosis refer Appendix D. 13 4. PERFORMANCE ANALYSIS 4.1. Raw Returns The figure below compares the growth of US $1 invested in January 1996 in pure style average (PSA) index and that invested in developed markets index (MSCI EAFE), 500 large corporations index (S&P 500) and the small-cap index (Russell 2000) of the US equity market. Growth of $1 invested in Jan 1996 3.5 3 2.5 2 1.5 1 0.5 0 1995 1996 1996 1997 1997 1998 1998 1999 1999 2000 2000 2001 2001 2002 2002 2003 2003 2004 2004 2005 2005 Pure Style Average MSCI EAFE S&P 500 Russell 2000 Figure 4.1: Comparison of Cumulative Returns of PSA from 1996 to 2005 with those of MSCI EAFE, S&P 500 and Russell 2000 One can observe that the PSA and the Russell 2000 outperformed the other two (MSCI EAFE and S&P 500) from the beginning. The PSA and Russell 2000 grow on quite similar lines till somewhere in the beginning of 1998, when Russell 2000 loses its momentum leaving the PSA to lead. Major event that had happened during this time and that can be expected to have caused the impact is the collapse of a huge hedge fund – LTCM. Since then the PSA had outperformed the other three. The MSCI EAFE, S&P 500 and the Russell 2000 grow the initial investment of $1 in 1996 to $2.5 in 2005; the PSA grows it to $3. 14 The figure below compares US $1 invested in January 1996 in the PSA to the traditional, favorite index to many, involving 30 large and frequently traded stocks (DJIA) and a very broad market index, which is often considered to represent the whole of US equity market index. Growth of $1 Invested in Jan 1996 3.5 3 2.5 2 1.5 1 0.5 0 Pure Style Average DJIA Wilshire 5000 04 04 05 96 97 97 98 98 99 00 00 01 01 02 02 03 20 20 20 19 19 19 19 19 20 20 20 20 20 20 20 20 05 19 Figure 4.2: Comparison of Cumulative Returns of PSA from 1996 to 2005 with those of DJIA and Wilshire 5000 PSA had initially faced problems in outperforming the market indexes – DJIA and Wilshire 5000. The DJIA and the Wilshire 5000 both outperformed the PSA and appeared to move at a better angle to the PSA. But something happened; sometime during third quarter of 2000 that has changed the scenario. The market indexes fell. A major market event that had happened during this period is the crash of the dot-com bubble, which could have triggered the above consequences. The $1 invested in DJIA and Wilshire 5000 reached a high of $2.35 and $2.24 respectively during 1999-2000 and finally by December 2005 made it to just above $2 whereas the PSA reached the $3 mark. 19 95 15 In the next graph the growth of US $1 invested in PSA is compared to that invested in the mid-cap segment index of the US equity markets. Growth of $1 invested in Jan 1996 3.5 3 2.5 2 1.5 1 0.5 0 95 96 97 97 98 98 99 00 00 01 01 02 03 02 04 19 19 19 19 19 19 19 20 20 20 20 20 20 20 20 Pure Style Average Russell Midcap 04 05 20 20 20 05 Figure 4.3: Comparison of Cumulative Returns of PSA from 1996 to 2005 with those of Russell Midcap The story of the growth of the investment remains the same as that discussed above till the first quarter of 2003. But after that, unlike the DJIA and the Wilshire 5000, the Russell Midcap picks up momentum, reaches the PSA and crosses it over to finish with some lead. By the end of 2005, the US $1 invested in the PSA reached the $3 mark whereas that invested in the Russell Midcap reached $3.24. For the reasons of availability of data, the PSA is compared to Russell Microcap by investing US $1 in July 2000. The graph below shows the path of the PSA and that of the micro-cap segment index of the US equity market. 16 Growth of $1 Invested in July 2000 1.8 1.6 1.4 1.2 1 0.8 0.6 0.4 0.2 0 03 03 04 04 01 04 02 03 05 05 00 00 01 01 02 02 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 05 Pure Style Average Russell Microcap Figure 4.4: Comparison of Cumulative Returns of PSA from 2000 to 2005 with those of Russell Microcap The growth of $1 investment in the PSA was very dull till the beginning of the third quarter of 2003 while the Microcap index most of the time projected negative returns during the same period. Growing from there, by the end of 2003, the Microcap index crosses the PSA and continues to outperform till the end of the period of consideration, December 2005. By the end, the $1 invested in the PSA returns $1.45 while that invested in Russell Microcap index returns $1.66. 17 4.2. Correlations The table below shows the correlations between the PSA and the market indexes. Index DJIA MSCI EAFE S&P 500 Wilshire 5000 Russell Midcap Russell 2000 Russell Microcap Correlation with PSA 0.57 0.66 -0.03 0.74 0.79 0.82 0.88 Table 4.1: Market Indexes and respective correlation coefficients with PSA The performance graphs we discussed earlier showed how the markets swung. But the PSA showed very little fluctuations with respect to the market indexes. The same is reflected in the low correlations in the above table with the large cap indexes – DJIA, MSCI EAFE and S&P 500. In fact the PSA is negatively correlated with the S&P 500. This clearly confirms with the researches that indicate the ability of hedge funds to provide diversification benefits to traditional equity investment. In fact this low correlation between hedge funds' performance and the market's ups and downs is the main reason why such funds are valued as alternate investment vehicles. They essentially exploit market inefficiencies, using long or short positions to offset market risks Interesting part of these statistics is the high correlations with the Russell’s small-cap, mid-cap and micro-cap indexes. Researches indicate that the vast majority of equity hedge funds remain focused on the large capitalization end of the market, while the small and micro cap segments, particularly the growth sectors of those segments, have been largely ignored. As of August 2003, less than 2% of the active hedge funds tracked by HedgeFund.net were categorized as small/micro cap funds. Considering this to be the case, the high correlations with the small, mid and micro-cap indexes must indicate that most of the hedge funds though do not fully focus on these segments, their positions definitely have some exposure to these segments and effectively hedged too. 18 4.3. Volatility Observe the performance graphs above. In all of them, the market indexes wobbled, in fact staggered at a few instances in their path to December 2005. The PSA though, appeared calm and grew its investment quite swiftly. This indicates that the market indexes when compared to the PSA are highly volatile. The following table speaks the same in a different language. Index DJIA MSCI EAFE S&P 500 Wilshire 5000 Russell Midcap Russell 2000 Russell Microcap Pure Style Average (PSA) Annual Standard Deviation 15.82 % 14.85 % 14.92 % 15.87 % 16.27 % 20.17 % 20.83 % 6.53 % Table 4.2: Indexes and respective annualized standard deviations The Annual Standard Deviations above talk in real harsh words. Even the Russell Midcap and the Russell Microcap indexes, the only two that outperformed the PSA, have been destroyed when it came to comparing the standard deviation. But the standard deviations in themselves cannot be used as a basis of performance comparison. Standard Deviation relies on the assumption that the return distribution is symmetric around its mean and implies that the sensitivity of the investor is the same on the upside as on the downside. The problem is that hedge fund returns do not follow the symmetrical return paths implied by traditional volatility. Instead, hedge fund returns tend to be skewed. Figure 4.5: Graph showing an example of a fat tail (negative skewness) Specifically, they tend to be negatively skewed, which means they bear the dreaded "fat tails", which are mostly characterized by positive returns but a few cases of extreme losses. 19 4.4 Sharpe Ratios For the reasons discussed above, to salvage the problem of fat tails, academic studies propose that measures of downside risk can be more useful than volatility or Sharpe ratio as a scale of performance. In order to take the asymmetry in the return distribution into account, the use of downside deviation as risk measure has been frequently advocated (see e.g. Sortino and Price (1994), Bacmann and Pache (2003)). In such a context, the Value-at-Risk, designed to capture the maximum loss over a target time horizon with a given degree of confidence is far better suited. VaR has been getting a very wide acceptance throughout the financial community as it translates a complex risk notion into a simple and synthetic monetary amount. Considering the above reasons and as already discussed earlier in ‘methodology’, modified VaR has been used to calculate an enhanced Sharpe ratio for measuring the performance of the PSA. The table below indicates the enhanced Sharpe ratios of the PSA and the market indexes. Index DJIA MSCI EAFE S&P 500 Wilshire 5000 Russell Midcap Russell 2000 Russell Microcap Pure Style Average (PSA) Enhanced Sharpe Ratio 0.15 0.03 0.22 0.17 0.34 0.18 0.20 0.70 Table 4.3: Indexes and respective Sharpe Ratios, enhanced using VaR Higher the Sharpe ratio, the better it is. It is common to read that in absolute terms a Sharpe ratio greater than 1 is good. But one cannot judge the performance of an index, which in itself constitutes many funds in absolute terms. Following the same in using the table above, the PSA shows an enhanced Sharpe ratio of 0.70 whereas the market indexes average an enhanced Sharpe ratio of 0.19. This clearly indicates that the PSA, in terms of the enhanced Sharpe ratio, has outperformed the other market indexes by a large margin. 20 More interesting to observe is how the Russell Midcap stands out of the crowd. But the PSA is still far from it. In the calculation of the Sharpe ratio above, the data made use of starts from January 1996. This means, any decision based on just the ratios above would take into account any event that had happened ten years back too. In other words the Sharpe ratios calculated above do not present us with enough information to understand the current performance of the hedge fund industry. For this reason, the time period of ten years has been divided into four groups – 19961997, 1998-1999, 2000-2002 and 2003-2005. The groups have been formed as to help understand the performance of the PSA with major events in the US stock market during that time. The period of 1996-1997 would represent a period with high interest rates, 1998-1999 would represent how the great disaster of the hedge fund industry – collapse of the Long Term Capital Management – affected the enhanced Sharpe ratio of the PSA, 2000-2002 is the period following burst of the dot-com bubble that changed the opinions of so many bulls, and finally 2003-2005 represents the most recent period. The table below indicates the enhanced Sharpe ratios calculated separately during the four periods of consideration for the PSA and the equity market indexes. Time Period 1996-1997 1998-1999 2000-2002 2003-2005 DJIA 1.12 0.63 -0.22 0.24 MSCI EAFE -0.09 0.74 -0.54 0.75 S&P 500 1.35 1.56 -0.32 0.50 Wilshire 5000 1.11 1.33 -0.35 0.52 Russell Midcap 0.95 0.43 -0.11 0.95 Russell 2000 0.52 0.21 -0.11 0.64 PSA 2.32 0.58 0.14 1.50 Table 4.4: Enhanced Sharpe Ratios of the Indexes during the four sub-periods In the high interest rates period (1996-1997), most of the market indexes performed well. MSCI EAFE and the small cap segment index, Russell 2000 are the ones that had 21 received the beating compared to the rest of the market indexes. But still, the PSA beats the market by a huge margin, projecting an astonishing Sharpe ratio. During the period when one of the biggest hedge funds in America, Long Term Capital Management, collapsed, (1998-1999), the pride of PSA’s outstanding performances got shattered. Except the small and mid-cap segment indexes, the rest four outperformed the PSA when it projects a mild Sharpe ratio of 0.58. The next period: 2000-2002. The bull market run by the dot-coms crashed. All the market indexes projected negative Sharpe ratios. In the midst of the market’s tears though, the PSA stood tall. Though the Sharpe ratio was not absolutely astonishing, the very fact that it projected a positive ratio when the whole market collapsed gained the hedge funds their fame back. The fame that the hedge funds can perform better irrespective of the market direction. The PSA maintains its respect high in the recent period (2003-2005). It continues to beat the market indexes by a considerable margin. Interesting to observe: sudden growth in the Sharpe ratio of the small-cap index. The following graph might help better understand the movement of the enhanced Sharpe ratio of the indexes with time. 22 4.5. Correlations with Time The following table indicates how the correlation between the PSA and the market indexes changed during the four periods considered. Time Period 1996-1997 1998-1999 2000-2002 2003-2005 DJIA 0.66 0.73 0.37 0.68 MSCI EAFE 0.55 0.68 0.68 0.83 S&P 500 0.75 0.75 0.55 0.76 Wilshire 5000 0.82 0.81 0.64 0.79 Russell Midcap 0.83 0.81 0.77 0.83 Russell 2000 0.74 0.86 0.86 0.80 Table 4.5: Correlations of the Market Indexes with the PSA during the four sub-periods One can observe that the correlation between the PSA and the small and mid-cap segment indexes have been consistently high. It was highly correlated with the Russell indexes when the market on the whole had destroyed the PSA (1998-1999) and it was highly correlated even when the PSA gains back its fame (2000-2002). The correlation of the PSA with the MSCI EAFE, which represents developed markets other than, US had been slowly increasing during 1996-2002 and had suddenly jumped to high levels in the most recent period (2003-2005). This might indicate that the hedge funds realized that the US markets have saturated and exploiting inefficiencies in valuation has become difficult and thus moved to the other developed markets to gulp the new potential. 23 5. THE DECLINE Although the PSA shows that the hedge fund industry as such is being successful in outperforming the market indexes, one must also accept that the average returns generating ability of a hedge fund has gone down with the growth in the number of hedge funds.; and as the research indicates, these returns have consistently declined dropping to as low as 5 per cent in 2001-03 and have dropped further since.. Hedge funds are clearly here to stay, and continue to attract the best talent because of their payout structures; however, their ability to continue to command a premium fee 24 structure will eventually be limited by their ability to differentiate themselves from their long-only counterparts. Researches (Fung, Hsieh, Naik, and Ramadorai, 2005) also indicate. Conditioning incentives on risk-adjusted performance may be a better way to go. In other words, a fund manager must be provided with his incentive fees, which instead of basing on total returns must be based on the generated alpha (excess returns). If such a scenario develops in the hedge fund industry, the new players entering the industry would be more careful in applying sophisticated strategies, generating better returns. 25 6. CONCLUSIONS Hedge funds have attracted significant capital over the last decade triggered by successful track records. The number of hedge funds has also increased rapidly from 100 to about 7000 between 1987 and 2004. The capital inflows into the industry have got the extra push with institutions like pension funds and endowments showing interest in hedge funds. Most of the growth in the hedge funds is concentrated in just US market. In recent times, though, the acceptance of hedge funds seems to be growing through out other markets – Europe and Asia with Japan growing as the new potentially big center. The previously enjoyed by the high net worth individuals, hedge funds, are making their way to other segments of the market too -, in expectation of higher returns. The cumulative return performance of the Pure Style Hedge Fund Index calculated through simple average using a returns data of 10 years (1996-2005) shows the hedge fund industry to grow its investment at better rates than most of the market indexes. In the more recent times, the Russell’s Midcap and Microcap indexes outperformed the Pure Style Index. In terms of Standard Deviation, the PSA had made itself a place at a high elevation, far from the market indexes. The performance graphs indicate that the volatility of the hedge funds on an average are quite low compared to the market indexes and thus can be considered to as ‘living up to their name – Hedge Funds”. 26 The enhanced Sharpe ratio calculated using the Cornish-Fisher VaR indicates superior performance by the Pure Style Average and thus the hedge fund industry on an average. The PSA outperforms the market indexes in the overall period of consideration (10 years) and also thrice in four sub-divided periods. Correlation between the PSA and the small and mid-cap segment indexes has been consistently high during the sub-divided periods. The slow and sudden rises of the correlation between the pure style index and the MSCI EAFE indicates that the hedge funds realized that the US markets have saturated and exploiting inefficiencies in valuation has become difficult and thus moved to the other developed markets to gulp the new potential. Though the pure style index outperforms the market by a big margin, the absolute returns of the hedge funds have got diluted with the increase in the number of new players. One solution to this could be charging incentive fees based on alphas instead of the total absolute returns. Also the alphas have to be measured with respect to a benchmark that assumes similar risk profile. 27 REFERENCES Jeffrey P. James, ‘Exploiting the Inefficiencies of The Small Cap Market Through Hedge Funds’, AIMA Journal, December 2003. GFS Induction Programme, ‘Hedge Fund Strategies’, M Allen, Aug 2004. Martin Eling, ‘Autocorrelation, Bias and Fat Tails – Are Hedge Funds Really Attractive Investements? ’, Working Papers on Risk Management and Insurance No. 8, Universitat St. Gallen. Alexander M. Ineichen, ‘Hedge Funds: Bubble or New Paradigm? ‘, Journal of Global Financial Markets, Vol.2, No. 4, 21 November 2001. The President’s Working Group on Financial Markets, ‘Hedge Funds, Leverage, and the Lessons of Long-Term Capital Management’, April 1999. William Reichenstein, ‘What Are You Really Getting When You Invest in a Hedge Fund?‘, July 2004. LJH Global Investments, ‘Why Invest in Hedge Funds Anyway? ‘, June 2001. Investor Force, ‘Hedge Fund Survey’, January 2003. Harry M Kat, Sa Lu, ‘An Excursion into the Statistical Properties of Hedge Funds’, ISMA Discussion Papers in Finance 2002-12, 1 May 2002. Francis Koh, Winston T. H. Koh, and Melvyn Teo, ‘Asian Hedge Funds: Return Persistence, Style, and Fund Characteristics’, June 2003. Standard & Poor’s, ‘December Rally Adds to Hedge Fund Returns for 2005’, 11 January 2006. Jean Francois Bacmann and Gregor Gawron, ‘Fat Tail Risk in Portfolios of Hedge Funds and Traditional Investements’, RMF Investment Group, January 2004. Mark Chambers, ‘Hedge Fund Indices’, Man Investment Products, AIMA Journal, December 2002. Thomas Della Casa and Mark Rechsteiner, ‘Hedge Fund Indices’, RMF Hedge Fund Research, December 2004. Walter Gehin, ‘Hedge Fund Returns: An Overview of Return-Based and Asset-Based Style Factors’, January 2006. 28 Roger W. Merritt, Fitch Ratings, and Ian C. Linnell, ‘Hedge Funds: An Emerging Force in the Global Credit Markets’, 28 February 2006. David Setters, ‘Hedge Funds and Derivatives: A Maturing Relationship’. Francois-Serge Lhabitant, ‘Hedge Funds Investing: A Quantitative Look Inside the Black Box’, August 2001. Staff Report to the United States Securities and Exchange Commission, ‘Implications of Growth of Hedge Funds’, September 2003. Eurekahedge, ‘Key Trends in Asian Hedge Funds’, AIMA Journal, April 2004. Beng Liang, ‘On the Performance of Hedge Funds’, May 1998. Hedgequest, ‘Searching for the perfect risk-adjusted performance measure’, Summer 2005. Noel Amanc and Lionel Martellini, ‘The Brave New World of Hedge Fund Indexes’, 28 January 2002. Vadim Zlotnikov and Guillermo Maclean, ‘Hedge Fund Industry Update – One Year Later, The Song Remains the Same’, Bernstein Research Call, 28 July 2004. David Gordon, ‘What Goes Up, Comes Down’, 2002 AIC Conference. Harry M. Kat, ’10 Things That Investors Should Know About Hedge Funds’, Spring 2003. CISDM Research Department, ‘Benefits of Hedge Funds’, 2005 Update. Capocci Daniel, ‘An Analysis of Hedge Fund Performance 1984-2000’, November 2001. Capocci Daniel, ‘Comparitive Analysis of Hedge Fund Returns’, January 2006. William Fung, David A. Hsieh, Narayan Y. Naik and Tarun Ramadorai, ‘Hedge Funds: Performance, Risk and Capital Formation’, September 2005. Kevin Dowd, ‘Too Big To Fail? Long Term Capital Management and the Federal Reserve’, 23 September 1999. Jenny Corbett and David Vines, ‘East Asian Currency and Financial Crises: Lessons from Vulnerability, Crisis and Collapse’, Asia Pacific Press, 1999. Lindsay I Smith, ‘A Tutorial on Principal Components Analysis’, 26 February 2002. 29 Graham Bird and Ramkishen S. Rajan, ‘Recovery or Recession? Post-Devaluation Output Performance: The Thai Experience’, Center for International Economic Studies, November 2000. nd.pdf l=EN&b=10 30 nfo&id=21 31 APPENDIX A: ABCs of Hedge Funds What is a Hedge Fund?. Key Characteristics of Hedge Funds Hedge funds utilize a variety of financial instruments to reduce risk, enhance returns and minimize the correlation with equity and bond markets. Many hedge funds are flexible in their investment options (can use short selling, leverage, derivatives such as puts, calls, options, futures, etc.). 32 business. In addition, hedge fund managers usually have their own money invested in their fund. Facts About the Hedge Fund Industry Includes a variety of investment strategies, some of which use leverage and derivatives while others are more conservative and employ little or no leverage. Many hedge fund strategies seek to reduce market risk specifically by shorting equities or through the use of derivatives. Many hedge fund strategies, particularly arbitrage strategies, are limited as to how much capital they can successfully employ before returns diminish. As a result, many successful hedge fund managers limit the amount of capital they will accept. Hedge fund managers are generally highly professional, disciplined and diligent.. 33 Investing in anticipation of a specific event - merger transaction, hostile takeover, spin-off, exiting of bankruptcy proceedings, etc. Investing in deeply discounted securities - of companies about to enter or exit financial distress or bankruptcy, often below liquidation value. Hedge Fund Styles The predictability of future results shows a strong correlation with the volatility of each strategy. Future performance of strategies with high volatility is far less predictable than future performance from strategies experiencing low or moderate volatility. Aggressive Growth: Invests in equities expected to experience acceleration in growth of earnings per share. Generally high P/E ratios, low or no dividends; often smaller and micro cap stocks 34: 35 add re-buy them at a future date at a lower price due to the manager's assessment of the overvaluation of the securities, or the market, or in anticipation of earnings disappointments often due to 36outs. Value: Invests in securities perceived to be selling at deep discounts to their intrinsic or potential worth. Analysts may out of favor or under follow such securities. Long-term holding, patience, and strong discipline are often required until the market recognizes the ultimate value. Expected Volatility: Low – Moderate. 37 Benefits. How does a hedge fund "hedge" against risk? Some funds that are called hedge funds don't actually hedge against risk. Because the term is applied to a wide range of alternative funds, it also encompasses funds, however, do seek to hedge against risk in one way or another, making consistency and stability of return, rather than magnitude, their key priority. (In fact, less than 5 percent of hedge funds are global macro funds). Event-driven strategies, for example, such as investing in distressed or special situations reduce risk by being uncorrelated to the markets. They may buy interest-paying bonds or trade claims of companies undergoing reorganization, 38. 39 What is the difference between a hedge fund and a mutual fund? There are five key distinctions: 1. Mutual funds are measured on relative performance - that is, their performance is compared to a relevant index such as the S&P 500 Index or to other mutual funds in their sector. Hedge funds are expected to deliver absolute returns - they attempt to make profits under all circumstances, even when the relative indices are down. 2. Mutual funds are highly regulated, restricting the use of short selling and derivatives. These regulations serve as handcuffs, making it more difficult to outperform the market or to protect the assets of the fund in a downturn. Hedge funds, on the other hand, are unregulated and therefore unrestricted - they allow for short selling and other strategies designed to accelerate performance or reduce volatility.. Mutual funds generally remunerate management based on a percent of assets under management. Hedge funds always remunerate managers with performancerelated incentive fees as well as a fixed fee. Investing for absolute returns is more demanding than simply seeking relative returns and requires greater skill, knowledge, and talent. Not surprisingly, the incentive-based performance fees tend to attract the most talented investment managers to the hedge fund industry. 4. Mutual funds are not able to effectively protect portfolios against declining markets other than by going into cash or by shorting a limited amount of stock index futures. Hedge funds, on the other hand, are often able to protect against 40. The future performance of mutual funds is dependent on the direction of the equity markets. It can be compared to putting a cork on the surface of. 41 What is a Fund of Hedge Funds? A diversified portfolio of generally uncorrelated hedge funds. May be widely diversified, or sector or geographically focused. Seeks to deliver more consistent returns than stock portfolios, mutual funds, unit trusts or individual hedge funds. Preferred investment of choice for many pension funds, endowments, insurance companies, private banks and high-net-worth families and individuals. Provides access to a broad range of investment styles, strategies and hedge fund managers for one easy-to-administer investment. Provides more predictable returns than traditional investment funds. Provides effective diversification for investment portfolios. Benefits of a Hedge Fund of Funds Provides an investment portfolio with lower levels of risk and can deliver returns uncorrelated with the performance of the stock market. Delivers more stable returns under most market conditions due to the fund-offund manager’s ability and understanding of the various hedge strategies. Significantly reduces individual fund and manager risk. Eliminates the need for time-consuming due diligence otherwise required for making hedge fund investment decisions. Allows for easier administration of widely diversified investments across a large variety of hedge funds. Allows access to a broader spectrum of leading hedge funds that may otherwise be unavailable due to high minimum investment requirements. Is an ideal way to gain access to a wide variety of hedge fund strategies, managed by many of the world’s premier investment professionals, for a relatively modest investment. 42 APPENDIX B: About Hedge Fund Indexes ALTVEST InvestorForce produces a family of 14 hedge fund indices, the master index (Altvest hedge fund index) and 13 sub-indices. The indices are equal weighted for all included hedge funds that have complete performance records since trading commencement. The master index and sub-indices commence January 1993. The number of funds included in the indices will continue to grow as new funds meet the specified criteria. In the following, the "current performance month" corresponds to previous calendar month (for example, if March 2000 is the current performance month, April 2000 is the current calendar month). Current performance month index returns will be updated everyday of the current calendar month as InvestorForce funds report and new funds are added on InvestorForce. InvestorForce will freeze the current performance month return at 2300 hours on the last day of the current calendar month. Monthly historical numbers will never be rebalanced to account for new funds added or funds removed. Calculation Methodology For each month the universe of funds that meet the specified criteria are grouped according to master and as well as sub-index. For each grouping an arithmetic average is calculated for the given month. Simple average of monthly rates of return: Where Ri = rate of return (net of all fees) for the ith month since inception, and s = # of months since inception 43 Master Index: The master index is comprised of all funds on the InvestorForce database that have reported performance for the current month and provided complete performance records since the time of trading commencement. All sub indices are comprised of funds that are included in the master index. 44 CSFB / TREMONT Index Characteristics The Credit Suisse/Tremont Hedge Fund Index is the largest asset-weighted hedge fund index. Unlike even-weighted indices, it does not underweight top performers and overweight decliners. The Credit Suisse/Tremont Hedge Fund Index is broadly diversified, encompassing 424 funds (April 2006) across ten style-based sectors, and representative of the entire hedge fund industry. Index construction is fully transparent, with unbiased, rules - based selection criteria and published constituents. Rigorous reporting standards are required of member funds, including monthly performance disclosure and audited financial statements. Credit Suisse / Tremont Hedge Fund Index Specifications The industry’s first asset-weighted hedge fund index. Provides investors with the first suite of indices designed from the ground up to provide meaningful performance measurement—not built around an investable product. Composite Index comprised of ten style-based sector indices Funds drawn from the Credit Suisse/Tremont database of approximately 4500 funds Selection universe consists of funds meeting Credit Suisse/Tremont minimum criteria: Timely and accurate NAV reporting—every month Audited financial statements At least $50 million under management One-year track record (discretionary exceptions for funds 45 With more than $500 million under management) Index represents at least 85% of assets under management in selection universe for each sector—total composite Index membership of 424 funds (April 2006). 46 EACM-100 ONSHORE The EACM100® Index is an equally-weighted composite of 100 hedge funds selected by EACM Advisors LLC (EACM) to be a representative sample of the various hedge fund strategies available to investors. EACM100® is designed to be an intelligent benchmark, with manager participation based on both quantitative and qualitative factors. The EACM100® Index is not a database. It is an investable group of 100 investment managers, chosen by the investment professionals at EACM to effectively sample the universe of hedge fund styles. Managers are grouped into 5 broad investment strategies, which are further cut into 12 sub-strategies. Performance results are calculated at each level to provide a better understanding of strategy and sub-strategy performance. The Index is useful as a diversified composite of hedge fund styles or as selected strategies and/or sub-strategies. The EACM100® Index is designed to mimic a reasonably constructed fund of hedge funds product. Specific strategy weights are set by EACM, using an investment-driven approach. Individual managers are always equally weighted at the beginning of each calendar year, with each manager representing 1% of the total index. This approach provides a useful framework for analysis and avoids the pitfalls of a haphazard grouping of managers whose strategy weights are solely determined by availability of data. Manager Selection Performance is not the key factor to manager selection. The investment professionals at EACM carefully review the investment discipline of each manager and select only funds, which are representative of a specific trading style. Inclusion in the Index is not contingent on any prior business arrangement with EACM, nor do investment managers 47 pay any fees to participate. However, managers do agree to a number of terms and conditions. Each manager must have at least a two-year live performance record, a minimum of $20 million in the investment strategy, be open for new investments, and provide at least annual liquidity. Only managers with both an onshore and offshore version of the same product can be considered. Managers also agree to provide monthly performance results on a timely basis, including any and all revisions, and to provide fund documentation, such as offering memoranda and audited financials. Performance is also not a cause for removal from the EACM100®. Funds are dropped from the Index for a variety of reasons, but poor performance will result in removal only where a blow-up situation occurs and the fund dissolves. During a blow-up situation, the complete results of the ailing fund are reflected in the Index totals. More common reasons for manager replacements include straying from the investment strategy, a lack of adequate assets under management, not being open to new investments or simply closing-up shop. The manager line-up is reviewed on an annual basis and substitutions are made at the beginning of each calendar year. In the case of managers dropping out during the year, a special rebalancing is performed at mid-year. Manager selection or removal is not a capricious act; it is a well thought out decision, made only after careful review. 48 HFR HEDGE FUND INDEX (HFRI) Methodology The HFRI Monthly Indices (HFRI) are equally weighted performance indexes, utilized by numerous hedge fund managers as a benchmark for their own hedge funds. The HFRI are broken down into 37 different categories by strategy, including the HFRI Fund Weighted Composite, which accounts for over 1600 funds listed on the internal HFR Database. Due to mutual agreements with the hedge fund managers listed in the HFR Database, we are not at liberty to disclose the particular funds behind any index to nondatabase subscribers. Funds included in the HFRI Monthly Indices must: • • • Report monthly returns Report Net of All Fees Returns Report assets in USD Indices Notes: • • • All HFRI Indices are fund weighted (equal weighted). There is no required asset-size minimum for fund inclusion in the HFRI. There is no required length of time a fund must be actively trading before inclusion in the HFRI. The HFRI are updated three times a month: Flash Update (5th business day of the month), Mid Update (15th of the month), and End Update (1st business day of following month) • • The current month and the prior three months are left as estimates and are subject to change. All performance prior to that is locked and is no longer subject to change. 49 • If a fund liquidates/closes, that fund's performance will be included in the HFRI as of that fund's last reported performance update. The HFRI Fund of Funds Index is not included in the HFRI Fund Weighted Composite Index. Both domestic and offshore funds are included in the HFRI. In cases where a manager lists mirrored-performance funds, only the fund with the larger asset size is included in the HFRI. • • • 50 HENNESSEE HEDGE FUND INDEX The Hennessee Hedge Fund Indices® are calculated from performance data supplied by a diversified group of hedge funds monitored by the Hennessee Group LLC. The Hennessee Hedge Fund Index® is believed to represent over half of the capital in the industry and is an equally-weighted average of the funds in the Hennessee Hedge Fund Indices®. The funds in the Hennessee Hedge Fund Index® are believed to be statistically representative of the larger Hennessee Universe of over 3,000 hedge funds and are net of fees and unaudited. Past performance is no guarantee of future returns. Conditions for inclusion in the index are as follows: • • The firm should have at least $10 million in hedge fund assets. The fund should have at least 12 months of track record. An exception will be extended to any firm, which has over $100 million in hedge fund assets. The fund should satisfy the Hennessee Group LLC reporting requirements. Funds are not eliminated from the index unless they are liquidated or fail to satisfy the inclusion criteria, as set forth above. If eliminated, the fund's past performance will remain in the index in order to avoid survivorship bias. • • Rebalancing of the index will take place on an annual basis. 51 MSCI HEDGE INVEST INDEX The MSCI Hedge Invest Index is designed to be both investable and to reflect the overall structure and composition of the hedge fund universe. The MSCI Hedge Invest Index aims to reflect the overall structure and composition of the hedge fund universe from the funds available on a hedge fund platform managed by Lyxor Asset Management. The funds on the platform available for the index offer weekly liquidity. To replicate the characteristics of the overall hedge fund universe, MSCI starts with the MSCI Hedge Fund Composite Index, constructed independently of any hedge fund platform, and identifies the most liquid and significant investment segments. The investable index is rebalanced quarterly and the indicative index performance is published daily on and Bloomberg. At the October 2005 Quarterly Index Review, the number of funds in the MSCI Hedge Invest Index increased to 125, with the investment segment weights set forth below. Note that the number of funds included in the index is expected to grow as the platform expands. 52 VAN GLOBAL HEDGE FUND INDEX About VAN Founded in 1992, Van was the first to collect data and perform large-scale research on the broad universe of hedge funds. Today, Van is recognized as an alternative investment expert with one of the most comprehensive hedge fund platforms available and is the provider of the Van Global Hedge Fund Index, an industry-standard benchmark of the hedge fund asset class. In addition to the index publishing and research division, Van is a leading hedge fund investment advisor and provider of index-linked hedge fund products and services to investors worldwide. Several of the Van Companies are registered as Investment Advisors with the SEC. Van has one of the oldest and most extensive hedge fund management platforms in existence, tracking more than 6,700 funds worldwide. This number includes only hedge funds as they are generally defined, and excludes fund-of-funds as well as certain other types of investments such as private equity funds, venture capital funds, separately managed account strategy composites, etc. Both quantitative and qualitative information is included for each fund in the database. Institutional investors, hedge fund managers and media worldwide recognize Van as an authoritative source for hedge fund indexing. Van’s flagship indices, the Van Global Hedge Fund Indices, are considered among the financial industry’s oldest and most widely utilized composite benchmarks of global hedge fund performance. Initially compiled in 1994 and published in 1995, the Van Global Hedge Fund Indices provide more than 17 years of aggregate risk and return history that represent the average performance of hedge funds around the world, tracking the performance of the overall hedge fund universe. The indices and strategy sub-indices, updated monthly, are based on underlying hedge funds returns, are net of underlying manager fees, and are simple averages (not dollar-weighted averages) and do not include Fund-of-funds. 53 A minimum of 100 funds is required for the preliminary index, a minimum of 800 funds is required for the mid-month index, and a minimum of 1,000 funds is required for the month-end index. The actual number of constituents for these indices often greatly exceeds the minimum requirements, especially for the preliminary and month-end indices. 54 APPENDIX C: About Market Indexes DOW JONES INDUSTRIAL AVERAGE Overview When Charles H.Dow first unveiled his industrial stock average on May 26, 1896,'s college tuition bills and their own retirements. Information to guide them in their investment decisions is abundantly available., which were 55, Mr.. 56 Using such large, frequently traded stocks the most-quoted market indicator in newspapers, on TV and on the Internet. Because of its longevity, it became the first to be quoted by other publications. This practice became habit when Wall Street earned at least a mention in the general news each day, and habit became tradition when the post-World War II bull market galvanized the nation's attention. The Industrial Average became the indicator to cite if you were citing only one. Besides longevity, two other factors play a role in its widespread popularity: It is understandable to most people, and it reliably indicates the market's basic trend. 57 MSCI EAFE INDEX MSCI has been the world’s leading benchmark provider since 1969, providing global, regional and sector products and services to international investors. In North America, MSCI’s market share of the international equity indexing industry is over 90%. MSCI has achieved this preeminent position by constructing precise benchmarks that consistently reflect the business activities of equity markets worldwide. MSCI’s Equity Indices are developed and maintained by an experienced staff of researchers based in Europe, the United States and Asia. The MSCI EAFE Index® is recognized as the pre-eminent benchmark in the United States to measure international equity performance. It comprises 21 MSCI country indices, representing the developed markes outside of North America: Europe, Australasia and the Far East. Since inception in, the MSCI EAFE Index has had an average gross annual return of 11.2%. MSCI aims to include in its international indices 85% of the free fl oat-adjusted market capitalization in each industry group, within each country. As of Dec 30, 2005 the MSCI EAFE Index contained 1,137 securities with a total market capitalization of over USD 10.2 trillion. 58 S&P 500 The S&P 500 is a list of 500 US corporations, ordered by market capitalization.. The index was previously market-value weighted; that is, movements in price of companies whose total market valuation (share price times the number of outstanding shares) is larger will have a greater effect on the index than companies whose market valuation is smaller. The index has since been converted to float weighted; that is, only shares available for public trading ("float") are counted. The transition was made in two tranches, the first on March 18, 2005 and the second on September 16, 2005. 59 WILSHIRE 5000 The Dow Jones Wilshire 5000 Total Stock Market Index, also known as the Dow Jones Wilshire 5000 Composite Index or simply the Wilshire 5000 is a broad base stock market index often used to represent the entire United States stock market. It measures the performance of all public companies based in the United States with "readily available price data"; that is, the value of common stock, real estate investment trusts (REITs), and limited partnerships of companies whose primary stock market listing is on the New York Stock Exchange, NASDAQ, or American Stock Exchange. The Wilshire 5000 is a market capitalization-weighted index, meaning price change in its components are factored against the total market capitalization of those components. Dow Jones publishes both an index based on full market capitalization and also one based on a float-adjusted market capitalization, reflecting the number of shares actually available to trade. The list of securities is updated monthly to add new listings for corporate spin-offs and initial public offerings, and to remove companies which move to the pink sheets or stop trading for ten days. The index was created by Wilshire Associates in 1974 and named for the approximate number of issues it included at that time. It was renamed the "Dow Jones Wilshire 5000" after the Dow Jones & Company took over responsibility for its calculation and maintenance in April 2004. 60 RUSSELL MIDCAP larger stocks do not distort the performance and characteristics of the true midcap opportunity set. The Russell Midcap Index includes the smallest 800 securities in the Russell 1000. RUSSELL 2000. RUSSELL MICROCAP The Russell Microcap Index offers investors access to the Microcap segment of the U.S. equity market. It makes up less than 3% of the U.S. equity market and is represented by the smallest 1,000 securities in the small-cap Russell 2000 Index plus the next 1,000 securities. Index is completely reconstituted annually to ensure larger stocks do not distort performance and characteristics of the true Microcap opportunity set. 61 APPENDIX D: About Skewness and Kurtosis A fundamental task in many statistical analyses is to characterize the location and variability of a data set. A further characterization of the data includes skewness and kurtosis. Ske. For univariate data Y1, Y2, ..., YN, the formula for skewness is: Where is the mean, s is the standard deviation, and N is the number of data points. The skewness for a normal distribution is zero, and any symmetric data should have skewness near to zero. Negative values for the skewness indicate data that are skewed left and positive values for the skewness indicate data that are skewed right. By skewed left, we mean that the left tail is long relative to the right tail. Similarly, skewed right means that 62 the right tail is long relative to the left tail. Some measurements have a lower bound and are skewed right. For example, in reliability studies, failure times cannot be negative. For univariate data Y1, Y2, ..., YN, the formula for kurtosis is: Where is the mean, s is the standard deviation, and N is the number of data points. The kurtosis for a standard normal distribution is three. For this reason, excess kurtosis is defined as so that the standard normal distribution has a kurtosis of zero. Positive kurtosis indicates a "peaked" distribution and negative kurtosis indicates a "flat" distribution. The following example shows histograms for 10,000 random numbers generated from a normal, a double exponential, a Cauchy, and a Weibull distribution. 63 The first histogram is a sample from a normal distribution. The normal distribution is a symmetric distribution with well-behaved tails. This is indicated by the skewness of 0.03. The kurtosis of 2.96 is near the expected value of 3. The histogram verifies the symmetry. The second histogram is a sample from a double exponential distribution. The double exponential is a symmetric distribution. Compared to the normal, it has a stronger peak, more rapid decay, and heavier tails. That is, we would expect a skewness near zero and a kurtosis higher than 3. The skewness is 0.06 and the kurtosis is 5.9. The third histogram is a sample from a Cauchy distribution. skewness to. The fourth histogram is a sample from a Weibull distribution with shape parameter 1.5. The Weibull distribution is a skewed distribution with the amount of skewness depending on the value of the shape parameter. The degree of decay as we move away from the center also depends on the value of the shape parameter. For this data set, the skewness is 1.08 and the kurtosis is 4.46, which indicates moderate skewness and kurtosis. Many classical statistical tests and intervals depend on normality assumptions. Significant skewness and kurtosis clearly indicate that data are not normal. If a data set exhibits significant skewness or kurtosis (as indicated by a histogram or the numerical measures), what can we do about it? 64. 65
https://www.scribd.com/document/31797206/A-Performance-Analysis-of-the-Hedge-Funds-Industry
CC-MAIN-2018-05
refinedweb
11,456
51.99
Am 24.03.2019 um 15:59 schrieb William F Pokorny: > That was painful to run down, but changing Parser::IsEndOfInvokedMacro() > in the file source/parser/parser_tokenizer.cpp as in the attached file > fixes the issue. Well, that's a lead I might investigate. But as you don't seem to be confidently understanding what's going on, I'll be going with the presumption that this is not a proper fix but just a patch. > Have to admit I'm not sure even now I completely understand what was > happening (how it could happen perhaps). We seem to have been comparing > structures in the original - which I don't think c++ does by default? > > return (Cond_Stack.back().PMac->endPosition == > CurrentFilePosition()) && ... > > If this true though, why no compile warning or error? That's easily explained: The data is of type `LexemePosition`, which is a struct defining an overloaded comparison operator. (In C++, structs and classes are the same category of beasts. The only semantic difference between `struct` and `class` is the default member and base class access: `struct` implies `public` by default, while `class` implies `private` by default. According to the language standard, it is even perfectly valid to pre-declare a type as `struct` and later define it as `class` or vice versa.) The `LexemePosition` type holds a file offset, as well as a line and column number. Comparison is done by file offset, but the code also features an assertion, which tests whether the comparison by line and column yields the same result. If the assertion fails, debug builds bomb (i.e. core dump or break into a debugger), and non-debug builds throw an exception to trigger a parse error. (Usually such assertion tests are only enabled in debug builds, but in the parser they're currently deliberately enabled in all builds, because I don't fully trust my current understanding of the parser and the refactoring work I have based on that understanding.) If `Cond_Stack.back().PMac->endPosition == CurrentFilePosition()` throws an exception, it means either of two things: (A) The positions compared are from two different files, which means the calling code has failed to check whether it is even in the right file. (B) The line/column tracking is buggy, and yields different positions in different situations. > In any case, once > in a while on the structure compare some error got thrown and caught by > the parser code and no message was set which is why we get the unhelpful > generic something wrong message. This is the default error message for failed assertions. Unfortunately somewhere along the chain of error message handling the location information for the assertion seems to be lost, the original exception should include the nformation that it was triggered in `parsertypes.h` line 62 (in my version of the file at any rate), maybe even that it was in function `LexemePosition::operator==` (depending on compiler). Maybe it's worth investigating and fixing that.
http://news.povray.org/povray.bugreports/message/%3C5c995c61%241%40news.povray.org%3E/
CC-MAIN-2019-26
refinedweb
492
51.07
The same page in french: MathsAuLycee The developpement page: HighSchoolDesign Organisation of the week: MathsAuLyceeOrganisation Projet We aim to develop a version of the notebook dedicated to high school usage. A dedicated (french) notebook is available on internet: The main needed features are - reduction of the global namespace - add dedicated functions - translations (in french) of the notebook - translation of the documentation - translation of function docstrings For the two first, a patch is available. High School Sage Math.org HSSageMath.org is a website designed for High School and Middle School level students and educators. We hope to use this wiki and Sage notebook server to share helpful Sage tools and information with the world. There is a tremendous amount of helpful and essential information about Sage at. However, the information there is overwhelming in size and sophistication - it is primarily aimed at university level users. The objective of High School Sage Math is to provide bite-size Sage resources and lessons that would be useful in a high school or middle school course. The site is quite simple. Have you been working on cool things in Sage? Did you learn something new today that you think would be helpful to other students or educators? All you need to do is share your Sage interactive worksheet on this website. The site is completely free and all that we ask is that you respect our goal to educate and share resources in High School and Middle School environments. You are also welcome to improve on any existing contributions - please feel free to make your changes or additions and put in your replacement. We encourage everyone to browse the site, contribute to the wealth of information, and help us in continuing to grow the Sage Math community. Visit [] to view the resources and make contributions!
http://www.sagemath.org:9001/SageForHighSchool
crawl-003
refinedweb
302
60.95
The problem so far is how to get the second row of data out of the file and put it in an array. For example if my file look like this: 1. 20 2. 15 3. 77 4. 15 5. 29 6. 77 How would I get the 20, 15, etc. into an array without reading the 1. , 2. , 3. ? I have more questions but I need to figure this out first before I get into that. Thanks in advance!!! Here's my code so far: #include <iostream> #include <iomanip> #include <fstream> using namespace std; int main (int argc, char * const argv[]) { std::cout << "Hello, World!\n"; double time_to_accesstrack, sort_coeficient; string input_filename; cout << "Welcome to the I/O reordering analyzer" << endl; cout << "\n"<< endl; cout << "How long in milleseconds does the drive need to access a track?"; cin >> time_to_accesstrack; cout << "What is the CPU's sort coefficient (in milliseconds)?"; cin >> sort_coeficient; cout << "What is the name of the test data file?"; cin >> input_filename; cout << "\n" << endl; cout << "Analyzing..."<< endl; while (true) { string infilename; getline( cin, input_filename ); infile.open( input_filename.c_str() ); if(!inFile) { cout << endl << "Failed to open file " << input_filename; return 1; } long n = 0; while(!inFile.eof()) { inFile >> n; cout << std::setw(15) << n; } cout << endl; return 0; } }
http://www.dreamincode.net/forums/topic/309590-c-problem-how-to-read-certain-data-from-text-file-into-array/page__p__1791628
CC-MAIN-2013-20
refinedweb
208
74.19
I am having a problem with the loop. I have added the comments to this code as I understand it. It appears that the code should loop after the if test to see if the answer is yes. If the answer is yes it should go back into the loop, if no then it should exit via the return. The code runs, takes the values and displays them correctly, But when I enter Y or y it takes the bvalues and then exits. Why is the loop not working when the value of more is y or Y? #include <stdio.h> void main() { int start=0; /* integer for the starting number */ int end=0; /* integer for the ending number */ int i =0; /* increment counter to the value of &end */ char more; /* variable to continue with the loop y||Y */ printf(" Please enter the starting and ending values\n"); /* statement to user */ scanf(" %d" " %d", &start, &end); /* getting the values */ printf(" dec hex char \n"); /* printing the display header */ for( i = start; i <= end; ++i) /* looping through the numbers adding 1 to end */ { printf(" %d %x %c \n", i, i, i); /*printing the values of each type */ } printf("Would you like to try some more values? (N/Y)?: \n"); /* ask for more input values */ scanf(" %c", &more); /* look for value to be yes or no */ printf("Please enter the values.\n"); /* ask for values */ scanf(" %d" " %d", &start, &end); /* take input values */ if(more == 'y' || more == 'Y' ); /* if test for yes loop */ else {(more == 'n' || more == 'N' ); /* else exit program */ return;} }
https://cboard.cprogramming.com/c-programming/33823-loop.html
CC-MAIN-2017-43
refinedweb
260
68.64
#include <ParagraphLayout.h>. Note that the ICU layout engine has been deprecated and removed. You may use this class with the HarfBuzz icu-le-hb wrapper, see See for special build instructions. Definition at line 51 of file ParagraphLayout. ICU "poor man's RTTI", returns a UClassID for the actual class. Reimplemented from icu::UObject. Definition at line 553 of file ParagraphLayout.h. Return the resolved paragraph level. This is useful for those cases where the bidi analysis has determined the level based on the first strong character in the paragraph. Definition at line 648 of file ParagraphLayout.h. References ubidi_getParaLevel(). ICU "poor man's RTTI", returns a UClassID for this class. Definition at line 546 of file ParagraphLayout.h. Return the directionality of the text in the paragraph. UBIDI_LTRif the text is all left to right, UBIDI_RTLif the text is all right to left, or UBIDI_MIXEDif the text has mixed direction. Definition at line 653 of file ParagraphLayout.h. References ubidi_getDirection(). Return a ParagraphLayout:. Reset line breaking to start from the beginning of the paragraph. Definition at line 658 of file ParagraphLayout.h.
https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/classicu_1_1ParagraphLayout.html
CC-MAIN-2021-39
refinedweb
184
61.12
Just before the holidays I was working on a .NET Core project that needed data available from some web services. I’ve done this a bunch of times previously, and always seem to spend a couple of hours writing code using the HttpClient object before remembering there are libraries out there that have done the heavy lifting for me. So I thought I’d do a little write up of a couple of popular library options that I’ve used – RestSharp and Flurl. I find that learn quickest from reading example code, so I’ve written sample code showing how to use both of these libraries with a few different publically available APIs. I’ll look at three different services in this post: - api.postcodes.io – no authentication required, uses GET and POST verbs - api.nasa.gov – authentication via an API key passed in the query string - api.github.com – Basic Authentication required to access private repo information And as an architect, I’m sometimes asked how to get started (and sometimes ‘why did you chose library X instead of library Y?’), so I’ve wrapped up with a comparison and which library I like best right now. Reading data using RestSharp This is a very mature and well documented open source project (released under the Apache 2.0 licence), with the code available on Github. You can install the nuget package in your project using package manager with the command: Install-Package RestSharp First – using the GET verb with RestSharp. Using HTTP GET to return data from a web service Using Postcodes.io I’ve been working with mapping software recently – some of my data sources don’t have latitude and longitude for locations, and instead they only have a UK postcode. Fortunately I can use the free Postcodes.io RESTful web API to determine a latitude and longitude for each of the postcode values. I can either just send a postcode using a GET request to get the corresponding geocode (latitude and longitude) back, or I can use a POST request to send a list of postcodes and get a list of geocodes back, which speeds things up a bit with bulk processing. Let’ start with a simple example – using the GET verb for a single postcode. I can request a geocode corresponding to a postcode from the Postcodes.io service through a browser with a URL like the one below: 3JR This service doesn’t require any authentication, and the code below shows how to use RestSharp and C# to get data using a GET request. // instantiate the RestClient with the base API url var client = new RestClient(""); // specify the resource, e.g. 3JR var getRequest = new RestRequest("postcodes/{postcode}"); getRequest.AddUrlSegment("postcode", "IP1 3JR"); // send the GET request and return an object which contains the API's JSON response var singleGeocodeResponseContainer = client.Execute(getRequest); // get the API's JSON response var singleGeocodeResponse = singleGeocodeResponseContainer.Content; The example above returns raw JSON content, which I can deserialise into a custom POCO, such as the one below. public class GeocodeResponse { public string Status { get; set; } public Result Result { get; set; } } public class Result { public string Postcode { get; set; } public string Longitude { get; set; } public string Latitude { get; set; } } But I can do better than the code above – if I specify the GeocodeResponse type in the Execute method (as shown below), RestSharp uses the classes above and intelligently hydrates the POCO from the raw JSON content returned: // instantiate the RestClient with the base API url var client = new RestClient(""); // specify the resource, e.g. var getRequest = new RestRequest("postcodes/{postcode}"); getRequest.AddUrlSegment("postcode", "OX495NU"); // send the GET request and return an object which contains a strongly typed response var singleGeocodeResponseContainer = client.Execute<GeocodeResponse>(getRequest); // get the strongly typed response var singleGeocodeResponse = singleGeocodeResponseContainer.Data; Of course, not APIs all work in the same way, so here are another couple of examples of how to return data from different publically available APIs. NASA Astronomy Picture of the Day This NASA API is also freely available, but slightly different from the Postcodes.io API in that it requires an API subscription key. NASA requires that the key is passed as a query string parameter, and RestSharp facilitates this with the AddQueryParameter method (as shown below). This method of securing a service isn’t that unusual – goodreads.com/api also uses this method. // instantiate the RestClient with the base API url var client = new RestClient(""); // specify the resource, e.g. var getRequest = new RestRequest("planetary/apod"); // Add the authentication key which NASA expects to be passed as a parameter // This gives getRequest.AddQueryParameter("api_key", "DEMO_KEY"); // send the GET request and return an object which contains the API's JSON response var pictureOfTheDayResponseContainer = client.Execute(getRequest); // get the API's JSON response var pictureOfTheDayJson = pictureOfTheDayResponseContainer.Content; Again, I could create a custom POCO corresponding to the JSON structure and populate an instance of this by passing the type with the Execute method. Github’s API The Github API will return public data any authentication, but if I provide Basic Authentication data it will also return extra information relevant to me about my profile, such as information about my private repositories. RestSharp allows us to set an Authenticator property to specify the userid and password. // instantiate the RestClient with the base API url var client = new RestClient(""); // pass in user id and password client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", "[[my password]]"); // specify the resource that requires authentication // e.g. var getRequest = new RestRequest("users/jeremylindsayni"); // send the GET request and return an object which contains the API's JSON response var response = client.Execute(getRequest); Obviously you shouldn’t hard code your password into your code – these are just examples of how to return data, they’re not meant to be best practices. You might want to store your password in an environment variable, or you could do even better and use Azure Key Vault – I’ve written about how to do that here and here. Using the POST verb to obtain data from a web service The code in the previous example refers to GET requests – a POST request is slightly more complex. The api.postcodes.io service has a few different endpoints – the one I described earlier only finds geocode information for a single postcode – but I’m also able to post a JSON list of up to 100 postcodes, and get corresponding geocode information back as a JSON list. The JSON needs to be in the format below: { "postcodes" : ["IP1 3JR", "M32 0JG"] } Normally I prefer to manipulate data in C# structures, so I can add my list of postcodes to the object below. public class PostCodeCollection { public List<string> postcodes { get; set; } } I’m able to create a POCO object with the data I want to post to the body of the POST request, and RestSharp will automatically convert it to JSON when I pass the object into the AddJsonBody method. // instantiate the ResttClient with the base API url var client = new RestClient(""); // specify the resource, e.g. var postRequest = new RestRequest("postcodes", Method.POST, DataFormat.Json); // instantiate and hydrate a POCO object with the list postcodes we want geocode data for var postcodes = new PostCodeCollection { postcodes = new List<string> { "IP1 3JR", "M32 0JG" } }; // add this POCO object to the request body, RestSharp automatically serialises it to JSON postRequest.AddJsonBody(postcodes); // send the POST request and return an object which contains JSON var bulkGeocodeResponseContainer = client.Execute(postRequest); One gotcha – RestSharp Serialization and Deserialization One aspect of RestSharp that I don’t like is how the JSON serialisation and deserialisation works. RestSharp uses its own engine for processing JSON, but basically I prefer Json.NET for this. For example, if I use the default JSON processing engine in RestSharp, then my PostcodeCollection POCO needs to have property names which exactly match the JSON property names (including case sensitivity). I’m used to working with Json.NET and decorating properties with attributes describing how to serialise into JSON, but this won’t work with RestSharp by default. // THIS DOESN'T WORK WITH RESTSHARP UNLESS YOU ALSO USE **AND REGISTER** JSON.NET public class PostCodeCollection { [JsonProperty(PropertyName = "postcodes")] public List<string> Postcodes { get; set; } } Instead I need to override the default RestSharp serializer and instruct it to use Json.NET. The RestSharp maintainers have written about their reasons here and also here – and helped out by writing the code to show how to override the default RestSharp serializer. But personally I’d rather just use Json.NET the way I normally do, and not have to jump through an extra hoop to use it. Reading Data using Flurl Flurl is newer than RestSharp, but it’s still a reasonably mature and well documented open source project (released under the MIT licence). Again, the code is on Github. Flurl is different from RestSharp in that it allows you to consume the web service by building a fluent chain of instructions. You can install the nuget package in your project using package manager with the command: Install-Package Flurl.Http Using HTTP GET to return data from a web service Let’s look at how to use the GET verb to read data from the api.postcodes.io. api.nasa.gov. and api.github.com. First, using Flurl with api.postcodes.io The code below searches for geocode data from the specified postcode, and returns the raw JSON response. There’s no need to instantiate a client, and I’ve written much less code than I wrote with RestSharp. var singleGeocodeResponse = await "" .AppendPathSegment("postcodes") .AppendPathSegment("IP1 3JR") .GetJsonAsync(); I also find using the POST method with postcodes.io easier with Flurl. Even though Flurl doesn’t have a build in JSON serialiser, it’s easy for me to install the Json.NET package – this means I can now use a POCO like the one below… public class PostCodeCollection { [JsonProperty(PropertyName = "postcodes")] public List<string> Postcodes { get; set; } } … to fluently build up a post request like the one below. I can also createmy own custom POCO – GeocodeResponseCollection – which Flurl will automatically populate with the JSON fields. var postcodes = new PostCodeCollection { Postcodes = new List<string> { "OX49 5NU", "M32 0JG" } }; var url = await "" .AppendPathSegment("postcodes") .PostJsonAsync(postcodes) .ReceiveJson<GeocodeResponseCollection>(); Next, using Flurl with api.nasa.gov As mentioned previously, NASA’s astronomy picture of the day requires a demo key passed in the query string – I can do this with Flurl using the code below: var astronomyPictureOfTheDayJsonResponse = await "" .AppendPathSegments("planetary", "apod") .SetQueryParam("api_key", "DEMO_KEY") .GetJsonAsync(); Again, it’s a very concise way of retrieving data from a web service. Finally using Flurl with api.github.com Lastly for this post, the code below show how to use Flurl with Basic Authentication and the Github API. var singleGeocodeResponse = await "" .AppendPathSegments("users", "jeremylindsayni") .WithBasicAuth("jeremylindsayni", "[[my password]]") .WithHeader("user-agent", "csharp-console-app") .GetJsonAsync(); One interesting difference in this example between RestSharp and Flurl is that I had to send user-agent information to the Github API with Flurl – I didn’t need to do this with RestSharp. Wrapping up Both RestSharp and Flurl are great options for consuming Restful web services – they’re both stable, source for both is on Github, and there’s great documentation. They let me write less code and do the thing I want to do quickly, rather than spending ages writing my own code and tests. Right now, I prefer working with Flurl, though the choice comes down to personal preference. Things I like are: - Flurl’s MIT licence - I can achieve the same results with less code, and - I can integrate Json.NET with Flurl out of the box, with no extra classes needed. About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks! 7 thoughts on “Comparing RestSharp and Flurl.Http while consuming a web service in .NET Core” Hey! I’m a big fan of Polly, and it looks like the net core team are too. They seem to have done a really good job with the httpclientfactory: Hello! Thank you for this, that’s a good link – practical ways of improving overall system quality is something I want to understand more deeply, Polly and Flurl might be a pretty awesome combination (I think that Flurl tries to address the socket issue the article describes), I’ll have to look into it to see what’s happening under the hood. You should check out It’s a light wrapper around HttpClient that provides the same fluent / convenience helpers in a more standard way since it’s not replacing the standard way of doing HTTP requests in .NET. Nice – thank you, I’ll check it out! Hi Jeremy, Nice article, Thank you for that. Have you already checked Refit for this purpose? Might make your life a bit easier. I hadn’t heard of Refit () – it looks good too, thank you for sharing. I’ve already planned my next post on Polly and Flurl but I definitely want to explore Refit and FluentRest as alternative tools for the job!
https://jeremylindsayni.wordpress.com/2018/12/27/comparing-restsharp-and-flurl-http-while-consuming-a-web-service-in-net-core/
CC-MAIN-2019-30
refinedweb
2,193
52.6
Everyone is welcome to search for "Technical Notes of Little Monkey" and follow my official account. If you have any questions, you can communicate with me in time. A thread will go through its own life cycle from creation to death. We may often see the "Running" state of the thread in blogs, but you can't find the "Running" state by reading the source code of the Thread class, then in the Java thread Is there a Running state? First of all, by looking at the source code provided by "Thread", we can see that the life cycle of a thread will go through the following states (note that at the same time, a thread can only be in one of these states): public enum State { // The state when the thread has not been started NEW, // Indicates that the "start()" method is called, which can also be subdivided into "ready" and "running" states RUNNABLE, // The thread waits to enter the synchronized method or waits to enter the synchronized block BLOCKED, // Indicates that the thread enters a waiting state, such as calling "wait()", "join()", etc. WAITING, // with timeout TIMED_WAITING, // The thread ends normally, or is forced to stop, or the state after abnormal termination is encountered TERMINATED; } NEW: You can understand by reading the source code comments provided by the official, "NEW" indicates the state when the thread has not been started. So what is it that hasn't been activated yet? Let's assume there is a thread class that implements the Runnable interface. public class ThreadState implements Runnable { @Override public void run() { } } A thread is constructed below, but the "start()" method has not been called. Before the "start" method is called, the thread is in the "NEW" state. public class ThreadStateTest { public static void main(String[] args) { Thread thread = new Thread(new ThreadState()); } } RUNNABLE: Indicates that the thread called the "start()" method. In fact, the Runnable state here can be divided into two other states, namely the ready state and the running state. public class ThreadStateTest { public static void main(String[] args) { Thread thread = new Thread(new ThreadState()); thread.start(); } } Because the startup of the thread needs to be selected by the CPU before it can run, so for the ready state: it means that the thread has been called, that is, the "start()" method has been called, but it has not really run, and it needs to wait for the CPU to schedule and execute it. , enter the "Running" state, that is, the thread is executing. Blocked: The thread waits to enter the synchronized method or wait to enter the synchronized block. WAITING: Waiting state, indicating that the thread enters the waiting state, such as calling "wait()", "join()" and other methods without timeout. The waiting thread at this moment can only be awakened by other threads, and the thread in the waiting state must first enter the "ready (ready)" state before entering the "Running" state (because this involves the thread waking up the waiting thread, yes Random) For example, in the example shown in the previous article introducing the basic usage of wait and notify, calling "this.wait()" is to make the current thread enter the waiting state. public void addMoney(int money) throws InterruptedException { synchronized (this) { while (balance <= money) { balance += money; System.out.println("Mom: Deposited into the account:" + money + "Yuan, the total account amount is:" + balance + "Yuan"); this.notify(); this.wait(); } } } TIMED_WAITING: The timeout waiting state, compared with the WAITING state, it can return by itself within a specified time. For example, calling "sleep(long time)", "wait(long)", "join(long)" and other methods with timeout, they will automatically return within a period of time. public final native void wait(long timeout) throws InterruptedException; Then the thread will return from the waiting state and continue to execute. TERMINATED: The thread is in a state of death. For example, after executing the "run()" method, it ends normally. Forced termination, calling "stop()" or "destory()", etc. Abnormal termination: An exception occurred during execution. By reading the source code, we can see that Java does not explicitly give "Running state", but merges "Running" (running) and "Ready (ready)" in the operating system into "Runable (running state)" Here is a very classic thread life cycle diagram for everyone, yours! You carefully taste! Finally, the book "The Art of Java Concurrent Programming" is highly recommended! ! !
https://algorithm.zone/blogs/describe-the-life-cycle-of-java-threads-in-detail.html
CC-MAIN-2022-21
refinedweb
722
56.89
A Heaping Helping Of Python Goodness05 Dec 2014 I really enjoy solving problems quickly and thoroughly. I especially enjoy solving annoying, repetitive problems that invite human error. The icing on the cake is when I learn some new tricks in the process. This last few days was a flurry of problem-solving and trick-learning. My favorite tricks are small bits of learning that make code easier to write, read and use. A couple of these are Windows-specific but most are general. 1. Turn a Python Program into a Windows Batch File I’m lazy and I don’t like typing more than I have to. Even better is to just double-click on a thing from the Windows Explorer. I’ve tried doing this with Windows .BAT files in the past but it was ugly and I always had to fiddle around with it, so when I found a universal solution on the Internet I was quite pleased. So far, putting the first line at the top of a batch file has worked everywhere I’ve tried it: @setlocal enabledelayedexpansion && python -x "%~f0" %* & exit /b !ERRORLEVEL! #start python code here (tested on Python 2.7.4) It has the additional benefit that generic scripts are easily adaptable for non-Windows machines — just take out the first line. 2. with and the Context Manager The with keyword has been in Python for awhile now; the simplest way of thinking about it is that it sets up a try-finally block for you. One of the things I love most about Python is that the language designers pay attention to the little things that people do over and over and think “hey, maybe we can make this better!” So sure, you can write your own try-finally blocks to do this, but if it gets too messy you won’t. Here, I just wanted to visit a directory, do something, and come back. I found myself doing this in numerous places in the program, so to prevent errors and make the code clearer, I decided to try to do the repetitive activity in one place. You can write your own context manager class, but in many cases you can do it much more simply using the contextmanager decorator on a function. The yield in the middle of the function is where you’d normally have all your actions in the try block, and the code after the yield is what would happen in the finally block: import os from glob import glob from contextlib import contextmanager @contextmanager def visitDir(d): old = os.getcwd() os.chdir(d) yield d os.chdir(old) paths = [os.path.join('.', p[0:-1]) for p in glob('*/')] for p in paths: with visitDir(p): print p + ": " for f in glob('*'): print " ", f The program just visits each directory one level down from the current one and prints the contents. Notice that old is held through the yield and used to restore the old directory. Because of the simplicity of the contextmanager decorator, I’m going to be using with statements a lot more now. 3. Command-line Arguments with argparse In the past, I’ve tried to use optparse but it always ended up feeling too messy and complicated, so I’d just punt and pick the arguments off the command line myself. Apparently powers that be observed this happening enough that someone decided to create a simpler, better command-line parsing module. I finally reached for argparse this week, and I’m now a convert — argument parsing has become easy, and I won’t hesitate to put it into future programs. Here’s a simple example that only uses optional flags, which can come in a single-hyphen short form or a double-hyphen long form: import argparse parser = argparse.ArgumentParser() parser.add_argument("-r", "--run", action='store_true', help="Run all the scala scripts and capture any errors") parser.add_argument("-s", "--simplify", action='store_true', help="Remove unimportant trace files & show non-empty error files") parser.add_argument("-c", "--clean", action='store_true', help="Remove all 'run' artifacts") parser.add_argument("-p", "--prerequisites", action='store_true', help="Compile prerequisites") parser.add_argument("-u", "--unusedfiles", action='store_true', help="Display non 'Solution-' and non 'Starter-' scala files") parser.add_argument("-t", "--test", action='store_true', help="Test") args = parser.parse_args() if not any(vars(args).values()): parser.print_help() if args.test: print "test" if args.clean: print "clean" if args.prerequisites: print "prerequisites" if args.run: print "run" if args.simplify: print "simplify" if args.unusedfiles: print "unusedfiles" If you have a dash or double-dash in front of an argument, that argument is automatically optional. The default is that arguments can be in any order. It’s also possible to have argument parameters and pretty much any other configuration you need; see the above link for details. You create an ArgumentParser, then add arguments. Note that I use both the short and long form of arguments, followed by action=’store_true’ which puts a Boolean in that argument’s location when it finds the flag. Then you can just perform tests such as if args.clean in your code. The help text is displayed when you ask for help with the flag -h or –help, and I’ve also explicitly called parser.print_help() if no flags at all are present; that way if you just run the command you get the help text: c:\tmp>python argparse-example.py -t -u -r test run unusedfiles c:\tmp>python argparse-example.py usage: argparse-example.py [-h] [-r] [-s] [-c] [-p] [-u] [-t] optional arguments: -h, --help show this help message and exit -r, --run Run all the scala scripts and capture any errors -s, --simplify Remove unimportant trace files & show non-empty error files -c, --clean Remove all 'run' artifacts -p, --prerequisites Compile prerequisites -u, --unusedfiles Display non 'Solution-' and non 'Starter-' scala files -t, --test Test You’ll notice that the arguments make this look like a kind of build program, which it is. I seem to reinvent make-like build tools on a regular basis, but argparse has me thinking that, with enough built-in tools like Python provides, I might not need a build framework — perhaps all I need to do is use the tools to create a custom builder for each need. 4. Creating Standalone Executables with PyInstaller Here’s another problem that I’ve poked at for years — well, looked at and decided it was too much trouble. And this week, discovered that someone has made it easy with PyInstaller. My friend James Ward asked me to help him create a tool installer. He had written the Mac/Linux version as a bash script, and he needed the Windows version of the installer to be dead simple — the tool is currently messy and fiddly to install, and he wants to use it in a classroom situation and elsewhere, so the out-of-the-box install process must be non-obtrusive, otherwise people won’t want to bother with it. I like making things simple and I’m annoyed when they are stupidly difficult, so I decided to step up and help. His initial thought was to make a Windows BAT file, but when I saw what he was trying to do I suggested that might produce too much hair pulling (James did write a support script with BAT and getting that to work wasted a lot of time). We briefly considered Visual Basic, which I’ve had some experience with, but then I wondered if the tools for creating Windows .EXE files from Python might have progressed to the point of ease. And indeed they have — PyInstaller worked the first time and every time. It has a –onefile flag to produce a single standalone executable with no support files. James was able to create a continuous-integration build for our project (on Appveyor), which fires every time we do a checkin and starts from a blank virtual machine on the cloud, loads all the necessary tools and builds everything, then runs tests. One caveat: we were able to do everything using Python’s “batteries included” libraries, so I haven’t experimented with bringing in external libraries, but I don’t expect problems there. They have a list of packages that they explicitly support, and I suspect that ordinary Python libraries will work just fine. But wait, there’s more! PyInstaller does this magic for different operating systems! Not just Windows, but Linux and Mac OSX. You can read more here. 5. Simplifying Configuration, and format() Because James started with a bash script and he hasn’t done a lot of bash programming, he followed the bad bash practice of using global variables all over the place, which you see as all uppercase identifiers. I took his script and translated it to Python which we then built into a Windows .EXE file. I’m reading Writing Idiomatic Python which suggests using the format() function for string interpolation, so I gave it a try and quickly grew to like it. One thing that started to annoy me was passing it arguments; to use James’ global variables and keep it all consistent, I was writing things like this: "My string with {THING_TO_SUBSTITUTE} in it".format(THING_TO_SUBSTITUTE=THING_TO_SUBSTITUTE) Then I discovered that format() could take a dictionary and parse through it to find your particular arguments, and substitute those. So, if I put everything in a single dictionary, I could just pass that in to any format() like this: "My string with {THING_TO_SUBSTITUTE} in it".format(**cf) Where cf is the configuration dictionary. (It turns out that I could also have left everything as global variables and passed **globals()). This worked OK for awhile, but then I started to get bothered by having to write: cf["THING_TO_SUBSTITUTE"] Everywhere. Too many characters, too noisy to write and read. What I wanted was the much simpler: cf.THING_TO_SUBSTITUTE I discovered that by writing a quick subclass of dict I could accomplish this, and the results are: import os, platform, pprint class Configuration(dict): def __getattr__(self, attr): return self[attr] def __setattr__(self, attr, val): self[attr] = val cf = Configuration( CLEANUP = os.getenv("LAUNCHER_CLEANUP"), TRACE = os.getenv("LAUNCHER_TRACE"), VERSION = None, RAW_VERSION = None, DOWNLOAD_PATH = None, HOST="something.org", DEFAULT_VERSION = "0.10.33", OS = platform.system(), ARCHITECTURE = platform.architecture()[0], # 64bit or 32bit BASE_LOCAL_DIR = "{APPDATA}\\launcher".format(APPDATA=os.getenv("APPDATA")), BIN = os.path.normpath("modules/bin/mod"), ) cf.RAW_VERSION = "11.0.1" if cf.TRACE: pprint.pprint(cf) print("host: {HOST}, OS: {OS}, arch: {ARCHITECTURE}".format(**cf)) Note that reading and writing the configuration variables is much nicer, and using them in a format() statement is quite elegant. Here’s the output on one machine: c:\tmp>set LAUNCHER_TRACE=1 c:\tmp>python config.py {'ARCHITECTURE': '32bit', 'BASE_LOCAL_DIR': 'C:\\Users\\Bruce Eckel\\AppData\\Roaming\\launcher', 'BIN': 'modules\\bin\\mod', 'CLEANUP': None, 'DEFAULT_VERSION': '0.10.33', 'DOWNLOAD_PATH': None, 'HOST': 'something.org', 'OS': 'Windows', 'RAW_VERSION': '11.0.1', 'TRACE': '1 ', 'VERSION': None} host: something.org, OS: Windows, arch: 32bit It might seem like a small thing, but I find that easier writing and reading of the code makes it worth it. 6. pathlib is Excellent I’ve made plenty of use of the os.path library. It’s very helpful, but it isn’t intuitive. The new pathlib which was added in Python 3.4 abstracts paths to the point where they are intuitive, which was no simple design feat. Here’s a simple example which renames a group of files scattered throughout subdirectories: #! py -3 from pathlib import Path for old in Path('.').glob("**/oldname.foo"): old.rename(old.parent / "newname.bar") I’m pretty sure that’s the simplest solution to that problem that I’ve seen. Note the use of ’/’ to combine parts of paths. I’ve always wanted to do it that way.
http://bruceeckel.github.io/2014/12/05/a-heaping-helping-of-python-goodness/
CC-MAIN-2017-09
refinedweb
1,978
63.29
. equals and of Integer== This is a basic question. People who master some java may be able to say why. But a closer look will reveal why An ordinary code public class Demo { public static void main(String[] args) { Integer i1 = 100; Integer i2 = 100; System.out.println(i1 == i2); //Output true i1 = 1000; i2 = 1000; System.out.println(i1 == i2); //Output false } } After JDK1.5, the compiler provides the function of automatic unpacking and boxing. Integer i = 100 is the compiler that helps us automatically boxing. Look at the decompiled code public class Demo { public static void main (String [] args){ Integer i1 = Integer.valueOf(100); Integer i2 = Integer.valueOf(100); System.out.println(i1==i2); i1 = Integer.valueOf(1000); i2 = Integer.valueOf(1000); System.out.println(i1==i2); } } This is the process of automatic packing. Next, let's look at what happened in valueOf and paste the source code directly /** *); } Let's look at the logic in the method. This method will judge whether the incoming int value is between the highest value and the lowest value of IntegerCache. If it is between, the Integer object will be obtained from the cache of IntegerCache, otherwise a new Integer object will be created. Before that, there is a static code block; } It can be seen from here that Integer makes an array between - 128 and 127 The elements in IntegerCache.cache are arranged from - 128 to 127 in the order of 0-256 of the array. Assuming that the I passed in is = - 128, the subscript is calculated through the addressing formula i + (-IntegerCache.low) = -128 + (128) = 0 Think about the initial question At this step, we can understand why when the value of Integer is [- 128, 127], using = = to judge two integers with equal values can get true, but beyond this range, using = = to judge two integers with equal values can get false? Because when the value is between [- 128, 127], the valueOf method will not create a new Integer object, but get it from the cache. In this way, the calculated subscripts of int with the same value are the same, and naturally the same Integer object will be obtained. If it exceeds this range, it will not be obtained from the cache. Each time, a new Integer object is new. The memory addresses of two different objects are different. If = = is used, it will naturally get false. Static inner class IntegerCache After understanding the principle of boxing, we might as well take a look at how the cache of IntegerCache is initialized. The process of initializing the cache is as follows Get the parameter java.lang.Integer.IntegerCache.high in the JVM and convert it to type int as the value of high Calculate the difference between high and low and + 1 as the length of the cache array Incrementing continuously from - 128, set an Integer object at each position of the cache array.
https://programmer.help/blogs/what-happens-to-integer-between-128-127.html
CC-MAIN-2021-49
refinedweb
487
55.03
The first parameter, linearRatio, determines the proportion of the ease during which the rate of change will be linear (steady pace). This should be a number between 0 and 1. For example, 0.5 would be half, so the first 25% of the ease would be easing out (decelerating), then 50% would be linear, then the final 25% would be easing in (accelerating). If you choose 0.8, that would mean 80% of the ease would be linear, leaving 10% on each end to ease.. public static var ease:SlowMo The default ease instance which can be reused many times in various tweens in order to conserve memory and improve performance slightly compared to creating a new instance each time. public function SlowMo(linearRatio:Number = 0.7, power:Number = 0.7, yoyoMode:Boolean = false) ConstructorParameters public function config(linearRatio:Number = 0.7, power:Number = 0.7, yoyoMode:Boolean = false):SlowMo Permits customization of the ease with various parameters. ParametersReturns override public function getRatio(p:Number):Number Translates the tween's progress ratio into the corresponding ease ratio. This is the heart of the Ease, where it does all its work. ParametersReturns import com.greensock.*; import com.greensock.easing.*; / alpha tween that syncs with the above positional tween, fading it in at the beginning and out at the end myText.alpha = 0; TweenLite.to(myText, 5, {alpha:1, ease:SlowMo.ease.config(0.5, 0.8, true)}); Copyright 2013, GreenSock. All rights reserved. This work is subject to the terms in or for Club GreenSock members, the software agreement that was issued with the membership.
http://www.greensock.com/asdocs/com/greensock/easing/SlowMo.html
CC-MAIN-2022-40
refinedweb
264
51.85
Python Imaging Library/Getting PIL Python is obviously a prerequisite for using PIL. The current version of PIL is 1.1.7, and this supports Python up to v.2.6. PIL is available from PythonWare at this page. Source code can be built for any platform, and Windows binaries are available. Installing PIL[edit] Windows[edit] To install on Windows machines, go to the page given above and download the appropriate binary executable for the version of Python that you have. Run this executable and follow the instructions given by the installer. Linux[edit] On Linux, you can either compile the source yourself, or install using your package manager. For Debian-based systems, apt-get can be used: sudo apt-get install python-imaging In Gentoo: emerge imaging Mac OS X[edit] To install on a Mac OS X system , visit and download the relevant .dmg file and install as any other application. Using PIL[edit] Once install you need to import the PIL modules you want to use. Basic functions are found in the Image module, so the following will work: import Image You can then access functions as usual, e.g. Image.load(filename). Uses of the other modules available are given in the overview section, and these are imported in exactly the same way.
https://en.wikibooks.org/wiki/Python_Imaging_Library/Getting_PIL
CC-MAIN-2017-43
refinedweb
218
64.2
The standard Python traceback module provides very useful functions to produce useful information about where and why an error occurred. Traceback objects actually contain a great deal more information than the traceback module displays, however. That information can greatly assist in detecting the cause of your error. Here's an example of an extended traceback printer you might use, followed by a usage example. Discussion I use a technique very similar to this in the application I develop. Unexpected errors get logged in a format like this, which makes it a lot easier to figure out what's gone wrong. This can be used to create a generic debug script. I wrote something similar to this, but this is nicer. I wrapped it in a separate script called debug.py. debug.py takes a parameter which is the script you wish to debug: import sys, traceback def print_exc_plus(): try: ....execfile(sys.argv[1]) except: ....print_exc_plus() Then you can run: debug.py myscript.py This is useful for extending editors. e.g. I assign this to F7 in SciTE. Code to get frames is poor. The code that gets the frames is poor and will fail for certain types of frames. It is also way too complicated and long. Start the function like this and it will on all versions of Python with all types of frame.
http://code.activestate.com/recipes/52215/
crawl-002
refinedweb
226
67.04
Hi, im a beginner on MIPS field, I need to write two similar MIPS programs. First one is get 5 signed integers from user inputs, store them in an array, then print them out in a line with comma and space. Next count the number of odd integers in a loop(zero to be considered as even), stop and exit the loop if the number is XXXXX (eg. input: 3 2 0 -5 2, the count is 3), then print the number of odd integers. Next one, is reversing an array. First i need to get 6 positive integers from the user, stop taking the input if it is negative, then i need to put the inputs into an array, print them out on a line, each number separated by comma and a space, reverse the array and print them out with commas and spaces too. if you could provide some comments to explain the code, i'll be very appreaciated, thanks alot yes sir yes, please sir, I really need your helps with this assignment asap yas, it's due in two hours, sorri, it's so embarrased, i just recognized this extra credit project last night, but i would pay bonus for the rush, i was almost desperated, please sir In the first part, are the integers just single digits? Do you have any external libraries or other code that your are using / allowed to use? yes should I sent you what i have so far? like pseudo codes.. #include<iostream> using namespace std; int main(){ int num[8]; cout<<"Enter integers: "; //get user inputs for(int c=0; c<8; c++){ cin>>num[c]; } //print out in single line for(int i=0; i<8; i++){ cout<<num<<", "; } cout<<endl; //count odd elements int count=0; while(count<=5){ if (num [count] <0) break; //exit the loop if it is neagative number. else if( (num[count] != 0) || (num[count]%2 != 2) ) count ++; } cout<<"the count is: "<<count<<endl; cout<<num; return 0; } im sorri, there's no much limits or external libraries are required to use, it's just simple beginners' course if you have 1 or 2mins, you could take look here, similarily everything here we've already learned. is two separeted programs, basically they are the same thing, just the second one need to have an extra reverse procedure ohh, for the second part, once a negative number is XXXXX stop receive it and exit so as you saying, the 1 and 2 should be one program, and the third one should be an individual one cool, one more thing, sir, is there comments next to each line? thanks a lot for the quick response, but can i please just have plain text(like note pad or .s file) instead, im so confused with the zip can i please just have plain text, cuz for some reason i cannt open it thanks a lot, sorry for that much bothering i got it sir, thanks a lot for the helps, thank uuuuuuuuu!!!!!
http://www.justanswer.com/homework/7rcoq-mips-beginner-needs-help-code.html
CC-MAIN-2016-40
refinedweb
506
66.61
09 June 2010 17:49 [Source: ICIS news] LONDON (ICIS news)--Polyethylene terephthalate (PET) customers are targeting June decreases in line with lower production costs and lagging demand, they said on Wednesday. “A rollover is not our expectation even if it’s producers’ target. Customers want at least the raw material decrease of around €30/tonne ($36/tonne),” a buyer said. A producer acknowledged that customers were putting pressure on them to lower their prices in June, but said it was refusing to budge from May’s levels of around €1,200/tonne FD (free delivered) Europe. The price of PET raw materials fell in June. Paraxylene (PX) which feeds into PET’s main feedstock purified terephthalic acid (PTA), fell by €35/tonne in June to €835/tonne. This was putting pressure on PTA to drop by €24/tonne in the same month. Mono ethylene glycol (MEG), PET’s other feedstock, saw its June contract price drop by €36/tonne to €820/tonne FD NWE (northwest ?xml:namespace> Poor weather conditions during the summer peak season for PET bottlers resulted in lacklustre demand so far in June, buyers and sellers said. This situation was compounded by earlier bouts of pre-buying by customers in preparation for what is usually a period of peak demand, they added. Demand has fallen because of bad weather and customers’ inventories are high, according to a reseller. Film-grade packaging customers said their sector was relatively healthy, but still they were being offered reductions in the price of PET for June deliveries. One buyer reported receiving offers for spot material in June at decreases of €60-70/tonne compared with May. Suppliers recently said shutdowns planned later in June were also supporting a firm market. “Producers are all sold out,” one of them commented. Earlier pre-buying may be causing some customers to hold back at the moment, but by the end of June or July the market could become tight as a result of there being less imports, a buyer speculated. In May, PET contract values rose by around €60/tonne to €1,160-1,200/tonne FD (free delivered) NWE (northwest ($1 = €0.84) ICIS will be developing a prototype report for
http://www.icis.com/Articles/2010/06/09/9366532/europe-pet-customers-target-june-decreases.html
CC-MAIN-2014-42
refinedweb
369
60.65
In my last blog post, you got Up and Running with Cocos2d-x library on BlackBerry 10. Assuming you are well ahead in porting your game, may I suggest a wonderful idea to make your game go viral? Make use of Scoreloop and the power of our Social Gaming hub. BlackBerry 10 comes with a lot of surprises for gamers including the introduction of the BlackBerry Social Gaming Hub via the Games app. Gamers can now meet new friends, discover popular games and share their experience through the Games app. It doesn’t stop there. The Games app enables Scoreloop-integrated games to automatically post activities such as achievements, challenge results and high scores to a gamer’s timeline. With over 250 Million users already playing Scoreloop-powered games across platforms, this gives any game a great chance make a viral impact. There are also some good featuring opportunities in the “Games” section of the games app. In this post, I will walk you through the process of integrating Scoreloop into the cocos2d-x BBTemplate project we setup in my last blog post. The instructions are generic enough and can applied to any other open library with minor modifications. Set up Scoreloop account First, create a Scoreloop developer account at. If you already are a BlackBerry user and have a BlackBerry ID, there is no need to create a new account. Simply choose “Login with BlackBerry ID” and enter your BlackBerry ID credentials. After logging in, you will see three options. You can skip the first option as the BlackBerry 10 Native SDK comes preloaded with the Scoreloop SDK and there is no need to download any external libraries. Now create a game title and choose “BlackBerry” option. If the game’s name is not already taken, a profile will be created. Once the game account is created, you will see various options to customize your game’s social gaming options. If you already have an account and have already published titles using Scoreloop, you can choose the newly created game from the drop down menu. Getting started Follow the instructions below to include the Scoreloop library to the BBTemplateProject. - Right click the game project settings - Select C/C++ Build -> Settings -> Tool Settings -> QCC Linker - Add “scoreloopcore” library to the list of Libraries To be able to make use of Scoreloop’s features, the infrastructure need to be setup and it all starts with creating an instance of Scoreloop client. Add Scoreloop’s main header file in HelloWorldScene.h and add the below mentioned Scoreloop variables in the “HelloWorld” class. #include <Scoreloop/Scoreloopcore.h> In your Scoreloop account, fetch the application data corresponding to your game title from the “Game Overview” section. It should look something like this: Platforms BlackBerry Game ID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx Game Secret Xxxxxxxxxx Game Currency Code IBI Add this information to your code along with the game version and language: static const char SCORELOOP_GAME_ID[] = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"; static const char SCORELOOP_GAME_SECRET[] = "xxxxxxxxxx"; static const char SCORELOOP_GAME_VERSION[] = "1.0"; static const char SCORELOOP_GAME_CURRENCY[] = "IBI"; static const char SCORELOOP_GAME_LANGUAGE[] = "en"; In the init() method, initialize the Scoreloop client and its data components as shown below: SC_InitData_Init(&scData); SC_Error_t errCode = SC_Client_New(&client, &scData, SCORELOOP_GAME_ID, SCORELOOP_GAME_SECRET, SCORELOOP_GAME_VERSION, SCORELOOP_GAME_CURRENCY, SCORELOOP_GAME_LANGUAGE); Note: If the user had not already setup his Social Gaming Hub account, he may be prompted to sign in with his BlackBerry ID and credentials. Fetching information from the Scoreloop server To fetch user information from the Scoreloop server, a Scoreloop user controller needs to be created. Queries receive asynchronous response and needs a callback function to be registered. In the snippet below, fetchUserComplete will be called upon a response from the server. SC_Client_CreateUserController(client, &userController, fetchUserComplete, this); For example: if the game needs to query the current session user’s information call SC_UserController_LoadUser(userController); Once a user controller is successfully created, it can be used to query information. There are two approaches to get the desired information from the server. - Via BlackBerry Platform Services (BPS) events. - Using a custom events Approach 1: Via BlackBerry Platform Services (BPS) events Cocos2d-x absorbs all the BPS events in its implementation and you can modify the cocos2d-x source code directly and have it call back whenever a BPS event arrives. CCEGLView::handleEvents() is where the event handling takes place and it calls the callback function that was registered through the “setEventCallback()”. The advantage of this approach is that it does not have event-polling overhead. The code snippets below show the changes made to the cocos2d-x library and the BBTemplate project to accomplish Scoreloop event handling via BPS events. CCEGLView.h void * m_eventCookie; CEGLViewEventCallback m_eventCallback; void setEventCallback(CCEGLViewEventCallback callback, void *cookie); CCEGLView.cpp void CCEGLView::setEventCallback(CCEGLViewEventCallback callback, void *cookie) { m_eventCookie = cookie; m_eventCallback = callback; } bool CCEGLView::handleEvents() { ... for (;;) { if (m_eventCallback != NULL) m_eventCallback(m_eventCookie, event); ... } } HelloWorldScene.h void fetchUserComplete(SC_Error_t result); static void fetchUserComplete(void* userData, SC_Error_t result) { (reinterpret_cast(userData))->fetchUserComplete(result); } HelloWorldScene.cpp bool HelloWorld::init() { ... scData.runLoopType = SC_RUN_LOOP_TYPE_BPS; SC_Error_t errCode = SC_Client_New(&client, &scData, SCORELOOP_GAME_ID, SCORELOOP_GAME_SECRET, SCORELOOP_GAME_VERSION, SCORELOOP_GAME_CURRENCY, SCORELOOP_GAME_LANGUAGE); CCEGLView::sharedOpenGLView()->setEventCallback(StaticOnScoreloopUIEvent, (&_scInitData); ... } static void OnScoreloopUIEvent(void *cookie, void *event) { SC_HandleBPSEvent((SC_InitData_t *)cookie, (bps_event_t *)event); } void HelloWorld::fetchUserComplete(SC_Error_t result) { ... } Approach 2: Via custom events An alternative approach to handle Scoreloop events is to use the Scoreloop custom event queue. The difference here is that incoming Scoreloop responses are processed by periodic polling. The good news is that cocos2d-x already has an infrastructure to support scheduling and the disadvantage of this approach is that polling takes place periodically and sometimes even when no Scoreloop event is expected. The custom event handler can be called in game’s update() function or scheduled via cocos2d-x schedule() function. HelloWorldScene.h void OnScoreloopCustomEvent(float o); void fetchUserComplete(SC_Error_t result); static void fetchUserComplete(void* userData, SC_Error_t result) { (reinterpret_cast(userData))->fetchUserComplete(result); } HelloWorldScene.cpp bool HelloWorld::init() { ... scData.runLoopType = SC_RUN_LOOP_TYPE_CUSTOM; SC_Error_t errCode = SC_Client_New(&client, &scData, SCORELOOP_GAME_ID, SCORELOOP_GAME_SECRET, SCORELOOP_GAME_VERSION, SCORELOOP_GAME_CURRENCY, SCORELOOP_GAME_LANGUAGE); // schedule to call this method with every frame this->schedule(schedule_selector(HelloWorld::OnScoreloopCustomEvent)); ... } void HelloWorld::OnScoreloopCustomEvent(float o) { SC_HandleCustomEvent(&scData, SC_FALSE); } void HelloWorld::fetchUserComplete(SC_Error_t result) { ... } Well, there you have it – you are up and running with cocos2d-x and Scoreloop. Here are some great resources available to you to get more out of Scoreloop SDK: Scoreloop Integration Sample -101 Scoreloop Integration sample Belligerent Blocks – A Sample 1 level game that showcases great use cases of Scoreloop
http://devblog.blackberry.com/2013/02/scoreloop/?relatedposts_to=16163&relatedposts_order=4
CC-MAIN-2017-39
refinedweb
1,069
53.21
Object Integrity & Security: Duplicating Objects The second issue deals with casting. When the clone() method returns the object, the object in this case is of type Dog. Thus, you need to cast the return to the type Dog. This is illustrated in Listing 05. // Class Duplicate public class Duplicate { public static void main(String[] args) { Dog fido = new Dog("fido", "retriever"); Dog spot; spot = (Dog) fido.clone(); System.out.println("name = " + fido.getName()); System.out.println("name = " + spot.getName()); } } Listing 05: The Duplicate Class with the clone() method With both of the changes in place, you can run the code that produces the output in Figure 4. Figure 1.Cloning an Object Now, this output looks identical to the output in Figure 02, without cloning the object—and this is in fact the case. The primary difference, and the important difference, is that there are now two totally separate objects. As you did with the previous two examples, you can represent this graphically to illustrate how the objects are represented in memory. Diagram 03 shows that in this case you not only have two references, you also have two objects. Diagram 03: An Object with Two References to Two Objects. When you look at Diagram 03, you may notice one inconsistency; the spot object still has the name defined as fido. In fact, the output in Figure 04 confirms this. This makes sense because you did clone the object, and this implies that all of the information is duplicated. Given this fact, how do you prove that the objects are indeed separate? Change the value of one, but not the other, as seen in Listing 06. Logically, you need to change the name of the spot object to "spot". You use the setName() method that you added earlier. // Class Duplicate public class Duplicate { public static void main(String[] args) { Dog fido = new Dog("fido", "retriever"); Dog spot; spot = (Dog) fido.clone(); spot.setName("Spot"); System.out.println("name = " + fido.getName()); System.out.println("name = " + spot.getName()); } } Listing 06: Change the name for the spot reference Figure 05: Cloning an Object At this point, the graphical representation is in a logical state, the fido references and object with the name of fido, and the spot references and object with the name of spot, as seen in Diagram 04. Diagram 04: An Object with Two References to Two Objects. Conclusion At this point, you have covered some of the basic mechanics of duplicating objects. As you can see, duplicating objects is not really as simple as it might seem. Copying primitives, such as a double, can be accomplished by a simple variable declaration and an assignment as seen below. // Code to create a duplicate double data type double a = 5; double b = 0; b = a; This works because the double represents only a single address location (however many bytes that a double consumes). Because objects potentially contain other primitives and other objects, the number of address references is not limited to a single address reference. With objects, duplication is much more involved. You will explore more of these details in future articles. This is where the concept of shallow and deep copies comes into play. You will take a look at these concepts as you continue to explore the cloning process. 4 of 4
http://www.developer.com/design/article.php/10925_3675326_4/Object-Integrity-amp-Security-Duplicating-Objectsop.htm
CC-MAIN-2015-22
refinedweb
552
54.22
What is constructor?A special method of the class that will be automatically invoked when an instance of the class is created is called a constructor. The main use of constructors is to initialize private fields of the class while creating an instance for the class. When you have not created a constructor in the class, the compiler will automatically create a default constructor in the class. The default constructor initializes all numeric fields in the class to zero and all string and object fields to null. Some of the key points regarding the Constructor are: Constructors can be divided into 5 types: Default ConstructorA: Example Now run the application, the output will be as in the following: Now run the application, the output will be as in the following: Copy ConstructorThe constructor which creates an object by copying variables from another object is called a copy constructor. The purpose of a copy constructor is to initialize a new instance to the values of an existing instance.Syntaxpublic employee(employee emp){name=emp.name;age=emp.age;}The copy constructor is invoked by instantiating an object of type employee and ing it the object to be copied.Example Now, emp1 is a copy of emp2. So let us see its practical implementation. Now run the program, the output will be as follows: Static ConstructorWhen a constructor is created as static, it will be invoked only once for all of instances of the class and it is invoked during the creation of the first instance of the class or the first reference to a static member in the class. A static constructor is used to initialize static fields of the class and to write the code that needs to be executed only once.Some key points of a static constructor is: Syntaxclass employee {// Static constructor static employee(){} } Now let us see it with practically using System; namespace staticConstractor { public class employee static employee() // Static constructor declaration{Console.WriteLine("The static constructor "); } public static void Salary() { Console.WriteLine(); Console.WriteLine("The Salary method"); } class details static void Main() { Console.WriteLine("----------Static constrctor example by vithal wadje------------------"); Console.WriteLine(); employee.Salary(); Console.ReadLine(); } } Now run the program the output will be look as in the following: Private Constructor When a constructor is created with a private specifier, it is not possible for other classes to derive from this class, neither is it possible to create an instance of this class. They are usually used in classes that contain static members only. Some key points of a private constructor are: Now let us see it practically. Now run the application; the output is: If you uncomment the preceding statement that is commented in the above program then it will generate an error because the constructor is inaccessible (private). Summary I hope this article is useful for all readers, if you have any suggestion then please contact me. ©2016 C# Corner. All contents are copyright of their authors.
http://www.c-sharpcorner.com/UploadFile/0c1bb2/constructors-and-its-types-in-C-Sharp/
CC-MAIN-2016-18
refinedweb
493
51.58
Introduction to Pandas Interview Questions Pandas Interview Questions and Answers helps many to crack interviews during their selection and helps you to learn the basic and advanced concepts of this Python programming language. It makes your calculations easier and robust. Do go through the questions and answers and become a professional. Part 1 – Pandas Interview Questions (Basics) This first part covers the basic Interview Questions 1. Explain and Define Python Pandas? Answer: Pandas is characterized as an open-source library that gives superior information control in Python. The name of Pandas is gotten from the word Panel Data, which implies an Econometrics from Multidimensional information. It tends to be utilized for information investigation in Python and created by Wes McKinney in 2008. It can perform five huge advances that are required for handling and examination of information independent of the cause of the information, i.e., load, control, plan, model, and dissect. 2. Characterize DataFrame in Pandas? Answer: A DataFrame is a generally utilized information structure of pandas and works with a two-dimensional exhibit with marked tomahawks (rows and columns). DataFrame is characterized as a standard method to store information and has two distinctive indices, i.e., row index and column index. It comprises of the accompanying properties: The columns are heterogenous like int and bool. It tends to be viewed as a word reference of Series structure where both the rows and columns are recorded. It is signified as “columns” on account of columns and “index” if there should arise an occurrence of lines. The syntax is: import pandas as pd df=pd.Dataframe() 3. Clarify Series In pandas. How to Create Copy of Series In pandas? Answer: Series is a one-dimensional array fit for holding any information type such as integers, floating-point numbers, strings, Python objects, and many more. The pivot names are on the whole alluded to as the index. By utilizing the ‘series’ strategy, we can without much of a stretch proselyte the list, tuple, and dictionary into series. A Series cannot contain numerous columns. The syntax is: s = pd.Series(data, index=index) where, data = integer, floating point number, string, or dictionary. 4. Characterize Categorical Data in Pandas? Answer: Categoricals are a pandas information type comparing to all-out factors in measurements. A clear cut variable takes on a constrained and generally fixed, number of potential qualities (classes; levels in R). Models are sexual orientation, social class, blood classification, nation connection, perception time, or rating through Likert scales. All estimations of straight out information are either in classes or np.nan. The straight-out information type is helpful in the accompanying cases: A string variable comprising of just a couple of various qualities. Changing over such a string variable to a straight out factor will spare some memory, see here. The lexical request of a variable is not equivalent to the consistent request (“one”, “two”, “three”). By changing over to a straight out and determining a request on the classes, arranging and min/max will utilize the sensible request rather than the lexical request, see here. As a sign to other Python libraries that this segment ought to be treated as a downright factor (for example to utilize reasonable measurable strategies or plot types). 5. Explain the different procedures where Dataframe can be created in Pandas? Answer: Dataframe can be created in 3 different methods: - By making use of Lists: d = [[‘a’, 5], [‘b’, 6], [‘c’, 7]] - Creating Pandas Dataframe: df = pd.DataFrame(d, columns = [‘Strings’, ‘Integer’]) print(df) - By making use of a dictionary of lists: To make DataFrame from the dictionary of a list, all the array must be of the same length. In the case, the list is passed, by then the length list should be proportionate to the length of shows. If no document is passed, by then as per normal procedure, the record will be a range(n) where n is the display length. - By using arrays: import pandas as pd d = {‘Name’:[‘Span’, ‘Vet’, ‘Such’, ‘Sri’], ‘marks’:[85, 80, 75, 70]} df = pd.DataFrame(d, index =[‘first’, ‘second’, ‘third’, ‘fourth’]) print(df) 6. Explain the Pandas Time Series? Answer: A Time Series is an arranged grouping of information that essentially speaks to how some amount changes after some time. Pandas contain broad capacities and highlights for working with time arrangement information for all areas. Pandas support the following properties, - Parsing time-series data from different sources and organizations. - Create successions of fixed-recurrence dates and time ranges. - Controlling and changing over date time with time zone data. - Resampling or changing over a period arrangement to a specific recurrence. - Performing date and time math with outright or relative time increases. Part 2 – Pandas Interview Questions (Advanced) Let us now have a look at the advanced Interview Questions 7. How to create a series copy in Pandas? Answer: The standard syntax to create copy series is, pandas.Series.copy Series.copy(deep=True) If deep is set to False, the command does not copy the data. Instead, it deletes it. 8. How to create an empty Dataframe in Pandas? Answer: import pandas as pd empty=pd.Dataframe() print(empty) 9. Explain adding rows and columns to the Dataframe in Pandas? Answer: We use .loc() and .iloc() functions to add rows and columns to the Dataframe. To display only a specific data in the particular[0] print(df.iloc[1]) To display specific data in a[:,1] print(df.iloc[:,1]) 10. What is multiple Indexing? Answer: Multiple Indexing is characterized as basic indexing since it manages information examination and control, particularly for working with higher dimensional information. It additionally empowers us to store and control information with the discretionary number of measurements in lower-dimensional information structures like Series and DataFrame. Conclusion Since we have studied all the possible important Pandas Interview Questions, it is essential to note that we should always remember these concepts to implement while coding. These concepts make us strong concerning our basics with Python and Pandas. Recommended Articles This has been a guide to the list of Pandas Interview Questions so that the candidate can crackdown these Interview Questions easily. You may also look at the following articles to learn more –
https://www.educba.com/pandas-interview-questions/?source=leftnav
CC-MAIN-2021-21
refinedweb
1,038
57.27