text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
New DataSet Features in ADO.NET 2.0 Jackie Goldstein Renaissance Computer Systems November 2004 Applies to: Microsoft ADO.NET 2.0 Visual Basic programming language Summary: Learn about the new ADO.NET 2.0 features in the DataSet .NET Framework class and the classes that are closely related to it. These changes include both functional and performance enhancements to the DataSet, DataTable, and DataView classes. (17 printed pages) Download the DataSetSamples.exe sample code associated with the article. Contents Introduction Raw Performance The DataTable – More Independent Than Before Stream to Cache, Cache to Stream Conclusion Introduction In the upcoming release of ADO.NET, ADO.NET 2.0, there are many new and improved features that affect many different .NET Framework classes and application development scenarios. This article discusses on the changes and enhancement to the core disconnected mode ADO.NET Framework classes—the DataSet and associated classes such as DataSet, DataTable, and DataView. This article is actually the first of two articles on the DataSet and associated classes in ADO.NET 2.0. Here we will focus on the classes in the .NET Framework. In the subsequent article, we will focus on developing with these and related classes from within the Visual Studio 2005 development environment. Visual Studio 2005 offers several designers and tools that offer tremendous flexibility and productivity for developing the data-centric aspects of your application. As a result, each article will have a different "feel". This article is mainly an overview of new functionality, accompanied by explanations and code samples. In the next article, the focus is more on the development process, as we see how to develop a working application. As I mentioned above, this article only covers a small slice of the new features of ADO.NET 2.0. An overview of some of the other features can be found in ADO.NET 2.0 Feature Matrix. More in depth information on some of the topics mentioned there can be found these articles: - Asynchronous Command Execution in ADO.NET 2.0 - Generic Coding with the ADO.NET 2.0 Base Classes and Factories - Schemas in ADO.NET 2.0 Unless noted otherwise, the contents of this article are based on the Beta 1 release of Visual Studio 2005. The code samples use the Northwind database that comes as a sample database with SQL Server 2000. Raw Performance Software developers are always concerned with performance. Sometimes they get over-concerned and make their code jump through hoops to just trim a little execution time, in places where it ultimately isn't significant—but that is a subject for another article. When it comes to ADO.NET 1.x DataSets, particularly those containing a large amount of data, the performance concerns expressed by developers are indeed justified. Large DataSets are slow—in two different contexts. The first time the sluggish performance is felt is when loading a DataSet (actually, a DataTable) with a large number of rows. As the number of rows in a DataTable increases, the time to load a new row increases almost proportionally to the number of rows in the DataTable. The other time the performance hit is felt is when serializing and remoting a large DataSet. A key feature of the DataSet is the fact that it automatically knows how to serialize itself, especially when we want to pass it between application tiers. However, a close look reveals that this serialization is quite verbose, consuming much memory and network bandwidth. Both of these performance bottlenecks are addressed in ADO.NET 2.0. New Indexing Engine The indexing engine for the DataTable has been completely rewritten in ADO.NET 2.0 and scales much better for large datasets. This results in faster basic inserts, updates, and deletes, and therefore faster Fill and Merge operations. While benchmarks and quantifying performance gains is always an application-specific and often risky affair, these improvements clearly provide more than an order of magnitude improvement in loading a DataTable with a million rows. But don't take my word for it, check it out yourself, with the following simple example. Add the following code as the click event handler for a button on a Windows form: Private Sub LoadButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles LoadButton.Click Dim ds As New DataSet Dim time1 As New Date Dim i As Integer Dim dr As DataRow ds.Tables.Add("BigTable") ds.Tables(0).Columns.Add("ID", Type.GetType("System.Int32")) ds.Tables(0).Columns("ID").Unique = True ds.Tables(0).Columns.Add("Value", Type.GetType("System.Int32")) ' Show status label WaitLabel.Visible = True Me.Cursor = Cursors.WaitCursor Me.Refresh() ' catch start time time1 = DateTime.Now() ' Yes, we are loading a million rows to a DataTable! ' ' If you compile/run this with ADO.NET 1.1, you have time ' to make and enjoy a fresh pot of coffee... Dim rand As New Random Dim value As Integer For i = 1 To 1000000 Try value = rand.Next dr = ds.Tables(0).NewRow() dr("ID") = value dr("Value") = value ds.Tables(0).Rows.Add(dr) Catch ex As Exception ' if there are any duplicate values, an exception ' will be thrown since the ID column was specified ' to be unique End Try Next ' reset cursor and label WaitLabel.Visible = False Me.Cursor = Me.DefaultCursor ' Show elapsed time, in seconds MessageBox.Show("Elapsed Time: " & _ DateDiff(DateInterval.Second, time1, DateTime.Now)) ' verify number of rows in the table ' This number will probably be less that the number ' of loop iterations, since if the same random number ' comes up, it will/can not be added to the table MessageBox.Show("count = " & ds.Tables(0).Rows.Count) End Sub When I ran this code in my environment with ADO.NET 1.1 and Visual Studio 2003, the execution time was about 30 minutes. With ADO.NET 2.0 and Visual Studio 2005, I had an execution time of approximately 40-50 seconds! When I lowered the number of rows to only half a million, the 1.1 version took about 45 seconds and the 2.0 version took about 20 seconds. Your numbers will vary, but I think the point is clear. In fact, this example is a very simple one, since it contains only one index, for the unique column. However, as the number of indices on the specified DataTable increases, such as by adding additional DataViews, UniqueKeys and ForeignKeys, the performance difference will be that much greater. Note The reason the ID value in the sample code is being generated by a random number generator rather than just using the loop counter as the ID, is in order to better represent the real-world scenario. In real applications, accessing the elements of a DataTable for Inserts, Updates, and Deletes is rarely done sequentially. For each operation, the row specified by the unique key must first be located. When inserting and deleting rows, the table's indices must be updated. If we were to just load a million rows with sequentially key values into an empty table, the results would be extremely fast, but misleading. Binary Serialization Option The major performance improvement in loading a DataTable with a lot of data did not require us to make any change at all to our existing ADO.NET 1.x code. In order to benefit from improved performance when serializing the DataSet, we need to work a bit harder—we need to add a single line of code to set the new RemotingFormat property. In ADO.NET 1.x, the DataSet serializes as XML, even when using the binary formatter. In ADO.NET 2.0, in addition to this behavior, we can also specify true binary serialization, by setting the RemotingFormat property to SerializationFormat.Binary rather than (the default) SerializationFormat.XML. Let us take a look at the different outputs resulting from these two different options. In order to maintain backwards compatibility (about which the ADO.NET team was always concerned), the default value of XML serialization will give us the same behavior as in ADO.NET 1.x. The results of this serialization can be seen by running this code: Private Sub XMLButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles XMLButton.Click Dim ds As New DataSet Dim da As New SqlDataAdapter("select * from [order details]", _ GetConnectionString()) da.Fill(ds) Dim bf As New BinaryFormatter Dim fs As New FileStream("..\xml.txt", FileMode.CreateNew) bf.Serialize(fs, ds) End Sub Note that this code is explicitly using the BinaryFormatter class, yet the output in file xml.txt, shown in Figure 1, is clearly XML. Also, in this case, the size of the file is 388 KB. Let us now change the serialization format to binary by adding the line and save the data to a different file by modifying the filename in the FileStream constructor so that the code now looks like this: Private Sub BinaryButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BinaryButton.Click Dim ds As New DataSet Dim da As New SqlDataAdapter("select * from [order details]", _ GetConnectionString()) da.Fill(ds) Dim bf As New BinaryFormatter Dim fs As New FileStream("..\binary.txt", FileMode.CreateNew) ds.RemotingFormat = SerializationFormat.Binary bf.Serialize(fs, ds) End Sub The output in file binary.txt is shown in Figure 2. Here we see that it is now in fact binary data, pretty unintelligible to the human reader. Moreover, the size of this file is only 59 KB—again, an order of magnitude reduction in the amount of data that needs to be transferred and the CPU, memory, and bandwidth resources required to process it. It should be pointed out that this improvement is relevant when using remoting and not when using Web Services, since Web Services by definition must be passing XML. This means that you will only be able to take advantage of this enhancement when both sides of the communication are .NET-based and not when communicating to non-.NET platforms. More in-depth details about DataSet serialization process can be found in Binary Serialization of DataSets. The DataTable – More Independent Than Before When discussing ADO.NET 1.x and its object model for disconnected data access, the central object was the DataSet. Sure it contained other objects, such the DataTable, DataRelation, DataRow, etc., but the attention generally started and revolved around the DataSet. It is true that most .NET developers were aware and leveraged the fact that the DataTable was quite useful on its own, without being encapsulated inside a DataSet. However, there were some scenarios where we couldn't do what we wanted to do with a DataTable unless we first took it and forced it into a DataSet. The most glaring and often painful example of this is to read and write (load and save) XML data in to and out of the DataTable. In ADO.NET 1.x, we must first add the DataTable to DataSet, just so we could read or write XML, since the methods to do so are only available on the DataSet! One of the objectives of ADO.NET 2.0 was to make the standalone DataTable class far more functional and useful than it is in ADO.NET 1.x. The DataTable now supports the basic methods for XML, just as the DataSet does. This includes the following methods: - ReadXML - ReadXMLSchema - WriteXML - WriteXMLSchema The DataTable is independently serializable and can be used in both web service and remoting scenarios. In addition to now supporting the Merge method, the stand-alone DataTable also supports new ADO.NET 2.0 features added to the DataSet: - RemotingFormat property (discussed previously) - Load method (discussed later in this article) - GetDataReader method (discussed later in this article) Note On the topic of XML, it is worth noting that in ADO.NET 2.0 there is much enhanced XML support—what Microsoft likes to call greater "XML Fidelity". This takes the form of support for the SQL Server 2005 XML data type, extended XSD schema support, an improved XSD schema inference engine, and the elimination of two often troublesome limitations: (i) The DataSet and DataTable classes can now handle multiple in-line schemas and (ii) The DataSet now fully supports namespaces, so that a DataSet can contain multiple DataTables with the same name, but from different namespaces, i.e., tables with the same unqualified names, but with different qualified names. Also, a child table with the same name and namespace that is included in multiple relations can be nested in multiple parent tables. Stream to Cache, Cache to Stream Another one of the main enhancements for the DataSet and DataTable classes in ADO.NET 2.0 is the availability of mechanisms to consume a DataReader (loading data into DataTables) and to expose a DataReader over the contents of DataTables. Sometimes we have/receive our data in the form of a DataReader, but really want to have it in the form of a cached DataTable. The new Load method allows us to take an existing DataReader and use it to fill a DataTable with its contents. Sometimes we have/receive our data in a cached form (DataTable) and need to access it via a DataReader type interface. The new GetTableReader method allows us to take an existing DataTable and access it with a DataReader interface and semantics. In the following sections, we'll take a look at these new methods. The Load Method – Basic Use The Load method is a new method that has been added to the DataSet and the DataTable in ADO.NET 2.0. It loads a DataTable with the contents of a DataReader object. It can actually load multiple tables at one time, if the DataReader contains multiple resultsets. The basic use of the Load method is quite straightforward: A more complete illustration of its use is shown in this sample code: Private Sub LoadButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles LoadButton.Click Try Using connection As New SqlConnection(GetConnectionString()) Using command As New SqlCommand("SELECT * from customers", connection) connection.Open() Using dr As SqlDataReader = command.ExecuteReader() 'Fill table with data from DataReader Dim dt As New DataTable dt.Load(dr, LoadOption.OverwriteRow) ' Display the data DataGridView1.DataSource = dt End Using End Using End Using Catch ex As SqlException MessageBox.Show(ex.Message) Catch ex As InvalidOperationException MessageBox.Show(ex.Message) Catch ex As Exception ' You might want to pass these errors ' back out to the caller. MessageBox.Show(ex.Message) End Try End Sub The code above initializes connection and command objects and then executes the ExecuteReader method to fetch the data from the database. The results of the query are provided as a DataReader, which is then passed to the Load method of the DataTable to fill it with the returned data. Once the DataTable is filled with the data, it can be bound and displayed in the DataGridView. The significance of the OverwriteRow load option for the (optional) LoadOption parameter will be explained in the next section. The Load Method – Why am I loading this data? If all you are doing with your DataSet/DataTable and DataAdapter is filling the DataSet with data from the data source, modifying that data, and then at some later point pushing it back into the data source, that things general move pretty smoothly. A first complication occurs if you are utilizing optimistic concurrency and a concurrency violation is detected (someone else already changed one of the rows you are trying to change). In this case what you normally need to do to resolve the conflict is to resynchronize the DataSet with the data source, so that the original values for the rows match the current database values. This can be accomplished by merging a DataTable with the new values into the original table (in ADO.NET 1.x, the merge method is only available on the DataSet): By matching rows with the same primary key, records in the new table are merged with the records in the original table. Of key significance here is the second parameter, PreserveChanges. This specifies that the merge operation should only update the original values for each row, and not affect the current values for the row. This allows the developer to subsequently execute a DataAdapter.Update that will now succeed in updating the data source with the changes (current values), since the original values now match the current data source values. If PreserveChanges is left at its default value of false, the merge would override both the original and current values of the rows in the original DataTable and all of the changes that were made would be lost. However, sometimes we want to update data in the data source, where the new values don't come from programmatically modifying the values. Perhaps we obtain updated values from another database or from an XML source. In this scenario, we want to update the current values of the rows in the DataTable, but not affect the original values for those rows. There is no easy way to do this in ADO.NET 1.x. It is for this reason that the ADO.NET 2.0 Load method accepts a parameter LoadOption that indicates how to combine the new incoming rows with the same (primary key) rows already in the DataTable. The LoadOption allows us to explicitly specify what our intention is when loading the data (synchronization or aggregation) and how we therefore want to merge the new and existing rows. Figure 3 outlines the various scenarios: Where: - Primary Data Source—DataTable/DataSet synchronizes/updates with only one Primary Data Source. It will track changes to allow for synchronization with primary data sources. - Secondary Data Source—DataTable/DataSet accepts incremental data feeds from one or more Secondary Data Sources. It is not responsible for tracking changes for the purpose of synchronization with secondary data sources. The three cases shown in Figure 3 can be summarized as follows: - Case 1—Initialize DataTable(s) from Primary Data Source. The user wants to initialize an empty DataTable (original values and current values) with values from primary data source and then later, after changes have been made to this data, propagate the changes back to the primary data source. - Case 2—Preserve Changes and Re-Sync from Primary Data Source. The user wants to take the modified DataTable and re-synchronize its contents (original values only) with the primary data source while maintaining the changes made (current values) - Case 3—Aggregate incremental data feeds from one or more Secondary Data Sources. The user wants to accept changes (current values) from one or more secondary data sources and then propagate these changes back to the primary data source. The LoadOption enumeration has three values that respectively represent these three scenarios: - OverwriteRow—Update the current and original versions of the row with the value of the incoming row. - PreserveCurrentValues (default)—Update original version of the row with the value of the incoming row. - UpdateCurrentValues—Update the current version of the row with the value of the incoming row. Note These names will probably change post-Beta 1. Table 1 below summarizes the load semantics. If the incoming row and existing row agree on primary key values, then the row is processed using its existing DataRowState, else use 'Not Present' section (the last row in the table). Table 1. Summary of Load Semantics Example In order to illustrate the behavior specified in Table 1, I offer a simple example. Assume that both the existing DataRow and incoming row have 2 columns with matching names. The first column is the primary key and the second column contains a numeric value. The tables below show the contents of the second column in the data rows. Table 2 represents the contents of a row in all 4 states before invoking Load. The incoming row's second column value is 3. Table 3 shows its contents after load. Table 2. Row State Before Load Incoming Row Table 3. Row State After Load Note You can see the beginnings of this concept already in ADO.NET 1.x. The default behavior of the DataAdapter's Fill method when loading data into a DataTable is to mark all the rows as Unchanged (This can be overridden by setting the AcceptChangesOnFill property to False). However, when using ReadXML to load data into a DataSet, the rows are marked as Added. The rationale for this (which was implemented based on customer feedback) is that this would allow loading new data from an XML source into a DataSet and then using the associated DataAdapter to update the primary data source. If the rows were marked as Unchanged when loaded from ReadXML, the DataAdapter.Update would not detect and changes and would not execute any commands against the data source. In order to provide similar functionality, the FillLoadOptions property has been added to the DataAdapter in order to offer the same semantics and behavior as the Load method described here, while still preserving the same (by default) existing behavior of the Fill method. Another feature (which doesn't exist) that developers always ask about in ADO.NET 1.x, is the ability to manually modify the state of DataRow. While the options offered by the Load method may address most scenarios, you may still want to have finer-grained control over the row state—you may have a need to modify the state of individual rows. To that end, ADO.NET 2.0 introduces two new methods on the DataRow class: SetAdded and SetModified. Before you ask about setting the state to Deleted, or Unchanged, let me remind you that with version 1.x we already have the Delete and AcceptChanges/RejectChanges methods to accomplish this. The GetTableReader Method The GetTableReader method is a new method that has been added to the DataSet and the DataTable in ADO.NET 2.0. It returns the contents of a DataTable as a DataTableReader (derived from DBDataReader) object. If it is invoked on a DataSet that contains multiple tables, the DataReader will contain multiple resultsets. The DataTableReader works pretty much like the other data readers you have worked with, such as the SqlDataReader or OleDbDataReader. The difference is, however, that rather than streaming data from a live database connection, the DataTableReader provides iteration over the rows of a disconnected DataTable. The DataTableReader provides a smart, stable iterator. The cached data may be modified while the DataTableReader is active and the reader will automatically maintain its position appropriately even if one or more rows are deleted or inserted while iterating. A DataTableReader that is created by calling GetDataReader on a DataTable contains one result set with the same data as the DataTable from which it was created. The result set contains only the current column values for each DataRow and rows that are marked for deletion are skipped. A DataTableReader that is created by calling GetDataReader on a DataSet that contains more than one table will contain multiple result sets. The result sets will be in the same sequence as the DataTable objects in the DataSet object's DataTableCollection. In addition to the features outlined above, another great use of the GetDataReader method is to quickly copy data from one DataTable to another: The DataView.ToTable Method Another new method that is somewhat related to the previous ones (in that it provides a new DataTable cache of existing data) and is worth mentioning is the ToTable method of the DataView class. As a reminder, the DataView class provides a logical view of the rows in a DataTable. This view may be filtered by row, row state, and sorted. However, in ADO.NET 1.1, there is no easy way to save or pass on the rows of the view, since the DataView does not have its own copy of the rows—it simply accesses the rows of the underlying DataTable as prescribed by the filter and sort parameters. The DataView's ToTable method returns an actual DataTable object that is populated with rows of the exposed by the current view. Overloaded versions of the ToTable method offer the option of specifying the list of columns to be included in the created table. The generated table will contain the listed columns in the specified sequence, which may differ from the original table/view. This ability to limit the number of columns in a view is a feature that is missing in ADO.NET 1.x and has frustrated many a .NET programmer. You can also specify the name of the created table and whether it should contain all or only distinct rows. Here is some sample code that shows how to use the ToTable method: Private Sub ToTableButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ToTableButton.Click ' Show only 2 columns in second grid Dim columns As String() = {"CustomerID", "ContactName"} Dim dt As DataTable = _ ds.Tables("customers").DefaultView.ToTable( _ "SmallCustomers", False, columns) DataGridView2.DataSource = dt End Sub Assuming that the contents of the "customers" table in the DataSet ds are displayed in a first grid, this routine displays the newly created DataTable that contains only those rows exposed by the DefaultView (as specified by its filter parameters). The rows in the new table contain only two of the columns of the original DataTable and DataView. An example of this can be seen in Figure 4. Conclusion The ADO.Net 2.0 version of the DataSet (and DataTable) introduces numerous new features and enhancements to existing features. The main features, discussed in the article, include significantly improved performance due to a new index engine and the binary serialization format option, extensive capabilities available to a stand-alone DataTable, and mechanisms for exposing cached data as a stream (DataReader) and loading stream data into a DataTable cache. ADO.NET 2.0 also offers greater control over the state of rows in a DataTable, in order to better address more real-world scenarios. Thanks to Kawarjit S. Bedi, Pablo Castro, Alan Griver, Steve Lasker, and Paul Yuknewicz of Microsoft for their help in preparing this article. Jackie Goldstein is the principal of Renaissance Computer Systems, specializing in consulting, training, and development with Microsoft tools and technologies. Jackie is a Microsoft Regional Director, the founder of the Israel VB User Group, and!
https://msdn.microsoft.com/en-US/library/ms971494.aspx
CC-MAIN-2015-22
refinedweb
4,395
54.32
Tuesday, Nov 10, 2009 DCLG: +1.2% MoM, -4.1%YoY Times: House prices grow for sixth month in a row House prices rose by 1.2 per cent in September from August, the sixth increase in a row, further bolstering evidence that Britain's battered property market is over the worst. The rise pushed the average cost of a UK home to £199,303 — a level not seen since November 2008, according to figures from the Department of Communities and Local Government (DCLG). The annual rate at which prices are falling also slowed to 4.1 per cent in September from 5.6 per cent in August and a peak of 13.6 per cent in March. The DCLG survey is the latest to suggest an upturn in the housing market. A recent survey from the Royal Institution of Chartered Surveyors found that house prices rose in October despite more properties coming on to the market.... Me thinks even 'estate agents' (correction, Property Consultants) are beginning to question these facts and figures. Try telling anyone who has put their house on the market March this year, that the price of their house has now increased into this winter six months in a row! 2. flashman said... The world looks very different to the one predicted by most HPC’ers a year ago. Signs of an economic recovery are everywhere if you care to look. I still think house prices will take a further hit but my long-standing (and rather unpopular) assertion that there will be no depression or economic crash is looking good. Once we’ve got 3 consecutive quarters of growth under our belts, it’ll be very hard for anyone to maintain a permanently bearish stance. Hopefully when the time comes, most people will feel able to admit their mistake and move on. Reading between the lines most people already suspect they’ve got it wrong, which is why the site has become a bit fractious and shifted its focus to social injustice 3. rumble said... Sounds as though people are suggesting something, somewhere has actually been fixed. I must have missed that. 4. hpwatcher said... Signs of an economic recovery are everywhere if you care to look I simple don't accept that. Do the letters QE not mean anything to you? 5. alan said... @ flashman Both Ericsson & EA hacked jobs today. Looking back over the past few weeks job cuts are abounding. Can't think of anyone I know outside financial services that got more than a 1.5% pay rise this year. In my local shopping malls and industrial estates (Essex, bordering M25) there are shops and buildings up for sale. I would love to believe that the country shows "signs of an economic recovery are everywhere if you care to look". I'm just not convinced. What new products are we inventing, and where are the new jobs? Maybe the site is looking more at social injustice because so many bloggers are re-training for jobs as care workers - lots of opportunities there... The UK may not be headed for a soup kitchen depression but it isn't getting better either. Stagflation is still my prognosis. 6. nickb said... Latest GDP figures, supposedly independent of government interference and spin, not exactly like the DCLG I imagine, were showing negative growth, were they not? ie contraction, slump, downturn, recession. 7. sold out said... Flashman ? 8. enuii said... Flsahman, where do you live? Up here in the Northwest it's turning into economic carnage with more boarded up shops and pubs sprouting every week! 9. hpwatcher said... ? Soon, the money that the government is prepared to spend will be exhausted - they are massively in debt. They may like to think there is a bottomless pit of money, to keep the socialist nanny state going...... but get real. Paper money is being created, not wealth. The real question is, how much more money are they going to waste before they pull the plug on RBS? The illusion will then be shattered. 10. mrflibble said... flashman would you care to pass round the blow pipe so we can all see the same recovery you do... 11. Cozza said... Flashman obviously works in the public sector which is still expanding thanks to ponzi Brown and his desire to win the General Election. 12. Puppee said... flashman tell lloyds bank, e.a sports computer games threshers and ericcson mobile phones that we are in recovery and they don't need to make people redundant anymore oh and tell gordon that the interest rates can go back to normal as well, this so called recovery is all smoke and mirrors a result of massive Q.E that will hold back the economy for the next 10 to 20 years 13. smugdog said... Flash, keep your head down for a while, the Flat Earth Society will run you off the edge if you're not careful. 14. quiet guy said... I believe that flashman's post at 4.53pm was a little mischievous. Allthough I'm sure he could make an argument if he wished, I wouldn't be surprised if he had a little wager on to see how many of us bears are goaded into response :D BTW, some personal observations about the economy where I come from include boarded up shops, redundancies and wage freezes. 15. nomad said... By Flashman's own measure, when unemployment hits 2.7m house prices take a plunge - we appear to be moving in that direction very quickly according to some of today's stories. But, of course, he was goading @ 4.53pm. 16. the number cruncher said... I agree with flashman that the signs of a unsustainable policy to re-inflated a bubble economy abound all around us. Look ma I'm on top of the world... 17. yoyo1 said... 'Signs of an economic recovery are everywhere if you care to look' Just an echo from 2007. 18. flashman said... quiet guy: my post was mischievous to the extent that I knew it would irritate a few recidivist bears. I do however see signs of recovery everywhere. I do most of my work with large international corporations. They are, almost without exception, gearing up big time and there is genuine excitement for the first time in a few years (none of my customers work in the financial industry). The activity and investment of these firms will take a while to filter through to the street. All recessions will cause permanent damage to some people but some people will engage with the new economy and progress. We operate in a global economy and the world in aggregate never stopped growing. The worldwide growth curve is set to steepen and the UK economy will benefit. Like I said earlier, some people will not be able benefit. For example, I certainly wouldn’t want to live in a town where a very large portion of the population is currently employed by the government. I could type 20 pages on the signs of recovery but I would be wasting my time. Time will be more persuasive. I don’t mean to upset anyone with my horrible tales of recovery. It’s just a heads up. 19. flashman said... nomad: I have always said that the recovery will be framed by higher interest rates and higher taxation. For that reason, I still believe that house prices will take a further hit. The extent of house price falls will, to a large extent, be determined by how 'jobless' the recovery turns out to be. I have little faith in unemployment forecasts (they are really difficult) but I stick by my 2.7 – 2.8 million ‘tipping point’ I have to go now but I will post a ‘signs of recovery’ list when an opportunity presents itself 20. cat and canary said... flashman, Im not a recidivist bear :) I hope for a recovery, I just dont believe it. mind if I ask what sector these large corporations are in? I work with major international electronics corps, and I can safely say that there is no "gearing up" in them. More to the point, most of them are only just starting the process of job cuts. 21. Charlie White said... Tons of QE reflated the housing bubble, which is the only segment of the economy anyone seems to care about. Is it a good thing that the UK has become a one-industry society, i.e., we sell wildly-overpriced houses to each other? What happens when we have to pay for all that QE? 22. krustyatemyhamster said... Oh flashman, you do make me smile. I'm still waiting for that $35 barrel of oil of yours. Cozza said..."Flashman obviously works in the public sector which is still expanding thanks to ponzi Brown and his desire to win the General Election." correction: Flashman obviously works in the city which is still expanding thanks to ponzi Brown and his desire to win the General Election. 23. hpwatcher said... correction: Flashman obviously works in the city which is still expanding thanks to ponzi Brown and his desire to win the General Election. Yes, I think Flashman misguided in the extreme.....though strangely ''satisfied''; perhaps he has just had his bonus......... 24. rumble said... A few multinationals with a positive outlook doesn't say much to me. I certainly wouldn't describe that as "everywhere". I await your list. 25. sold out said... Flashman said "They are, almost without exception, gearing up big time and there is genuine excitement for the first time in a few years (none of my customers work in the financial industry). The activity and investment of these firms will take a while to filter through to the street." are you sure you are not just seeing normal market behavour that allways happens during a resession, ie some of the bigger stronger companies are just taking advantage as others have fallen by the wayside? anyway look forward to your 20 pages on "signs of recovery" i think we are in for a long wait. 26. icarus said... The 'signs of recovery' are due to increasingly perverse relationships. Unemployment up? Good, that means IRs stay down and that's good for the stock market and other assets. And if unemployment is up that means workers have to do as they're told, so we see productivity rises. And how much of the recovery is due to one-off stimuli? If I trade in my old banger and buy a new car I'm probably just bringing the purchase a year forward - won't be buying next year the one I would have bought. And we know about stock market rises since March being due to a combination of QE and lack of alternative homes for the cash printed. But how much of the US/UK rises are due also to weak currencies? Much of the sales and earnings of companies on the S&P500, for example, takes place overseas, denominated in foreign currencies, but accounted for on the books of US-incorporated firms in dollars, so that sales and earnings rise as the dollar falls. It used to be that the $ and the market went up or down together ("as the $ strengthens investors put their money in $ assets") but apparently not now. So offshoring, outsourcing, de-industrialisation and the hollowing out of our economies is good for the markets, profits etc. A through-the-looking-glass world indeed. Lewis Carroll, Orwell and Kafka would be the seers of the age. 27. estrader said... A recovery? Great! More cheap and easy credit, more house price rises, more equity withdrawals and finally and ultimately more of my money to bail out the collapsing financial system in 10 years time. I will heavily short ALL banks in 8 years time and make a killing! I can't wait!! 28. flashman said... Thought I'd have a quick look at this post. The bitterness is quite unnerving. I said there were signs of recovery... not that I wanted to eat your children. Here's an article about the latest OECD report. The UK gets a mention "The strongest recovery is forecast to be in China, France, Italy and the UK". "OECD data point to strong signs of recovery" Published: November 6 2009 13:36 | Last updated: November 6 2009 13:36 All the world’s big economies are seeing strong signs of recovery from the worst global recession since the Great Depression, according to data from the Organisation for Economic Co-operation and Development. A measure of future economic activity by the Paris-based international organisation points to “expansion” across OECD developed economies as well as six big emerging markets in September for the first month since early 2007. The strongest recovery is forecast to be in China, France, Italy and the UK, with Canada and Germany also showing tentative signs of expansion. Get in step people 29. cat and canary said... I don´t think most people are suggesting you want to eat our children!.... i think they´re just suggesting you´re not right in saying "gearing up big time and there is genuine excitement" From the FT.. The OECD was cautious in recording the improvement in the global situation saying that “these signals should be interpreted with caution, as the expected improvement in economic activity, relative to long term potential levels, can be partly attributed to a decrease in the estimated long term potential level and not solely an improvement in economic activity itself”. The OECD data are intended to provide “early signals of turning points in economic activity”. They are based upon a range of different series on a national level that are then combined into the OECD measure. The figures appear to contradict other data which have shown that the UK is lagging much of the rest of the world in exiting the recession. Official figures showed that the UK contracted by 0.4 per cent in the third quarter. However, some business surveys have been much stronger, and some economists have cast doubt on the reliability of the official statistics. The UK OECD figures are based on the performance of non-financial stocks in the FTSE, several measures of the business climate from the Confederation of British Industry, as well as consumer confidence and interest rate measures. 30. devo said... You certainly are mischievous, flashman. No way would you return to this site day after day if you were genuinely optimistic about our economic future. 31. Donkey2409 said... Flashman @2 Don t take this personally, but you are deluded. Please indicate these "signs of economic growth" you see all around, and perhaps I'm wrong but wasn t GDP negative last quarter: unemployment is on the way up, a huge axe is slowly but inexorably approaching the public sector, wage growth is small or negative and the country is technically and factually broke...rises in house prices at the moment are not a sign of success, prosperity or economic growth, rather a sign of a deeply disfunctional economy. Sorry to have a go, but you're trying to call black white.... 32. icarus said... Flashy - OK, let's take China then. Industrial production, fixed investment, GDP growth, asset prices all strongly upward. Again it's just a government response that postpones the reckoning - $600 bn fiscal stimulus and twice that much in bank loans extended in the first 6 months this year. Flood the economy with cash to paper over the problems. An RBS analysis reckoned that half of those loans fed bubbles in equities, property and other assets. Not much of the private sector got those loans but if they did it put them into bubble assets. Most of the rest of the loans went into state companies, run by politically powerful families, and gov't-backed infrastructure - mostly of the 'bridges to nowhere' type (plans are to have twice as many km of high-speed roads than the US has, despite having less than a quarter as many cars). This all boosts demand for steel, concrete etc. but creates relatively few jobs (another jobless recovery). There's also massive investment in airports that can never pay for itself - lot's of 'toxic assets' in the pipeline. Meanwhile the SME sector, which provides 75% of jobs is declining while the well-funded state sector gobbles it up. There's a lot of corruption and nepotism involved in getting a prized, reasonably-paying state job. Official figures may show income growth and rising consumption but this is heavily weighted towards state salaries and gov't procurement. Incomes and private consumption appears weak and unable to compensate for the loss of exports. So yes, it is possible that suppliers to China are gearing up but this investment won't 'hit the streets' - ever.. 34. icarus said... Some companies will gear up to provide stimulus-induced demand such as I outlined for China @31. There isn't the job creation anywhere, including China, that will underpin and sustain that demand. 35. devo said... 32. flashman said... There is a very real economy in this country that beavers away regardless of all the HPC talk about ponzi schemes and debt fueled whatnots. Very righteous tub-thumping stuff, flashman. Consider this... The banks are on life-support The housing market is on life-support The car industry is on life-support The public sector is on life-support Small business is dying. 36. flashman said... devo: This is a house price site (not a great depression 2 site) and as I have said a thousand times, I think house prices will fall. I do not return to the site day after day. I only return to this site on occasion. I tend to post for a few days running and then I forget about it for a month or so. It's far too repetitive and I don't usually have the time This is not an 'end of days' and revolution site but the site has allowed content of this kind because it is desperate for the traffic. HPC is definitely on the way out 37. icarus said... flashman - so it's OK to say that boom times are ahead but not OK to deny that? 38. devo said... 35. flashman said... devo: This is a house price site And yet we both love s2r1's posts. How queer! 39. timmy t said... Flash - there will always be those who seek to gain from recessions and I'm sure you are talking about these companies, but there really are not enough of them to be able to say that the end is in sight yet. We need 2 consecutive quarters of decline to be in a recession but 20 minutes of growth and everyone claims it's all over. It ain't. Not by a long way. Sure you can sit there and say it is, and then in a couple of years when we do turn the corner you can say you predicted it first. That doesn't make you "in the know" it just means you happen to work in an industry that gets busy when times are bad. Like others here have said, we are all on life support. We are trying unprecendented fiscal policies and we're still going backwards. And when we do finally start moving forward, we will be faced with decades of paying off our deficit. 40. flashman said... hello icarus: It is the 'absolutes' that I have a problem with. Some production and demand is stimulus based and some isn't. Some stimulus money will be wasted and some will assist with the creation of new and genuinely sustainable business. Some businesses continued to function well throughout the recession without a sniff of stimulus. I saw an interesting statistic on the BBC the other day. Apparently 48% of small businesses either maintained their business or actually grew in the recession. 41. flashman said... devo: yes, I like str1. He's an original 42. devo said... 40. flashman said... devo: yes, I like str1. He's an original But he rarely talks about house prices nowadays which negates your post @35. 43. Jayk said... If the majority of the bears on this site were right in their 2007 and 2008 'predictions' we'd have an average house price of under 100k, oil at $250 per barrel and consumer inflation in excess of 25%. But you were wrong. Of course, now it's all "Ooh, wait 'til next year! You'll see!". Whatever..... 44. flashman said... timmy: I would love to have the ability to predict but all I am doing is reading hard data. My 'forecasts' are just a faithful reproduction of the latest thinking amongst forcasters and economists. I don't really rate my own forecasting skills. It is indeed ridiculous that a recovery consists of only one data point but for what it's worth, most people are predicting healthy growth for the next two quarters followed by more tepid growth. Once we've had three consecutive quarters of growth, we will start the slow slog of clearing up some debt. I never said it would be easy 45. clockslinger said... Flashman has a balanced view I would say...there will no doubt be many sectors and many people doing quite well and likely to do better. The jobless recovery /govnt. job cuts will be the big test of HP resistance...theres no public money left to spend on anything worthwhile now the banks have had it all. 46. devo said... 43. clockslinger said... there will no doubt be many sectors and many people doing quite well and likely to do better Care to develop that thought? 47. cat and canary said... flashman, i can´t argue with your subjective experiences of you/your clients, (what sectors do you deal with, from curiosity). I work in very large engineering firms, and believe me, it is not all rosy news. I am not arguing that GDP isn´t going to turn positive shortly. Of course it is. Your view is bullish compared to the FT account on the OECD report that I pasted is it not? Perhaps that is substantiated by your access to fresh data? But when the FT says "as the expected improvement in economic activity, relative to long term potential levels, can be partly attributed to a decrease in the estimated long term potential level and not solely an improvement in economic activity itself" ...then your views still seem litoo bullish. People here find it pretty difficult to believe that this growth we are seeing is not deep rooted, prove me otherwise. 48. flashman said... devo: You missed my point. You claimed that "No way would you return to this site day after day if you were genuinely optimistic about our economic future" I pointed out that this is a house price site (as opposed to an economic pessimism site). If it were an economic pessimism site, I would not return. As it is a house price site and I believe that house prices will fall, I return (occasionally) That does not stop me liking str1. You very rarely talk about house prices. Why do you return? 49. devo said... 42. flashman said... My 'forecasts' are just a faithful reproduction of the latest thinking amongst forecasters and economists. This statement encapsulates all that I find endearing about you... (I hope that doesn't come across as patronising) 50. devo said... 46. flashman said... You very rarely talk about house prices. Why do you return? 'cos it's fun! 51. flashman said... C&C: yes my 'bullishness' is substantiated by my access to fresh data. The data is positive and I can't report it any other way. We might claim that its only because of this and that but after a few quarters you have to stop doubting. By the way I wouldn't consider myself bullish. I just read the data and try to stay as 'Spock like' as possible "as the expected improvement in economic activity, relative to long term potential levels, can be partly attributed to a decrease in the estimated long term potential level and not solely an improvement in economic activity itself" the word 'partly' implies that the expected improvement is also partly real. Its not much of a negative. The rest of the report was very positive. I am not saying that its all gravy from now on. It'll be difficult but things are slowly coming back to life. I suspect that you also believe we'll see several quarters of growth. 52. flashman said... devo: no problem 53. Devon_bloke said... I like reading this site because I think there is a lot of intelligent opinion on the state of the economy. I find it interesting and educational. But for the amount of bright people on here I do wonder when I see people like flashman provoking the response they get. There's an old saying on the internet....don't feed the troll! 54. flashman said... C&C: btw, my views are not based on that OECD article. It was the first consumer available thing I could google 55. mander said... I will see recovery for the economy not for the housing market only when unemplyment stops and jobs are created by the private sector. Since then I have better things to do than listen to estate agents suppositions. 56. cat and canary said... allright flashman, will wait with interest regarding next few Q´s of GDP data. I still have my doubts for now regarding how strong that growth will really be. The forces of unemployment, inflation pressure and govt debt levels etc bearing down for considerable time yet. Whole divisions of my organisation are still not taking orders, irrespective of share price performance. But agree that things are slowly coming back to life at a corporate level. But perhaps not in many people´s pockets for quite some time. 57. rumble said... Flash, retail up does not equal a sign of recovery in the run up to christmas. Increased interest rates does not equal a sign of recovery. 58. quiet guy said... Flashman, Some of the criticisms of your earlier offerings are perhaps a bit too strident in tone but if you're still reading, I invite you to peruse the comments to this: I find it interesting to compare the tone of the comments to the Telegraph article with this blog. It's relatively easy to write us off as a bunch of "recidivist bears" doomers but can comments posted to the Telegraph also be dismissed quite so easily? Maybe I'm just a glass half empty personality but I'm tired of seeing failure being rewarded. Really tired. 59. flashman said... rumble @54: Economists compare like for like retail sales ie Nov 08 - Nov09. A central bank typically puts its interest rates up to cool growth and puts them down to encourage growth. 60. flashman said... Quiet guy: I am genuinely sorry to hear that you are tired of the way things are. I read the article and the comments. It has to be recognised that posters/commentators tend to be angry people. Contented people (the majority) do not generally bother. Many of the opinions expressed on this site would be met with amazement by the general public. Your moderate and reasonable postings are sadly not typical HPC fare, although there are several good posters. The angry reaction to my original post reveals that some posters are very insecure in their bearish beliefs. I have often been struck by the thought that some HPC'ers actually fear a recovery. My guess is that they perceived that an economic crash would level the playing field and wipe away years of frustration. I think bellwether calls it an equalisation fantasy. The recession is technically over (although there's still lots of pain to come) and soon there will be more good news than bad news. This site is on its last legs and the debates will get increasingly bad tempered. Utube clips and 'end of days' stuff has already wrecked the site and it will only get worse. I was initially fascinated by the mindset of the bearish blogger but I now realise that some of them are in serious pain and that parading my optimism might be a little unkind. I think it is, therefore, a good time to slip away. 61. hpwatcher said... Well, enjoy it Flashman, the question is how long will it last? I mean, how much longer will the FLOOD of cheap money that is QE, encourage people to take risks in an artificial economy giving the impression of real recovery. But what happens when the ''heroin'' is withdrawn? For some reason, you have chosen to see some of the effects of QE as a more sustainable recovery - just because your customers have - you should know this far better than anybody else. Yes, we will definitely see what will happen. 62. sold out said... Flashman where is the "signs of recovery list" you promised? seriously i would be interested to see them. btw i think you are wrong to assume that many of the postings here are an "angry" reaction to your bullish forecasts. You did say initially that "signs of recovery are everywhere if you care to look" It was this statement that i believe most here disagree with. later on in the thread you say "My opinions are are based on hard data (obviously we have better access to fresh data than most). The press will catch up in a few weeks" Which is a bit contradictory, don't you think? Anyway i look forward to your future postings as they allways seem to generate a good debate. 63. flashman said... sold out: I do try to stimulate good debate but you guys should self-regulate a bit better. I always tried to reply to posters such as yourself but why do you tolerate the crazy guys? If I were a bear, I would stamp on the nastier fellows because they drive away people who are prepared to make a counter argument. Read through this thread and ask yourself if you would bother? I really don't see how my two comments clash but no matter. I do look at fresh data but the signs have been there for some time. One confirms the other. Btw the signs really are everywhere and I have referenced several of them in this thread. Here's some examples from the article posted by devo this morning (there are some qualifying statements in the article but the jist is unmistakable) "Asia is dancing along as if the recession never happened". ." I'm afraid my promised list will never be posted (I might post on The Economic Voice if you're interested) but there really is no need. The press is full of it and will be for months to come. Cheers 64. cat and canary said... flashman, I appreciate your technical analysis, but disagree with your criticisms of blogger´s "anger" as being unrepresentative of the general public and plain "bad tempered" or becasue of "insecurity about a recovery" 1.) "moral hazard" is something real, aluded to by our BoE governor, and it represents the injustice of using huge amounts of taxpayers money to bail out the feckless. 2.) This was a HPC site, until the govt dumped the losses on the nation, then the issues became wider and deeper than that. Whereas many of us accept the govt had little choice, they do have a choice in tightening regulation and its not happened in any meaningful way according to many here. 3.) "Many of the opinions expressed on this site would be met with amazement by the general public." ... disagree... "a new BBC poll has found widespread dissatisfaction with free-market capitalism." (). With respect, I know that your are a intelligent person, like so many on this site. But if I was to make sweeping statements about bulls, I would say that "they live in crystal castles and have very little idea about the suffering of the rest of the world." But that would also be a sweeping statement. Given that 50 million people worldwide have already lost their job, and about the same number have already slipped below the poverty line as a direct result of the banking crisis, according to the world bank, then I think that the anger expressed here is justified. Very 65. hpwatcher said... "Asia is dancing along as if the recession never happened". This is definitely true, but the UK and the US are not Asia. C'mon then Flashman, admit that you are simply mistaking the effects of QE for a full blown recovery? QE isn't wealth creation. It's an easy mistake to make, lots of people are making it. I fear the electorate of the UK are going to make it too. Well, your mind is obviously as closed as those that you villify on here, so I won't try to convince you. 66. flashman said... C&C: It is important to remember that the vast majority of people have kept their jobs. Unemployment is less that it was in previous recessions, despite better productivity and a higher population. Some perspective is needed. 67. hpwatcher said... C&C: It is important to remember that the vast majority of people have kept their jobs. Unemployment is less that it was in previous recessions, despite better productivity and a higher population. Some perspective is needed. This has not been a 'conventional' recesison. 68. flashman said... hpwatcher: I actually understand QE. With respect I'm pretty sure that you don't. Of course it helped. That's what it was for. 69. flashman said... hp: The only thing unusual about this recession is that it is now fashionable to talk about it on blogs. Even the banking crisis has happened before (over and over throughout history). 70. cat and canary said... I expect that many people were angry in previous recessions also... Of course, many people have scraped through this with minor cuts. But the banking system very nearly did collapse with unknown consequences, which is great cause for concern. That is very real perspective 71. estrader said... flashman, an honest question (If you wouldn't mind answering): Using what you know and what you sense, would you buy a property now? Not asking for a long detailed reply, maybe just in point form as to some reasons why or why not. Many thanks. 72. flashman said... estrader: I just bought a large plot and I bought some land in France this summer. I think house prices will fall (as detailed elsewhere, ad nauseam) but the margin on my finished house should more than compensate. A pragmatic solution. I don't approve of waiting for a situation to suit me 73. smugdog said... When the troops start fighting each other, it's time to worry. Oh what a lovely war it is. 74. flashman said... smugsy: not sure I was ever one of the troops. 75. hpwatcher said... hpwatcher: I actually understand QE. With respect I'm pretty sure that you don't. Of course it helped. That's what it was for. I would not be so arrogant to pretend that I understand QE.....But, if I may ask, do you? I'm not sure even BOE understand what they have done. hp: The only thing unusual about this recession is that it is now fashionable to talk about it on blogs. Even the banking crisis has happened before (over and over throughout history). Yes, but not one quite like this. the margin on my finished house should more than compensate Yes, I understand that you now speculating on housing....and that you now have vested interests in talking things up. 76. sold out said... Thanks flashman for the reply. I am of the opinion that regardless of what we see at the moment the true indicator of where the uk economy is going will only be revealed once QE stops, we will see what happens middle of next year i guess. 77. flashman said... hpwatcher: I actually bought land with the intention of providing my family with a great home. It is not my intention to ever sell or 'speculate' as you put it. I couldn't care less if prices rise or fall "and that you now have vested interests in talking things up". A nonsensical comment. I repeatedly say that house prices will fall and back it up with a reasoned argument. "I would not be so arrogant to pretend that I understand QE.....But, if I may ask, do you? I'm not sure even BOE understand what they have done.". 78. smugdog said... You forecast capitulation, you report job losses, you detest the way that stupid street has been rescued and bailed out. You hope beyond hope of market crashes, disasters and government failure in order to provide that one Holy Grail to you - house price crash! But why? Why oh why do the majority on this site think in this way? I’ll tell you why, greed, the exact same value that you detest so much in the “sheeple” that you write about day after day. The majority are here waiting, hoping for the (don’t make me laugh, it hurts) CRASH So that they too can get “back in” and reap the rewards of buoyant markets after selling – not so high - and waiting, waiting, waiting. You are no different from the very people that you look down on from your ivory desks, where you have that sneaky look at HPC every 5 minutes, but ever so careful just in case the boss catches you. If anyone questions your views, your “precious things”, that’s not allowed in here! On your way. Flashman, don’t waste your time on these selfish self-serving individuals and move on, or if you enjoy a plaything, then do carry on, it’s so very entertaining. Flashman, Techie, Crunchy, S2R1, Sold Out, C& C and a few select more, your views are balanced, valued and well written. 79. flashman said... cheers smugdog. 80. p. doff said... That list of posters of balanced, valued and well written views. Agree for some of the names, but the rest ........ hahahahahhhaaawhaaahahawhaha . ooh my sides hurt! 81. hpwatcher said.... I disagree, I don't think you do know what you are talking about. Nor do I think BOE really know what they are doing either. selfish self-serving individuals Than one is a classic. 82. flashman said... hpwatcher: It doesn't matter what you think. The world continues to spin without your dourness and lack of comprehension. Smugdog is right. Praying for an economic calamity to suit your own ends, is indeed very selfish. 83. cat and canary said... well thanks smugdog, not sure how balanced i am! ..couple of pints down the Cat and Canary should sort out my balance! w.r.t. flashman, respect for standing up for what you believe in the face of fierce bears! Smugdog I'm not in favour of encouraging flashman to leave! We need his opinion, even if it is a bit bullish, haha ;-) w.r.t 'bashing bankers' - i live amongst them, and know a few. For the most part they're a bunch of intelligent 30-somethings, earning a 100K salary and looking out for themselves. But its the ones at the very top, pulling the strings, the Fred Goodwins etc, i'm wonder about, with such enormous power and influence comes great responsibility, and I've not seen much of the latter 84. p. doff said... Flash, Smugsy - Can a couple of months make that much difference? - Or do the conflicting views of a myriad 'knowledgeable' people suggest nobody really knows how/when this fiasco will pan out? Take your point though - it's obvious the doomsters are an infectious and relentless breed, whereas a lot of the more rational types seem to move on. 85. flashman said... Hello p.doff: I had a quick look. He says that “unemployment will still be rising c 100,000 PER MONTH to next year” It only increased by 12,900 in the last release and employment actually increased by 6000. I wouldn’t pay too much attention to this chap. I was quite bearish at the start of the year but you've got to be prepared to analyse data as it comes in. I'm always amazed by how quickly things can change but I'm no longer surprised by it 86. icarus said... flashman - you carried your bat through the innings. Well played, sir. 87. flashman said... Thanks icarus. On the strength of our last debate, I've started re-reading Das Kapital. It's absolutely brilliant. I read it in college but took a completely different meaning from it. 88. hpwatcher said... hpwatcher: It doesn't matter what you think. The world continues to spin without your dourness and lack of comprehension. Smugdog is right. Praying for an economic calamity to suit your own ends, is indeed very selfish And it doesn't matter what you think either. Dourness hasn't got anything to do with it - what I want is honesty. Feel free to throw as many insults as you want. We shall see. 89. sold out said... smugdog I personally detest the way the bail out has rewarded stupid incompetent bankers,stupid idiotic housing "developers", and BTL's etc etc. These risk takers have been saved by the rest of us, the prudent, those that have worked hard and saved. I read somewhere recently that this will eventually cost £24,000 per uk resident to save these suckers. How can that be right or fair? This is not capitalism or free markets as i understand them. I have just bought a house recently so have no personal reason for wanting a HPC, but i also do not wish to live in a country that continues to repeat the same boom and bust cycles regarding housing that are so damaging to all of us and future generations. If my money is being used to prop up the feckless and a HPC is avoided so be it, but in return i would expect regulation on banks, Changes in tax to prevent the BTL profiteers, a massive house building program...If that doesn't happen then sadly and with regret i and i believe many others will leave the UK for good. 90. flashman said... As you wish hpwatcher. Good luck to you 91. icarus said... flashman - you just convinced me. I ought to do the same - along with Ricardo, Smith et al. 92. flashman said... "regulation on banks, Changes in tax to prevent the BTL profiteers, a massive house building program" Sensible stuff. I couldn't agree more 93. hpwatcher said... As you wish hpwatcher. Good luck to you And good luck to you too. also do not wish to live in a country that continues to repeat the same boom and bust cycles regarding housing that are so damaging to all of us and future generations Yes, we seem to be getting into of spirals of asset booms - to keep the party going. It doesn't feel right to me, I can't see it continuing for much longer....so there has been a FLOOD of new money, which has led to artificial growth, but I don't see any real wealth creation....this is the real problem for the UK. 94. smugdog said... Good points - Sold Out. 95. techieman said... Flash - i hope you read this. I think you have a point. Its no good being a bear and banging you head against a brick wall. Personally i said (in Oct 2008) that we would be setting up for a retracement toward the highs. At that point i envisaged the return of the bulls and even some bears becoming bullish on this site. I did say though that then we would have the next move down. I think people need to calm down a bit - my own view is still that we will have the double dip. BUT i can accept being wrong on that, and perhaps we are in a new dawn. A bit of rambling there my point is this: If you are off, will you come back if the bear re-instates? Not for you to eat humble pie or anything am just interested in your views. 96. flashman said... hello techie: I am following this thread to the end and that’s it for me. I am going to take a year off to build a family house. I am also entered into the Etape du Tour next summer and will be in serious training for that (good excuse to spend a few weeks climbing Mount Teide this winter). It finishes with an horrendous climb from 457 meters to 2115 meters, so I have some work to do. I am only telling you this to explain why I wont be blogging. These days, I don’t really do much work in between meetings, so a spot of blogging with my coffee, was sometimes good filler Re the bear reinstating …I am ONLY talking about the actual economy, NOT the markets. I gave up trying to decipher their crazy dance years ago. As you know more than anyone, it's quite possible for the markets to tank because the economic recovery has become established or continue growing or do a triple back flip summersault with twizzle shapes. I remain a property bear. I doubt that will change in the foreseeable future because interest rates and taxation will rise at some point. I wish you well 97. smugdog said... You take care out there Flash. Good luck 98. techieman said... Ok Flash - good luck and have fun. I am sure our paths (as opposed to swords) will cross one day... if they havent already! 99. quiet guy said... Cheers Flashman. I agree with techieman; it would be interesting to compare notes in a year or so about the UK economy. 100. flashman said... Cheers quiet guy I will make a gratuitous post because 100 is a nice looking number. Techie is probably talking about comparing notes on the equity markets. I'm glad he brought that up because it gave me the chance to make it clear that I was only talking about the actual economy. I don't tend to think in terms of the markets but I'm flattered that you would want to compare notes with me and will of course try to oblige sometime in the new year. All the best.
http://www.housepricecrash.co.uk/newsblog/2009/11/blog-dclg-mom-yoy-26322.php
crawl-003
refinedweb
7,823
72.26
OBJ Importer Plugin On 20/03/2013 at 01:58, xxxxxxxx wrote: i have multiple obj files in one folder and want to load them into the scene. i have a plugin script which reads the files and print them to the console but the didn't open. can anybody have a look. import c4d from c4d import gui from c4d import documents from c4d import utils, bitmaps, storage,plugins import collections, os,math #get an ID from the plugincafe PLUGIN_ID = 1000901 plugName = "NEckimporter" def doSomething() : pass class ReadFolder(c4d.plugins.CommandData) : dialog = None def Execute(self, doc) : path=c4d.storage.LoadDialog(c4d.FILESELECTTYPE_ANYTHING, "Please choose the folder", c4d.FILESELECT_DIRECTORY) dirList=os.listdir(path) for fname in dirList: print fname c4d.documents.LoadFile(fname) # the prints works, but not the load return True def RestoreLayout(self, sec_ref) : return True On 20/03/2013 at 04:52, xxxxxxxx wrote: you are not passing a valid path arguments to the LoadFile method. listdir returns a list of the file names for the given folder path, not the file paths contained in in a folder path. please use [KODE]mycode[/KODE] flags for your code (code written with c instead of k). On 20/03/2013 at 05:42, xxxxxxxx wrote: Yeah i figured it out, i have to take the whole file path. Its now working, but it opens each obj in its own document. i want to merge it into the same document like the import function. is there a way without merging documents together? this is the only solution i find in the docs. On 20/03/2013 at 06:37, xxxxxxxx wrote: yeah it is obviously loading each file into seperate document, i thought this was intentional. to merge the file with your current document either use c4d.documents.mergedocument or c4d.documents.loaddocument. the first method just works like the command know from the c4d file menu, while the second method does not do any loading, but returns a BaseDocument. you would have then manually search and insert the data from the returned BaseDocuments On 20/03/2013 at 07:22, xxxxxxxx wrote: hi littledevil, can you point me out how to use the merge documents? i couldn't figure it out, how to merge all into one. On 20/03/2013 at 08:10, xxxxxxxx wrote: Originally posted by xxxxxxxx hi littledevil, can you point me out how to use the merge documents? i couldn't figure it out, how to merge all into one. not sure what is meant with that. just merge the documents in an iterative fashion or use the loaddocument method and do it manually. def Execute(self, doc) : ... res = True for path in pathlist: res = res and documents.MergeDocument(doc, path, c4d.SCENEFILTER_OBJECTS) edit: it might be necessary that you send a message to the hosting basedocument each time you merge a document with the hosting document before you merge the next document (see basedocument.sendinfo). haven't done such mass merging yet. if it still fails to work for multiple documents simply use loaddocument and add the objects/ materials manually. On 21/03/2013 at 08:32, xxxxxxxx wrote: ok i tried the way to read the first doc ad merge it with the active (which is the last opened) with this code but nothing happened. no failure an no merging. def Execute(self, doc) : path=c4d.storage.LoadDialog(c4d.FILESELECTTYPE_ANYTHING, "Please choose the folder", c4d.FILESELECT_DIRECTORY) dirList=os.listdir(path) for fname in dirList: openName = os.path.join(path, fname) print openName r = c4d.documents.LoadFile(openName) firstDoc = c4d.documents.GetFirstDocument() activeDoc = c4d.documents.GetActiveDocument() print firstDoc print activeDoc c4d.documents.MergeDocument(activeDoc, firstDoc, c4d.SCENEFILTER_OBJECTS)# | c4d.SCENEFILTER_MATERIALS) On 21/03/2013 at 09:36, xxxxxxxx wrote: use codetags and read the documenation. MergeDocument accepts a BaseDocument for the hosting document variable and a string path or MemoryFileStruct for the document to merge. you pass two BaseDocuments. also your approach will open multiple docments within c4d, which is apparently not your goal. the reason why you are not getting any result is, that you do not read/print the MergeDocument result. It will be always False, as your second parameter is not in the expected format. On 21/03/2013 at 10:39, xxxxxxxx wrote: Give this a try. It should load all of the .c4d scene files, .obj files, etc. in the selected folder import c4d from c4d import gui from c4d import documents from c4d import utils, bitmaps, storage,plugins import collections, os, math #get an ID from the plugincafe PLUGIN_ID = 1000901 class ReadFolder(c4d.plugins.CommandData) : def Execute(self, doc) : path=c4d.storage.LoadDialog(c4d.FILESELECTTYPE_SCENES, "Please choose the folder", c4d.FILESELECT_DIRECTORY) dirList=os.listdir(path) for fname in dirList: openName = os.path.join(path, fname) #print openName c4d.documents.MergeDocument(doc, openName, 1) c4d.EventAdd() return True if __name__ == "__main__": help = "The text shown at the bottom of C4D when the plugin is selected in the menu" plugins.RegisterCommandPlugin(PLUGIN_ID, "Read Folder", 0, None, help, ReadFolder()) -ScottA On 22/03/2013 at 01:24, xxxxxxxx wrote: Hey ScottA, this code works like a charm :) thanks.
https://plugincafe.maxon.net/topic/7047/7962_obj-importer-plugin
CC-MAIN-2020-40
refinedweb
859
58.69
ASP.Net MVC 3 - An Overview Part 1 ASP.Net MVC 3 is one of the approaches of developing Web Applications using Microsoft Technology.The other approach is using ASP.Net Web Forms which is built on .Net framework.Depending upon the architectural decision we need to choose which approach will best suit with our requirement.Each approaches have its own prons and cons like any other technology.We Explore them in detail in this section. Welcome to ASP.Net MVC What is ASP.Net MVC 3? ASP.Net MVC 3 is a microsoft framework for developing web applications.A framework is nothing but structural piece of software upon which other applications are built.Since its a framework it will automate all the tedious and common task so that development will be easier.This will make life of a progammer quite easy.Moreover it is an open source software. Journey of ASP.Net MVC 1. ASP.Net MVC 1 was released on March 2009. 2. ASP.Net MVC 2 was released on March 2010. 3. ASP.Net MVC 3 was released on January 2011. 4. ASP.Net MVC 4 beta released on February 2012. Out of these, currently ASP.Net MVC 3 is available for testing. MVC Pattern Vs ASP.Net MVC 3 They are not interchangeable.Here MVC refers to Model-View-Controller.Its quite older design pattern.As you know,patterns formalizes best pracices. The only relation between ASP.Net MVC 3 and MVC is that ASP.Net MVC 3 framework uses MVC pattern as their architetural pattern. ASP.Net Web Forms Vs ASP.Net MVC 3 I already mentioned both ASP.Net Web Foms and ASP.Net MVC 3 are different approaches developing web applications.Both are built on top of ASP.Net.Both does not hide stateless nature of web development.But there exists subtle differnces between these two approaches.Few of them are listed here: 1. ASP.Net Web forms built on Page Contoller Pattern where we add functionalty to individual pages where as ASP.Net MVC 3 is build on Model-View-Controller Pattern.Here controller is the core object who manages to render the appropriate view. 2. ASP.Net MVC 3 is designed in such a way that it make unit testing very efficient where in case of ASP.Net Web forms unit testing is quite tedious. 3. ASP.Net MVC 3 does not use postback,viewstate,no page lifecycle,does not have rich server controls,etc,but it provides full control of the rendered HTML.Where as ASP.Net Web forms supports postback,view state,rich controls etc.,but here we don't have full control over the rendered HTML. 4. ASP.Net MVC 3 follows Test Driven Application Development(TDD) where as ASP.Net Web forms not. 5. ASP.Net Web forms provides Rapid Application Development(RAD)where as ASP.Net MVC 3 provides loosely coupled approach where it follows seperation of task. Main Ingredients of ASP.Net MVC The main parts of ASp.Net MVC are: 1. Model : A set of classes which contains the application data and the business logic for how data can be inserted or updated or deleted.The model is accessible by both controller and view.By the help of model, we can keep data object and the logic that operates on the data seperate from rest of the application. 2. View : Responsible for displaying the User Interface. 3. Controller :It is the heart of MVC which is responsible for handling communication from the user and it contains application specific logic. As it name implies it controlls the entire application.Controller can access model class to pass data to the view. Features of ASP.Net MVC 1. seperation of application task in to model,view and contoller.Thus it helps to manage complexity.Also multiple developers can independently work for the same module. 2. It does not use viewstate,postback,web forms and server controls and does not have page life cycle.so it gives developers full control over the rendered HTML. 3. It follows a test driven development approach. 4. Unit testing the application is very easy and efficient compared to its web form counter part. 5. Easy integration with Javascript. What is new in ASP.Net MVC 3 Listed are few important features of ASP.Net MVC 3. 1. Razor view Engine 2. JQuery Validation Plugin. 3. Improved Ajax Support. 4. Nuget 5. Global Action filters 6. Dynamic language support. Requirements for ASP.Net MVC development Operating System : Window XP,Vista,2003,2008 Softwares : 1. Visual Studio 2010 0r web developer 2010 2. ASP.Net MVC 3 Tools Steps for developing ASP.Net MVC 3 Application 1. Open Visual Studio 2010 0r web developer 2010 2. Select ASp.Net MVC 3 Application 3. Select the application template,view engine and check the unit test project checkbox if you wish to geneate testcases. About Application Templates There 3 types of templates available.They are: 1.Internet Application template : This is meant for Web Application 2.Intranet Application template : This is meant for Windows Application 3.Empty template : This is meant for experienced MVC developers who wish to do set up and configuration according to their needs. About View Engine View Engine allows to generate HTML mark up in the ASP.Net application. There are 2 types of view engines are available.They are: 1. ASPX view : This is the default view. Till MVC 2 only ASPX view were available. It uses comples xml like syntax. 2. Razor view : This is a new feature available in MVC 3.It uses raizor's syntax. Files and Directories If you choosen any template other than empty template some files and directories are added to the ASP.Net MVC project automatically.They are : 1. Controllers : This folder contains the controller classes which handle the URL request. 2. Views: This folder contains the view responsible for rendering the HTML.This contains subfolder called Shared which contains all the reusable components. 3. Models : This folder contains classes that repesent and manipulate data or business objects. 4. Script : This folder contains Javascript files. 5. Content : This folder contais CSS and image files. 6. App_Data : This folder contains data files that you want to read/write. 7. Global.asax 8. Web.Config 9. Packages.config etc. What is Controller ? In ASP.Net MVC,controller is a class inherited from System.Web. Mvc. Contoller class. All the incoming request in an ASP.Net application are handled by contollers.It is responsible for flow control logic.Once the request comes to the controller it will communicate with the model and once it got a response from the model it is the contoller who decides which view should be rendered. Sample Controller Class public class ProductController: Controller { public Actionesult index(){ return View(); } } Here index is the controller's action method and action result is a class which allows to returns a view. I will dicuss more about action result later. What is View ? You can see in the above example, that a controller action return a view.A view contains HTML mark up and content that is sent to the browser.View must be created in the right location under the view folder. Rules for creating a view : 1. A sub folder must be created under the Views folder with the same name as your respective controller. 2. Within the above mentioned sub folder create .cshtml/.vbhtml file in case of razor view and .aspx file in the case of ASPX view. 3.The name of the above file must be same as the controller action. Sample View @{ ViewBag.Title = "Index Page"; } Here @{...} is the razor syntax.In prior view engines if you can recollect we have used <%...%> syntax. This is one among the many new features of ASP.Net MVC 3. What is a Model ? Model is a class which contains application business logic,validation logic and data access logic. ASP.Net MVC 3 Applicaton Life Cycle The following are major stages in the life of ASp.Net MVC application. 1. User enters a specific URL in the browser: At this time,in the Global.asax file,route objects are added to the route table collection. 2. Perform routing : The Url routing module uses the first matching route object from the route table collection. 3. Create MVC request handler: The MvcRouteHandler object creates an instance of MvcHandler class and passes the RequestContext instance to the handler. 4. Create Controller: The Mvcandler object user the RequestContext instance to identify the IControllerFactoy object. 5. Execute Controller: The Mvcandler instance call the controller's execute method. 6. Invoke Action : The ControllerActionInvoker object associated with the contoller determines which action method of the controller to call and call the respective method. 7. Execute result: The controller action method communicate with the model to get the response data and execute the result by returning the result type. ASP.Net MVC framework handle all these actions in the back ground.So the developer does not bother about these knitty-gritty details. Welcome to ASP.Net MVC A Good Article presented in an easy understandable way
http://www.dotnetspider.com/resources/43753-ASP-Net-MVC-An-Overview.aspx
CC-MAIN-2018-39
refinedweb
1,524
62.04
- 問題 Problem 42:Coded triangle numbers? - 解答例 def listTriNum(num): list = [1] i = 2 while i * (i + 1) // 2 <= num: list.append(i * (i + 1) // 2) i += 1 return list042_words.txt", "r") name = [] for line in f: text = line.replace('\n', '') text = text.replace('\r', '') text = text.replace('"', '') name = text.split(",") f.close() maxLen = 0 for n in name: if maxLen < len(n): maxLen = len(n) triangleNum = listTriNum(maxLen * 26) count = 0 for n in name: score = 0 temp = list(n) for x in temp: score += num[x] if score in triangleNum: count += 1 print(n, score) print(count) - 用意したファイル(p042_words.txt)
http://blog.muchance.jp/entry/2017/06/29/233000
CC-MAIN-2019-09
refinedweb
102
78.14
. An object whose identifier is declared without the storage-class specifier static and without any external or internal linkage has automatic storage duration, which ends (provided object is not a variable length array) with the end of execution of the block in which it is declared. It may or may not be declared with the specifier auto. For most of local variables, we do not use auto with them. An object whose identifier is declared with storage-class specifier static or with an internal or external linkage has static storage duration. Its lifetime is the entire execution of program and its stored value is initialized only once, prior to program startup. Qualifiers Auto, Register, Extern, and Static In C language, variables are not only the data type but also the storage class that provides information about their location and visibility. Auto: It is a local variable known only to the function or block in which it is declared. Register: A CPU has a number of registers for various purposes. A programmer may use the qualifier register for the declaration of a variable. register int n = 90; The variable value may be loaded on a register so that it is readily (in less time) available to the processor. The time of execution of program may decrease by this specification. However, it is left to the compiler to decide whether the variable can be loaded on a register. Extern: The specifier extern gives the variable an external linkage. Static: The static specifier gives internal linkage to variable. Program illustrates extern, static, auto, and register qualifiers: #include <stdio.h> extern int n = 10; int y= 5; void main() { static int D = 5; register int x =7, m; auto int K ; clrscr(); K = y*y; m = x*x; printf("n * n = %d \t y * y = %d\n", n*n, K); printf ("m = %d\n", m*D ); } The expected output is as given
https://ecomputernotes.com/what-is-c/types-and-variables/c-storage-class-specifiers
CC-MAIN-2019-30
refinedweb
318
51.89
Hello world ! So I'm new in Sublime text plugins and I'm trying to get all the .cs and .dll files in the file directory I have two problems : _The first is I have no feedBack from sublime text when an error occurs (is it normal ?) _The second is from my code (i tested it in the python idle) : import sublime, sublime_plugin import os class OpenCompilerCommand(sublime_plugin.TextCommand) : def run(self, edit) : file_name=self.view.file_name() path=file_name.split("\\") path.pop() command = "csc /out:Main.exe" dllArray = ] CodeFileArray = ] pathToDir = "" for dir in path: pathToDir += dir + "\\" sublime.error_message(str(pathToDir)) don't really know what's wrong...If you could also give a link for a beginner tutorial (I already googled it but... no good results) Thanks ! If you are new to programming, read this: learnpythonthehardway.org/If you are new to Python, read this: docs.python.org/2/tutorial/If you don't know how to google, click this: bit.ly/UXF0p0
https://forum.sublimetext.com/t/problems-with-the-listing-of-the-current-directory/8332
CC-MAIN-2016-40
refinedweb
163
60.11
I have tried on another site to get some input by did not get any help so I am going to try here. I have wrote this for my class and was wondering if there were any additions you could suggest me to improve the working of it.....Beginner here as you can prob tell by the difficulty of this code. Thanks Code:#include <iostream> using namespace std; int main() { cout <<"Thankyou for using CSE 1284's paint estimator. "; cout << "\n\nPlease enter following numbers in feet. "; double roomWidth; double roomLength; double roomHeight; cout << "\nHeight of the room: "; cin >> roomHeight; cout << "Width of the room: "; cin >> roomWidth; cout << "Length of the room: "; cin >> roomLength; double windowHeight; double windowWidth; cout << "Height of the window: "; cin >> windowHeight; cout << "Width of the window: "; cin >> windowWidth; double doorHeight; double doorWidth; cout << "Height of the door: "; cin >> doorHeight; cout << "Width of the door: "; cin >> doorWidth; double doorBoth = doorHeight * doorWidth; double windowBoth = windowHeight * windowWidth; double roomAll = roomWidth * roomHeight; double roomA = roomHeight * roomLength; double roomB = (roomAll + roomA) * 2; double gallons = (roomB - (doorBoth + windowBoth)) / 350; double coat; cout <<"\nHow many coats of paint will be needed? " ; cin >> coat; double exactGallon = coat * gallons; cout << "\n\n\n " <<exactGallon << " are needed to paint the room with " << coat << " coats. "; cout << "\n\n "; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/98473-additions-my-program.html
CC-MAIN-2014-10
refinedweb
213
62.21
Azure Search is a powerful search service available in Microsoft’s Azure cloud. Azure Search provides excellent support for indexing and querying data while at the same time shielding you from the intricacies of deployment, management, and search algorithms. Azure Search gives you an easy way to integrate powerful search capabilities (scalable full-text search, indexing, filtering, geospatial search, etc.) into your web and mobile applications. In this article we’ll look at how we can work with the Azure Search SDK in .Net. How to work with the Azure Search SDK Our journey will take us through the following steps. - Create a new Azure Search service via the Azure Portal if you don’t have one - Create a .Net application to work with Azure Search - Create and initialize a SearchServiceClient - Create an index - Upload documents to the index - Query the Azure Search service The Azure Search SDK comprises the Microsoft.Azure.Search client library. You can use the Azure Search SDK to upload your documents and execute queries without having to deal with the JSON data. The Microsoft.Azure.Search library contains the Index, Field, and Document classes, and it supports operations such as Indexes.Create (to create an Azure Search index) and Documents.Search (to search for documents in an Azure Search index) on the SearchServiceClient and the SearchIndexClient classes respectively so as to enable you to create and manage indexes and documents in the Azure Cloud. Create an Azure Search service Creating an Azure Search service is quite straightforward. Naturally you will need an Azure account. You can create one for free if you don’t have one. Follow this link to create a free Azure account. Sign in to the Azure portal and create a new Azure Search service (assuming you don’t already have one to use). You can follow the steps outlined here to provision the service. Create a .Net console application Create a new Console Application Project in Visual Studio and save it with a name. Select the project in the Solution Explorer window and install the Microsoft.Azure.Search package via the NuGet Package Manager. Once the installation is successful, you are all set to begin working with the Azure Search SDK. Let’s roll our sleeves now and write some code. Create the following POCO class in the Console Application project you just created. We will use it to represent the search document for now. We will upload and search some real documents later in this article. public class Author { public string Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string Address { get; set; } } Create a SearchServiceClient and connect to an Azure Search service The following code snippet can be used to create a SearchServiceClient instance and connect to the Azure Search service using the service name and the API key. static string serviceName ="IDGSearch"; static string apiKey ="Enter your Api key here"; SearchServiceClient serviceClient = new SearchServiceClient(serviceName, new SearchCredentials(apiKey)); Create an index in an Azure Search service When working with Azure Search you should create one or more indexes. An index may be defined as a persistent store of documents and can be created using a REST API or the .Net SDK. The following code snippet illustrates how you can create a new index. Note that this method accepts a reference to the SearchServiceClient and the name of the index to be created as parameters. private static void CreateIndex(SearchServiceClient client, string indexName) { var indexDefinition = new Index() { Name = indexName, Fields = new[] { new Field(“Id”, DataType.String) { IsKey = true}, new Field(“FirstName”, DataType.String) { IsSearchable = true, IsSortable = true, IsFilterable = true}, new Field(“LastName”, DataType.String) { IsSearchable = true, IsSortable = true, IsFilterable = true}, new Field(“Address”, DataType.String) { IsSearchable = true, IsSortable = true}, } }; client.Indexes.Create(indexDefinition); } Delete an index from an Azure Search service You can also delete an index if it is no longer needed. The following code snippet illustrates how to delete an index (if it exists). private static void DeleteIndex(SearchServiceClient serviceClient, string indexName) { if (serviceClient.Indexes.Exists(indexName)) { serviceClient.Indexes.Delete(indexName); } } Upload documents to an Azure Search index There are a couple of ways to populate Azure Search index with data, using either the .Net SDK or the Azure Search REST API. The Upload method given below shows how you can take advantage of the Azure Search Client SDK to upload one or more documents into an Index in a batch. private static void Upload(ISearchIndexClient indexClient, List<Author> authors) { var indexBatch = IndexBatch.Upload(authors); indexClient.Documents.Index(indexBatch); } Search documents using Azure Search Assuming that you have already created and uploaded documents, you can take advantage of the Search method to locate documents in the Azure Search index. The following code illustrates how this can be achieved. private static void Search(ISearchIndexClient indexClient) { SearchParameters parameters = new SearchParameters() { Select = new[] { “FirstName”, “LastName” } }; DocumentSearchResult<Author> searchResults = indexClient.Documents.Search<Author>(“Joydip”, parameters); foreach (SearchResult<Author> result in searchResults.Results) { var document = result.Document; Console.WriteLine(document.FirstName + “\t” + document.LastName); } } Azure Search is a full-featured search engine that provides a simple query syntax and also supports Lucene syntax for more advanced uses like fuzzy matching and regular expressions. It can automatically index Azure data sources including Azure SQL Database, Azure Cosmos DB, and Azure Blob storage, and can be used to index and search data from any source that can be uploaded in JSON.
https://www.infoworld.com/article/3269035/how-to-use-azure-search-in-net.html
CC-MAIN-2021-43
refinedweb
903
54.93
Plan content deployment (SharePoint Server 2010) Applies to: SharePoint Server 2010 Topic Last Modified: 2011-09-28 Content deployment is a feature of Microsoft SharePoint Server 2010 that you can use to copy content from a source site collection to a destination site collection. This article contains general guidance about how to plan to use content deployment with your SharePoint Server 2010 sites. It does not describe the purpose and function of content deployment, explain content deployment paths and jobs, or explain the security options when you deploy content. This article does not explain how the content deployment process works, nor does it explain how to set up and configure content deployment. For more information, see Content deployment overview (SharePoint Server 2010). In this article: About planning content deployment Determine whether to use content deployment Determine how many server farms you need Plan the export and import servers Plan content deployment paths - - Content deployment planning worksheet The planning process that is described in this article starts with helping you determine whether to use content deployment with your SharePoint Server 2010 solution. The remainder of the article describes the steps that are required to plan a content deployment solution: deciding how many server farms are necessary, planning the export and import servers, planning the content deployment paths and jobs, and special considerations for large jobs. You can record this information in the worksheet that is referenced in the Content deployment planning worksheet section. Although content deployment can be useful for copying content from one site collection to another, it is not a requirement for every scenario. The following list contains reasons for why you might want to use content deployment for your solution: The farm topologies are completely different. A common scenario is one in which there are authors publishing content from an internal server farm to an external server farm. The topologies of the server farms can be completely different. However, the content of the sites to be published is the same. The servers require specific performance tuning to optimize performance. If you have a server environment where both authors and readers are viewing content, you can separately configure the object and output caches on the different site collections based on the purpose of the site or user role. There are security concerns about content that is deployed to the destination farm. If you do not want users to have separate accounts on the production server, and you do not want to publish by using only approval policies, content deployment lets you restrict access to the production server. Before you implement a content deployment solution, you should carefully consider whether content deployment is really necessary. The following list contains alternatives to using content deployment: Author on production using an extended Web application If you have a single-farm environment, you can choose to allow users to author content directly on the production farm and use the publishing process to make content available to readers. By using an extended Web application, you have a separate IIS Web site that uses a shared content database to expose the same content to different sets of users. This is typically used for extranet deployments in which different users access content by using different domains. For more information, see Extend a Web application (SharePoint Server 2010). Create a custom solution You can use the Microsoft.SharePoint.Deployment.SPExport and Microsoft.SharePoint.Deployment.SPImport namespaces from the SharePoint Server 2010 API to develop a custom solution to meet your needs. For more information, see How to: Customize Content Deployment for Disconnected Scenarios. Use backup and restore You can use backup and restore to back up a site collection from one location and restore it to another location. For more information, see Back up a site collection in SharePoint Server 2010 and Restore a site collection in SharePoint Server 2010. If you decide that using content deployment in SharePoint Server 2010 is right for your solution, continue reading this article. A typical content deployment scenario includes two separate server farms: a source server farm that is used for authoring, and a destination server farm that is used for production. You can also use content deployment to copy content between two separate site collections within the same server farm, or you can use a three-tier server farm that contains a server for authoring, one for staging and quality assurance, and one for production. If you will be using content deployment, you should also decide how many server farms are necessary for your solution. For more information about topologies for content deployment, see Design content deployment topology (SharePoint Server 2010) After you have decided on a topology for your server farm, you must decide which servers will be the export and import servers. These are the servers in the server farm that are used to run the content deployment jobs. They do not have to be the same as the source or destination servers. However, the servers that are designated as export and import servers must have the Central Administration Web site installed. Decide which servers will be configured to either send or receive content deployment jobs and to record your decisions. In the content deployment planning worksheet, record each server farm in your content deployment topology, and note its purpose. For each server farm, provide the URLs of the export server, the import server, or both. Also record the Active Directory domain that is used by the farm. A content deployment path defines a source site collection from which content deployment can start and a destination site collection to which content is deployed. A path can only be associated with one site collection. To plan the content deployment paths that are needed for your solution, decide which site collections will be deployed and define the source and destination for each path. For more information about paths, see Content deployment overview (SharePoint Server 2010). If you will be using a three-stage farm topology, you must also plan for how content will be deployed across the farms. In general, you should reduce the number of “hops” the content makes as it moves from authoring to staging and then to production. For example, if you want to test content on the staging farm before you push it to production, you can deploy content from the authoring farm to the staging farm first, and then deploy content from the authoring farm to the production farm after the content has been verified. This means that only the authoring farm is responsible for deploying content to all other farms in the environment. Although it is possible to deploy content from authoring to staging, and then from staging to production, it is not necessary to use this approach. When you design content deployment paths for a three-stage farm topology, you must also carefully plan the scheduling of the jobs that will deploy the content to the other farms in the environment. For more information about content deployment topologies, see Design content deployment topology (SharePoint Server 2010). Record each path in the content deployment planning worksheet. For each path, enter the source and destination Web applications and site collections. Also record how much security information to deploy along the path: All, Roles only, or None. After you have defined the paths along which site content will be deployed, you must plan the specific jobs to deploy the content. A content deployment job lets you specify that a whole site collection or only specific sites in a site collection will be deployed for a specific path. Jobs also define the frequency with which they are run and whether to include all content, or only new, changed, or deleted content. You can associate multiple jobs with each path. For each path that you have defined, you must decide whether a job will deploy the whole site collection or will deploy specific sites. As you plan the scope of your content deployment jobs, be sure to think about the order in which the jobs will run. You must deploy a parent site collection or site before you can deploy a site below it in the hierarchy. For example, if you have a site collection with two sites below it, Site A and Site B, and Site A also has two sites below it, Site C and Site D, you must create and run a job that will deploy the top-level site collection, before you can deploy Site A and Site B. You must also deploy Site A before you can deploy Site C and Site D. If you plan to use content deployment jobs that are scoped to specific sites, be sure to schedule the jobs appropriately so that sites higher in the hierarchy are deployed before sites lower in the hierarchy. You must also decide when and how often to run each job. In general, you should schedule jobs to run during times when the source server has the least amount of activity. Content that is checked out for editing by a user when a content deployment job starts will be ignored by the content deployment job, and it will be copied with the next deployment job after it is checked in. You can configure a job to use a database snapshot of the content database in Microsoft SQL Server 2008 Enterprise Edition to minimize risk to the content deployment job. If you will be using a three-stage farm topology, you must also plan for when content is deployed across the farms. For example, if you deploy content from the authoring farm to the staging farm to test and verify content, you should plan to schedule the job that deploys content to the production farm so that there is enough time to resolve any issues that are found on the staging farm. For each path, record each associated job in the content deployment planning worksheet. If there is more than one job for a path, insert a row underneath the path for each job to be added. For each job, enter the scope and frequency with which the job will run. A content deployment job exports all content, as XML and binary files, to the file system on the source server and then packages these files into the default size of 10 MB .cab files. If a single file is larger than 10 MB, such as a 500 MB video file, it will be packaged into its own .cab file, which can be larger than 10 MB. The .cab files are then uploaded by HttpPost to the destination server where they are extracted and imported. If the site collection that will be deployed has a large amount of content, you must make sure that the temporary storage locations for these files on both the source server farms and the destination server farms have sufficient space to store the files. In many cases, you might not know the size or number of .cab files that will be included in the job until you start using content deployment. But if you know that your site is large and will contain lots of content, make sure that you plan for sufficient storage capacity as part of your content deployment topology. Download an Excel version of the Content deployment planning worksheet ().
https://technet.microsoft.com/library/cc263428(office.14).aspx
CC-MAIN-2015-40
refinedweb
1,884
56.79
Property node that defines a color mapping with constant color in each interval. More... #include <MeshVizXLM/mapping/nodes/MoLevelColorMapping.h> This node specifies a color mapping defined by a set of N scalar values (thus N-1 intervals) and N-1 colors representing the constant color used for values located in each interval. Thus, for a given value v, Vk <= v < Vk+1, the associated color is Ck. Notes: MoCombineColorMapping, MoCustomColorMapping, MoLinearColorMapping, MoPredefinedColorMapping MeshVizColorMapping, Legend, MaterialAndDrawStyle Constructor. Initially the color mapping is empty and has no effect. Returns the type identifier for this class. Reimplemented from MoColorMapping. Returns the type identifier for this specific instance. Reimplemented from MoColorMapping. Contains a set of N-1 color values defining the constant color of each level. Each color consists of R, G, B and A values in the range 0..1. Default is empty. Max Threshold color (R, G, B and A values in the range 0..1). Default is transparent black (0,0,0,0). Max Threshold enable flag. When TRUE, values higher than maxThresholdValue are displayed using the maxThresholdColor. Default is FALSE. Max Threshold value. Default is 0. Min Threshold color (R, G, B and A values in the range 0..1). Default is transparent black (0,0,0,0). Min Threshold enable flag. When TRUE, values lower than minThresholdValue are displayed using the minThresholdColor. Default is FALSE. Min Threshold value. Default is 0. Contains a set of N scalar values defining the levels of the colormap. Default is empty.
https://developer.openinventor.com/refmans/latest/RefManCpp/class_mo_level_color_mapping.html
CC-MAIN-2021-25
refinedweb
249
54.59
Bug. One of the points in last year’s The World’s Most Maintainable Programming Language is that it’s impossible for a programming language to enforce all but the most basic coding standards. I much prefer to read poorly-indented code with good identifier names than beautifully-indented code with meaningless identifiers, for example. I have yet to see a programming language that enforces meaningful names. If programming languages can’t even do that, maybe programmer discipline matters more for maintainability than language choice. (I do require a minimum set of features in a programming language. If it weren’t for C’s ubiquity, the lack of namespaces would sink it as a practical language for me.) I also have yet to see a programming language that magically allows barely-competent monkeys to produce good code. Yet somehow people still believe that choosing the right language will sedate their surly simians. (No, the Bugzilla coders aren’t monkeys. Max K-A has my respect. I just wanted to dispense my monkey-related wisdom yet again.) I am the "Perl guy" in a room full of Java devs. It happens that a lot of our Perl code base was written by "barely-competent monkeys" and has only reinforced their notion that Perl is messy. These same monkeys also wrote some sloppy Java code which they attributed to their Perl mentality. Not fair. I agree that aprticular criticism of Perl is daft, but to be fair much idiomatic Perl can be pretty obscure. There are all kinds of 'magic' default behaviours that you have to learn, and actualy that's what I assumed this post would be about from the title. Yes actualy Perl does have magic powers ($_, the polymorphing behaviour of <> in while loops, etc) but this makes it less comprehensible, not more. I'm writing some Perl again after several years using Python and while it's interesting and fun the gotchas are getting me all over again. Come on Perl 6, we need you! @Simon Hibbs: :) Lisp Scheme is the answer Hey chromatic. :-) @Luca, AMEN to that! Due to it's XML I would add XSLT to the mix as well, though with its roots planted firmly in DSSSL, one could argue them to be *very* similar in this regard. Lisp Scheme? Lisp is not Scheme, and Scheme is not Lisp, so are you speaking about some wondrous to-be-invented language then? Announcing the new ultimate obscure novelty language, Thcheme! You're right that no programming language can possibly enforce things like good naming standards. However, programming *communities* can, and features of the language can drive the community.. Good programmers will write good code in any language. Language is just a tool and there's nothing wrong if it happens to be powerful. Perl is powerful because of all the 'shortcuts' and 'magic' variables. Of all professions, only programmers can come up with the absurd argument that they wouldn't use a tool because it's so darn flexible and powerful !! It might be helpful to correlate artificial languages to natural languages.. I used to have a sign over my desk that said: "An Experienced Programmer can write FORTRAN in any language." It's the PERSON who writes good code, the language just provides tools that are either easy or not-so-easy to use. Don't forget the old quote: "the determined Real Programmer can write Fortran programs in any language" - it's still valid! (from "Real Programmers Don't Use Pascal",) I believe Flon's Axiom applies here: "There does not now exist, nor will there ever exist, a programming language in which it is the least bit hard to write bad code." I was going to comment on this article, but then realized that everything I wanted to say I said in comment on your other article. Ruby has magical powers though, just in case you wondered. "Technically, they're APES!"
http://www.oreillynet.com/onlamp/blog/2007/05/does_your_programming_language.html
crawl-002
refinedweb
658
63.19
Hey y'all! I got a little question. I have a 50 sec sound-file that I want to loop in my Flash application. Actionscript 3 is what I'm using. Thanks, Fred I'm not 100% sure on how this works in Flash, but the classes should be the same: import flash.net.URLRequest; import flash.media.Sound; var url:URLRequest = new URLRequest("sound.mp3"); var snd:Sound = new Sound(url); snd.play(0, 0); The Sound.play method takes three arguments. The first two are 1) start time and 2) loops. If you want to have it loop indefinitely you could create an event listener for when the sound stops and call a function to play the sound again.
https://www.sitepoint.com/community/t/how-do-i-loop-a-sound-file-using-actionscript-3/4345
CC-MAIN-2017-09
refinedweb
121
78.75
Suppose we know about one Multiplication Table. But could we find out the k-th smallest number quickly from the multiplication table? So if we have to height m and the length n of a m * n Multiplication Table, and one positive integer k, we have need to find the k-th smallest number in this table. So if m = 3 and n = 3 and k is 6, then the output will be 4., this is because the multiplication table is like − 6th smallest element is 4 as [1,2,2,3,3,4,6,6,9] To solve this, we will follow these steps − Let us see the following implementation to get better understanding − #include <bits/stdc++.h> using namespace std; class Solution { public: int ok(int m, int n, int x){ int ret = 0; for(int i = 1; i <= n; i++){ int temp = min(x / i, m); ret += temp; } return ret; } int findKthNumber(int m, int n, int k) { int ret = -1; int low = 1; int high = m * n ; while(low <= high){ int mid = low + (high - low)/ 2; int cnt = ok(m, n, mid); if(cnt >= k){ high = mid - 1; ret = mid; }else low = mid + 1; } return ret; } }; main(){ Solution ob; cout << (ob.findKthNumber(3,3,6)); } “2*” 4
https://www.tutorialspoint.com/kth-smallest-number-in-multiplication-table-in-cplusplus
CC-MAIN-2021-49
refinedweb
211
61.09
What Is NTFS? Updated: March 28, 2003 Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 What Is NTFS? In this section A file system is a part of the operating system that determines how files are named, stored, and organized on a volume. A file system manages files and folders, and the information needed to locate and access these items by local and remote users. Microsoft Windows Server 2003 supports both the FAT and NTFS file systems. NTFS allows you to gain the maximum benefits for the needs of today’s enterprise business environments from Windows Server 2003, such as increased security, more robust and reliable performance, as well as a design for greater storage growth, features not found in FAT. Common NTFS Scenarios This section describes a few scenarios in which NTFS should be used as the file system on a server running Windows Server 2003. Increasing reliability NTFS uses its log file and checkpoint information to restore the consistency of the file system when the computer is restarted in the event of a system failure. In the event of a bad-sector error, NTFS dynamically remaps the cluster containing the bad sector and allocates a new cluster for the data, as well as marking the cluster as bad and no longer using it. For example, by formatting a POP3 mail server with NTFS, the mail store can offer logging and recovery. In the event of a server crash, NTFS can recover data by replaying its log files. Increasing security NTFS allows you to set permissions on a file or folder, and specify the groups and users whose access you want to restrict or allow, and then select the type of access. NTFS also supports the Encrypting File System (EFS) technology used to store encrypted files on NTFS volumes. Any intruder who tries to access your encrypted files is prevented from doing so, even if that intruder has physical access to the computer. For example, a POP3 mail server, when formatted with an NTFS file system, provides increased security for the mail store, security that would not be available should the server be formatted with the FAT file system. Supporting large volumes NTFS allows you to create an NTFS volume up to 16 terabytes using the default cluster size (4 KB) for large volumes. You can create NTFS volumes up to 256 terabytes using the maximum cluster size of 64 KB. NTFS also supports larger files and more files per volume than FAT. In addition, NTFS manages disk space more efficiently than FAT by using smaller cluster sizes. For example, a 30-GB NTFS volume uses 4-KB clusters. The same volume formatted by using FAT32 uses 16-KB clusters. allow you to mount a volume at any empty folder on a local NTFS volume if you run out of drive letters or need to create additional space that is accessible from an existing folder. Using features available only in NTFS NTFS has a number of features that are not available if you are using a FAT file system. These include: - Distributed link tracking. Maintains the integrity of shortcuts and OLE links. You can rename source files, move them to NTFS volumes on different computers within a Windows Server 2003 or Windows 2000 domain, or change the computer name or folder name that stores the target without breaking the shortcut or OLE links. - Sparse files. Large, consecutive areas of zeros. NTFS manages sparse files by tracking the starting and ending point of the sparse file, as well as its useful (non-zero) data. The unused space in a sparse file is made available as free space. - NTFS change journal. Provides a persistent log of changes made to files on a volume. NTFS maintains the change journal by tracking information about added, deleted, and modified files for each volume. - Hard links. of the hard links reference the same file, applications can open any of the hard links and modify the file. Using Windows Server 2003 features that require NTFS Windows Server 2003 includes a number of features that require NTFS as the file system. enables you to group shared folders located on different servers logically by transparently connecting them to one or more hierarchical namespaces. -. If the volume is not formatted with the NTFS file system, these Windows Server 2003 features will not be available. Note - Although NTFS is the preferred file system for hard disks, NTFS cannot be used on removable media. Instead, Windows Server 2003 uses FAT12 for formatting floppy disks, and FAT32 for formatting flash media and DVD-RAM discs. Operating System and NTFS Compatibility NTFS is not supported on versions of Microsoft Windows earlier than Windows NT 4.0 and Windows 2000 Professional or MS-DOS. The table Operating System and NTFS Compatibility shows which operating systems support NTFS. Operating System and NTFS Compatibility Note - Computers running Windows NT 4.0 require Service Pack 4 or later to access NTFS volumes previously mounted by Windows 2000, Windows XP, or Windows Server 2003. Dependencies on Other Technologies NTFS depends on the following technologies: Basic Disks and Volumes Basic disks and basic volumes are the storage types most often used with. Dynamic Disks and Volumes Dynamic disks can use the master boot record (MBR) or GUID partition table (GPT) partitioning scheme. All volumes on dynamic disks are known as dynamic volumes. Dynamic disks were first introduced with Windows 2000 and provide features that basic disks do not. Related Technology NTFS is related to the following technology: FAT file system The File Allocation Table (FAT) file system is an older file system that relies on an allocation table to keep track of files and folders on a volume. Windows Server 2003 supports both FAT16 and FAT32 file systems.
https://technet.microsoft.com/en-us/library/cc778410(v=WS.10).aspx
CC-MAIN-2017-30
refinedweb
970
60.04
Unary folds are used to fold parameter packs over a specific operator. There are 2 kinds of unary folds: Unary Left Fold (... op pack) which expands as follows: ((Pack1 op Pack2) op ...) op PackN Unary Right Fold (pack op ...) which expands as follows: Pack1 op (... (Pack(N-1) op PackN)) Here is an example template<typename... Ts> int sum(Ts... args) { return (... + args); //Unary left fold //return (args + ...); //Unary right fold // The two are equivalent if the operator is associative. // For +, ((1+2)+3) (left fold) == (1+(2+3)) (right fold) // For -, ((1-2)-3) (left fold) != (1-(2-3)) (right fold) } int result = sum(1, 2, 3); // 6
https://riptutorial.com/cplusplus/example/8931/unary-folds
CC-MAIN-2021-39
refinedweb
109
88.02
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.1) Gecko/20100101 Firefox/10.0.1 Build ID: 2012020800 Steps to reproduce: With Firefox 10, and possibly before, default applications are selected by Firefox using /usr/share/applications/mimeinfo.cache, however, this list is a system generated list in random order and therefor does not reflect the user's preferences of which application should be used to open a file. Actual results: When I click on a pdf (for example), I want to have the option to open it in my default application, in my case in okular, and not in gimp. However, gimp is presented as the only choice because it happens to be the first in the row in /usr/share/applications/mimeinfo.cache. Selecting an other application is not very user friendly. Expected results: Instead of using the system wide cache Firefox should use $HOME/.local/share/applications/defaults.list or $HOME/.local/share/applications/mimeapps.list to honor the user's preferences, and if nothing is found there, then /usr/share/applications/defaults.list should be used instead of /usr/share/applications/mimeinfo.cache *** Bug 727425 has been marked as a duplicate of this bug. *** *** Bug 568218 has been marked as a duplicate of this bug. *** We don't do anything with mimeinfo.cache directly. We just call into gnome-vfs using gnome_vfs_mime_get_default_application Is that function doing the wrong thing on your machine for some reason? I don't know, I don't use gnome, but some gnome libs are installed (Opensuse). But I use apparmor and when I blocked access to /usr/share/applications/mimeinfo.cache in the hope some other way was used, there was not even a default app shown, so I am quite sure it is used by firefox somehow. Besides, before upgrading from 9 to 10 I didn't have a problem with this behavior, it worked ok, although in thunderbird, which is of the same version all the time, it started in 8 or so that I suddenly had chromium as the only option to open url's which at the time I didn't trace back to this problem. As for finding the default app, wouldn't that be xdg-mime query default ? And if there is a need to use mimeinfo.cache, why not have all apps listed there in the dropdown list so the user can easily select? Now a user has to know which executable he needs to open something, not always easy. I agree with Dutch Kind that this should be handled on a desktop agnostic manner like xdg-mime. I have to add that we had this problem since at least Firefox 3.6.x on KDE. Possible solutions in order of preference: 1. make this work in a desktop agnostic manner (via xdg) 2. use the GNOME application order in the correct way (not via the cache file) This first solution seems better because it would be up to the distribution to ensure that xdg* works well, whereas the second one may reflect the good or bad behaviour of GNOME. Usage examples for xdg-mime query: [gustavo@localhost ~]$ xdg-mime query default application/pdf AdobeReader.desktop [gustavo@localhost ~]$ xdg-mime query default image/jpg gwenview.desktop [gustavo@localhost ~] > As for finding the default app, wouldn't that be xdg-mime query default xdg-mime postdates the creation of this code. Bug 296443 covers using it. Again, we're not actually using mimeinfo.cache ourselves. It's your GNOME configuration that's using it. > Besides, before upgrading from 9 to 10 I didn't have a problem with this behavior That's quite odd. I don't think this code changed from 9 to 10. Would you be willing to use nightly builds to find when the behavior changed for you? > 2. use the GNOME application order in the correct way (not via the cache file) Again, we are not doing anything with the cache file ourselves. We're just calling the official "get the app for a MIME type" GNOME APIs. First g_app_info_get_default_for_type and then if GIO is not around gnome_vfs_mime_get_default_application. You can see the code for yourself right here: I'd like to focus on figuring out why there was a behavior change from 9 to 10 on Dutch Kind's machine here, though. _Something_ weird is going on there. Maybe I first noticed the change by some update after which update-mime-database was called. A search on internet learned that the order of this cache file is randomly generated, although I understand it is not called directly. So I would say the problem has been there dormant and showed up after my cache was rebuilt. On thunderbird I noticed it before after a distro upgrade, possibly because chromium was now included in the opensuse build, which also resulted in a different cache. Just coincidence that I only discovered it now. OK, in that case it just sounds like the real problem is we're asking GNOME for the information and your GNOME is effectively misconfigured.... Is this then effectively a duplicate of bug 296443? It may be a GNOME problem but then it happens on at least two different distributions: Suse (cf Dutch Kind's report) and Mandriva (where I had the same problem). Whether or not this is a duplicate of bug 296443 depends on what will be done regarding that one. If bug 296443 is to be fixed (ie, xdg is adopted) then I guess this one won't be much relevant. Bug 296443 should probably be WONTFIX after bug 713802 lands (In reply to Chris Coulson from comment #10) > Bug 296443 should probably be WONTFIX after bug 713802 lands But, is that good? Why tie the default associations to GNOME if we can have a desktop independent way of achieving that? Using gio is a desktop independent way of doing that and isn't tied to GNOME. It's a hard dependency of gtk anyway. It implements everything we want, but it actually works (whereas xdg-mime and friends are pretty much unmaintained and are known for not being very reliable) I'm still not sure what's causing your current problem though. This has worked fine in the past with gnomevfs Ok, did some more investigations, when I copy the corresponding line containing for example pdf to $HOME/.local/share/applications/mimeapps.list then those settings are honored by firefox, when this is not found in this file then only the mimeinfo.cache is used. So, yes, firefox honors the user's settings if it is in that list. Still, it would be nice to use something more desktop independent. Chromium uses xdg and that works fine for me, no extra configuring, it takes my user's kde's default apps without a problem. The problem with the gnome way is that when you don't have gnome installed you have to add all the required apps to this mimeapps.list manually because KDE only writes those apps to this list that are manually changed by the user's kde configuration when this is different from the system kde default. xdg has no problems in this respect. GIO supports the shared mime specification, and so did GnomeVFS. If you run into problems with KDE, it should be a problem of KDE. I've seen xdg-mime does some KDE specific operations. Hi guys. I know nothing about your future GIO implementation but I hope it will support feature as in next example: $ xdg-mime query default x-scheme-handler/xmpp psi-plus.desktop and about default applications. for some reason FF does not read neither ~/.local/share/applications/mimeapps.list nor /usr/share/applications/mimeinfo.cache on my system (gentoo gnome3). So I always choose applications manually for each new mime. moreover FF does not remember my previous choice and may suggest wrong app (looks like it just doesn't try to guess mime and chooses last selected app for any file). xdg-open works fine though. I checked XDG_* vars and they are correct I love FF but chromium works much better in this regard. It's interesting, g_app_info_get_default_for_type() by itself does take mimeapps.list into account. As an experiment, I have created ~/.local/share/applications/mimeapps.list with the following line: inode/directory=kde4-gwenview.desktop; And used the following sample program: g_type_init (); GAppInfo *def; def = g_app_info_get_default_for_type ("inode/directory", TRUE); printf("EXEC: %s\n",g_app_info_get_executable(def)); It prints "gwenview". However, FF doesn't seem to take mimeapps.list into account and uses associations from mimeinfo.cache - e.g., when I click "Open containing folder" menu item for the downloaded file, the folder is opened using Dolphin (which is the default system-wide association), not Gwenview. I do see g_app_info_get_default_for_type() is called in FF the code, so something strange is going on here, indeed. And this makes FF quite inconvenient for KDE users, since KDE stores file associations in mimeapps.list; as a result, FF doesn't use associations set in the KDE control Center. Denis, would you be willing to just step through the relevant Firefox code on your system and see what g_app_info_get_default_for_type is returning, and whether it's even reached? It turns out that g_app_info_get_default_for_type is not reached in my system, indeed. When do_GetService() is called in nsGNOMERegistry routines, it doesn't detect giovfs and falls back to gnomevfs. So it is gnome_vfs_mime_* functions that do something wrong, but probably there is no need to bother about these obsolete routines. The question is - why FF doesn't detect giovfs? I am not a gio expert, maybe something is wrong with system environment/configuration, not with FF? I am using ROSA 2012 Marathon with FF 10. You mean the do_GetService call returns null? The most likely reason for that is that MOZ_ENABLE_GIO ends up not being defined. It looks like GIO is only enabled if the --enable-gio configure option is passed in when compiling.... See bug 713802 for why it's not defaulted on. But at least people who are compiling themselves can deal. :( Indeed, --enable-gio solves the problem in my case, thanks a lot! with --enable-gio everything works as expected. I hope it will become default soon. FWIW the problem seems to have gotten worse recently; GIMP 2.8 recently landed in Debian Testing, and somehow it got prioritized over Evince for PDF documents, which means every single PDF now opens with Gimp instead of Evince. I tried to override this in Preferences > Applications, but neither "PDF File" nor "Portable Document Format" entries had any effect. I wonder how that pane is supposed to work! :) I've been hit by this recently, but for me the problem is that PDFs are being handled by Inkscape. My mimeinfo.cache file (the file that is being used by GIO and therefore Firefox) contains the following line: application/pdf=inkscape.desktop;evince.desktop;zzz-gimp.desktop; The "zzz-" thing is a hack added by my distribution - see. Oddly enough, the ordering here does not actually seem to be alphabetical, because inkscape is listed first. This is why Firefox (or GIO, more precisely) has been trying to open PDFs in Inkscape. However, "xdg-mime query default application/pdf" IS alphabetical, because it does not use the mimeinfo.cache file. xdg-mime is actually a shell script, and it finds applications that support PDFs by grepping for the mime type in all of the .desktop files. The list of files passed to grep is alphabetical, so the list of files returned by grep is also alphabetical and starts with evince.desktop. Therefore, evince.desktop is what xdg-mime thinks is the default. The core problem here is of course that there is no priority information in the .desktop files. The relevant freedesktop spec specifically forbids this - see. The result is random priorities when there are no explicit settings. No doubt this policy was created with good intentions, but the fact is that there are legitimate reasons to encode priorities in the .desktop files. Evince should clearly have higher priority than GIMP or Inkscape for PDFs. The "the first one that I see" policy used by GIO and xdg-mime when there is no explicit user preference is always going to suck. Adding an explicit user preference worked around the problem for me. I don't use KDE or GNOME, so I don't have access to any of the fancy graphical configuration utilities, but the following command: xdg-mime default evince.desktop application/pdf ...seems to have been effective. I would personally argue against using the xdg-utils in Firefox, because they are shell scripts (seriously, wtf?) and because they don't seem to be inherently smarter aside from purely by chance returning the right answer in my case. Having the same problem, KDE. Looks like this is GIO's fault, but also KDE's fault, but also I don't know what I'm talking about but here's what I've got. Firefox opens PDFs in Inkscape, which is clearly insane. KDE opens them in Okular. Similarly, Firefox opens directories in Konqueror instead of Dolphin. xdg-mime gets both right: $ xdg-mime query default application/pdf okularApplication_pdf.desktop $ xdg-mime query default inode/directory dolphin.desktop But the first things in the mimeinfo.cache list are Firefox's (and GIO's) choices: application/pdf=inkscape.desktop;gimp.desktop;kde4-okularApplication_pdf.desktop;kde4-active-documentviewer_pdf.desktop; inode/directory=kde4-kfmclient_dir.desktop;kde4-cervisia.desktop;kde4-dolphin.desktop;kde4-filelight.desktop;kde4-gwenview.desktop; Looks like xdg-mime detects if I'm running KDE and uses "ktraderclient", whatever KDE plumbing thing that is. The difference appears to be that KDE respects an "InitialPreference" value within .desktop files -- the highest value wins. I can tell this is a KDE thing because the only files I have with such a setting are built-in KDE 4 apps. Okular is 8, Dolphin is 10. (kfmclient is also 10, so who knows what breaks the tie.) So, naturally, KDE applications will win over everything else unless otherwise specified. GIO, of course, ignores all this nonsense entirely. But, dearest CC list, there *is* a light at the end of the tunnel! You see, KDE *does* correctly write out user-preferred applications to ~/.local, but *only if you actually change them*. So if you want the builtin default of Okular to be your PDF reader, you can do something like this: - Open System Settings > File Associations - Find application/pdf - Select Okular and move it down one space (!) - Click Apply - Select Okular and move it back to the top - Click Apply again If you don't actually make a change, KDE will cleverly assume nothing should be done, despite showing a dialog that clearly says "I'm doing things", progress bar and all. Anyway, doing this for application/pdf and inode/directory has miraculously brought Firefox's opinion on those filetypes in line with mine, and all is well. Not a Firefox bug, not really a GIO bug, arguably not even a KDE bug. No one caused the problem, yet the problem remains. Ah, computers. It's perhaps ironic that this was all caused by exactly the thing comment 23 proposes would fix it: priority in .desktop files. Alex, nice contribution. I was stated on comments 16-20 that GIO wasn't enabled on Firefox by default. But on Ubuntu 12.04 we see after an ldd on libxul.so libgio-2.0.so.0 => /usr/lib/i386-linux-gnu/libgio-2.0.so.0 (0xb3d55000) and the problem still happens. For example, directories open with Gwenview instead of Dolphin because mimeinfo.cache contains this line: inode/directory=kde4-gwenview.desktop;kde4-dolphin.desktop;kde4-kfmclient_dir.desktop;kde4-kdesvn.desktop; So in the end is this a GIO problem or a Firefox problem? Comments 16–20 are ancient; bug 713802, to enable GIO by default, was marked FIXED over a year ago. tl;dr of comment 24: if you're on KDE, and Firefox isn't opening e.g. PDFs or directories with Dolphin or Okular, it's neither a Firefox nor a GIO problem. KDE supports a custom priority property in .desktop files, and built-in KDE apps have it set by default so they win out over everything else /if you haven't specified otherwise/. (Without this behavior, even KDE could start opening PDFs in Inkscape, because you never explicitly asked to use Okular. I'm not sure how GNOME et al. avoid a similar problem.) Meanwhile, GIO doesn't understand any of this because the property is non-standard. If you manually edit your file associations, KDE *will* write out your choices in the standard way, and GIO will obey them. Alex, Firefox is still honoring mimeinfo.cache which is the title of this bug. Even forgetting about any KDE specific support Firefox should honor /usr/share/applications/defaults.list so that a newly created user would have the system defined apps in line with his firefox settings. And a system administrador would know where to customize that on a system-wide manner. You found a workaround that means doing things on a user-by-user basis - it is possible but brings manual work. You say that Firefox currently uses GIO. If that is right then GIO has a bug because mimeinfo.cache should NOT be used as it lists applications in a random order and changes when packages are installed (therefore it can't be customized). So the question is who to blame for mimeinfo.cache being used instead of defaults.list or xdg. Thanks a lot, now I'm reading GIO source code. :) I removed the application/pdf line from my local mimeapps.list and added a junk entry (VLC, certainly not listed in mimeinfo.cache) to the system-wide defaults.list, and now Firefox wants to open PDFs in VLC. So as far as I can tell, this is all working correctly; the only problem *I* had was that KDE doesn't write out its initial defaults in a standard way in the first place. If you have a defaults.list and it's not working, all I can think is that you also have another file that's overriding it, since it's checked almost-last. The ordering is: - [Default Applications] in mimeapps.list (GNOME-specific) - [Added Associations] in mimeapps.list - [Removed Associations] in mimeapps.list - defaults.list (GNOME-specific) - mimeinfo.cache And each of these files is consulted in order, within each of the directories ~/.local/share/applications, /usr/share/applications, /usr/local/share/applications. Maybe you have some junk in one of those places? You might want to just put your system-wide configuration in an [Added Associations] section in /usr/share/applications/mimeapps.list anyway, which I believe any DE will understand. If you want to double-check what GIO thinks is going on without going through Firefox, a pretty easy way is to install the gobject bindings for Python (python-gobject on Ubuntu) and run: python -c 'import gio; print gio.app_info_get_all_for_type("application/pdf")' One other (unlikely) possibility is that you have application/pdf configured as a subtype, and are being bitten by this bug, fixed after 12.04 was released:. Check for a sub-class-of element in your /usr/share/mime/application/pdf.xml. Anyway, given the above and my own experimentation, I'm reasonably sure that this is not a bug in Firefox. If anything it's a bug in freedesktop; admits that there's no per-desktop way to specify defaults, and though they call the status quo desktop-specific, it's really toolkit-specific. Firefox is based on GTK+, so it uses the GNOME-specific API. Alex, You have a point here. After new tests on my system, based on Ubuntu 12.04 with KDE, where: - there is a defaults.list - there is a mimeinfo.cache - there is NO mimeapps.list - a new user has no specific settings on .local it seems that defaults.list is currently honored and Firefox (GIO perhaps) falls back to mimeinfo.cache if the application defined on defaults.list does not exist on the system. I had nautilus on defaults.list for inode/directory and Firefox was falling back to gwenview which is the first entry on mimeinfo.cache. I also tested changing the default application for PDF on defaults.list and it worked. Can someone else confirm that defaults.list is now correctly honored? --------------------------------------- Off topic: your python one liner does not behave has expected user@1204-IGAC:~$ python -c 'import gio; print gio.app_info_get_all_for_type("application/pdf")' [<gio.unix.DesktopAppInfo at 0xb71f68ec: Okular>, <gio.unix.DesktopAppInfo at 0xb71f6aa4: GIMP Image Editor>, <gio.unix.DesktopAppInfo at 0x9f35eb4: MuPDF>, <gio.unix.DesktopAppInfo at 0x9f35edc: Adobe Reader 9>, <gio.unix.DesktopAppInfo at 0x9f35f04: PDF Mod>] user@1204-IGAC:~$ xdg-mime query default application/pdf okularApplication_pdf.desktop defaults.lst has: application/pdf=kde4-okularApplication_pdf.desktop mimeinfo.cache has: application/pdf=gimp.desktop;mupdf.desktop;kde4-okularApplication_pdf.desktop;acroread.desktop;evince.desktop;pdfmod.desktop; (In reply to Eevee (Alex Munroe) [:eevee] from comment #26) > tl;dr of comment 24: if you're on KDE, and Firefox isn't opening e.g. PDFs > or directories with Dolphin or Okular, it's neither a Firefox nor a GIO > problem. This is not exactly the case, especially considering opening directories with dolphin. It turns out that FF (tested on 29.0.1 on Gentoo) when built with dbus support has a wrong way to do it. It first calls (instead of checking if exists first and then calling) org.freedesktop.FileManager1 on the session bus and thus if nautilus is installed it is started as "/usr/bin/nautilus --no-default-window" and in this way FF doesn't honor the XDG at all. So for now a workaround is either to delete the /usr/share/dbus-1/services/org.freedesktop.FileManager1.service or change its Exec to "dolphin" (if you are under KDE, though that may have implications 'cause dolphin doesn't register any org.freedesktop.FileManager1) There is patched version of firefox available in AUR for KDE environment (Arch Linux users). It has better integration for KDE than the default and official firefox build. I compiled it with provided patches and, after all, file associations are finally as they should be in the first place - out-of-the-box, immediately. Running KDE 4.14.4. I used to run Firefox from official Arch Package repos. File associations were just a mess at that time. TL;DR: firefox should do exactly the same as `xdg-mime query default` ---- I don’t really care how GNOME does things, and I don’t care about GIO. If $ xdg-mime query default application/pdf returns okularApplication_pdf.desktop Then I consider my system appropriately configured to open pdf files with Okular, and firefox not respecting this is a bug. ---- To help debug this: The above query worked like displayed even while my ~/.local/share/applications/mimeapps.list and ~/.config/mimeapps.list contained no entry for application/pdf Which probably means that something in the XDG spec other than mimeapps.list defines the order of default applications. Firefox however opened PDFs with Inkscape. Manually adding this line to one of the mimeapps.list files: application/pdf=okularApplication_pdf.desktop; …made firefox open PDFs with Okular A possible workaround is: rm /usr/share/applications/mimeinfo.cache touch /usr/share/applications/mimeinfo.cache chmod a-w /usr/share/applications/mimeinfo.cache This file together with the fact that desktops/distributions don't respect the freedesktop standards seems to cause more harm than good anyway.
https://bugzilla.mozilla.org/show_bug.cgi?id=727422
CC-MAIN-2016-26
refinedweb
3,941
57.98
Installation and start dotPeek is available for download in two distributions: as a part of dotUltimate installer and as portable versions for 32-bit and 64-bit processors. Both distributions are functionally equivalent. The installer-based distribution is a safe bet if you want to use dotPeek on a single computerbox on the page of dotPeek options. Open assembly files to the Assembly Explorer window. Browse. View source code Assembly code is presented as C# in the Code Viewer, which displays source or decompiled code in multiple tabs. Code syntax is highlighted ReSharper-style, with distinctive colors for properties, types, accessors, and methods. When you put the caret at a delimiter, be it a brace or parenthesis, it gets highlighted along with its counterpart, bringing focus to the scope of the particular code block you're in. If you need to copy some code, you can select the desired piece with Extend/shrink selection shortcuts Control+W/Control+Shift+W or with the Select containing declaration shortcut Control+Shift+OemOpenBrackets. To learn more about symbols without opening their declarations, use the quick documentation command Control+Q. Navigate and search There are plenty of ways to search code with dotPeek. In most of the cases, you can use the Search Everywhere command Control Control. To see all navigation commands available for the current caret position, use the Navigate To command . Another command that you can use after you found the desired symbol — Locate in Assembly Explorer Alt+Shift+L - will help you understand to which assembly, namespace, type, and so on the symbol belongs to..
https://www.jetbrains.com/help/decompiler/dotPeek_Getting_Started.html
CC-MAIN-2022-21
refinedweb
264
51.99
GzipSwift GzipSwift is a framework with an extension of Data written in Swift. It enables compress/decompress gzip using zlib. - Requirements: OS X 10.9 / iOS 8 / watchOS 2 / tvOS 9 or later - Swift version: Swift 5.0.0 Usage import Gzip // gzip let compressedData: Data = try! data.gzipped() let optimizedData: Data = try! data.gzipped(level: .bestCompression) // gunzip let decompressedData: Data if data.isGzipped { decompressedData = try! data.gunzipped() } else { decompressedData = data } Installation Manual Build - Open Gzip.xcodeproj on Xcode and build Gzip framework for your target platform. - Append the built Gzip.frameworkto your project. - Go to General pane of the application target in your project. Add Gzip.frameworkto the Embedded Binaries section. import Gzipin your Swift file and use in your code. Carthage GzipSwift is Carthage compatible. You can easily build GzipSwift adding the following line to your Cartfile: github "1024jp/GzipSwift" CocoaPods GzipSwift is available through CocoaPods. To install it, simply add the following line to your Podfile: pod 'GzipSwift' Swift Package Manager Install zlib if you haven't installed yet: $ apt-get install zlib-dev Add this package to your package.swift. If Swift build failed with a linker error: - check if libz.so is in your /usr/local/lib - if no, reinstall zlib as step (1) - if yes, link the library manually by passing '-Xlinker -L/usr/local/lib' with swift build License © 2014-2019 1024jp GzipSwift is distributed under the terms of the MIT License. See LICENSE for details. Github Help us keep the lights on Dependencies Used By Total:
https://swiftpack.co/package/1024jp/GzipSwift
CC-MAIN-2019-18
refinedweb
254
60.82
Translating HAKMEM 175 into C… A couple of years back, I made note of HAKMEM 175, a nifty hack by Bill Gosper that finds the next higher value that has the same number of ’1′ bits as the input. brainwagon » Blog Archive » HAKMEM 175. If I bothered to convert it to C, I didn’t scribble it down, so I thought I’d do it here. #include <stdio.h> /* _ _ _ _ ____ __ ___ __ __ _ ____ ___ * | || | /_\ | |/ / \/ | __| \/ / |__ | __| * | __ |/ _ \| ' <| |\/| | _|| |\/| | | / /|__ \ * |_||_/_/ \_\_|\_\_| |_|___|_| |_|_|/_/ |___/ * * A straightforward implementation of Gosper's algorithm for * determining the next higher value which has the same number of 1 * bits. This is useful for enumerating subsets. * * Translated by Mark VandeWettering. */ ; } main() { unsigned int x ; int i ; x = 3 ; do { printf("0x%x\n", x) ; x = hakmem175(x) ; } while (x != 0) ; } I was surprised to find that there is actually a bug in the published memo. The last instruction should obviously be an OR of A and D, not A and C as listed in the published memo. After discovering the error, I seeked to find mention of it onlne somewhere. This page lists the corrected PDP-10 assembly code without comment. It also suggests a couple of optimizations: you can save one register and one instruction by variable renaming, and with a bit of work, you can avoid the integer divide, which is probably a good thing on pretty much any architecture you are likely to use. Addendum: I thought it might be fun to show an illustration of how this could be used. At one point, while writing my checkers program, I thought about enumerating all possible positions with given numbers of checkers on each side. Counting them is actually rather easy, but enumerating them is a bit trickier, but using the trick of HAKMEM175, it becomes somewhat easier. #include <stdio.h> #include <stdlib.h>; } /* * Here's an interesting use (okay, semi-interesting use) of the * above function. Let's use it to enumerate all the potential * positions in checkers which have two checkers (not kings) for * each side. */ #define WHITEILLEGAL (0x0000000F) #define BLACKILLEGAL (0xF0000000) int main (int argc, char *argv[]) { unsigned int B, W; int nb = atoi (argv[1]); int nw = atoi (argv[2]); if (nb < 1 || nb > 12 || nw < 1 || nw > 12) { fprintf (stderr, "usage: checkers nb nw\n"); fprintf (stderr, " 1 <= nb, nw <= 12\n"); exit (-1); } B = (1 << nb) - 1; for (;;) { W = (1 << nw) - 1; for (;;) { while (W != 0 && (W & (B | WHITEILLEGAL))) W = hakmem175 (W); if (W == 0) break; /* output B, W, we could be prettier */ printf ("%x %x\n", B, W); W = hakmem175 (W); } B = hakmem175 (B); while (B != 0 && (B & BLACKILLEGAL)) B = hakmem175 (B); if (B == 0) break; } return 0; } Using this, you can easily enumerate the 125,664 positions with 2 checkers on each side, the 8,127,272 positions with 3 checkers on each side, and the 253,782,115 positions with 4 checkers on each side. By then, the code gets a little slow: it has to step over all the positions where there is a conflict in B and W placement, so it gets pretty slow. Still, it works reasonably well, and as such, might serve as part of the inner loop of a retrograde endgame analysis project. Recent Comments
http://brainwagon.org/2010/09/08/translating-hakmem-175-into-c/
CC-MAIN-2014-15
refinedweb
565
64.95
I’ve wanted to understand more about the process of how source code gets compiled and packaged with its dependencies into a deployable artifact. I’m starting with C, since most things either follow the C way of doing things or get compared to it. I’d like to start filling in some gaps in my knowledge like: - What are the steps of building a C program? A compiler? Linker? What else? - What are those .o files that come out? - How does source code depend on other files? How does the compiler package dependencies? - What are libraries? How are they different, how do I make them? Dynamic vs static? File types When dealing with C, we have four different types of files: Source code ( *.cfiles) Source files contain function definitions Header files ( *.hfiles) If we don’t define a function signature before using it, the compiler will complain. We include header files, which contain function declarations, when we want source files to reference externally defined functions. Object files ( *.ofiles) Object files are the output of a compiler. They are contain function definitions in binary form (machine code), but haven’t been packaged into an executable yet and may contain references to symbols. Binary executables Executables are the output of the linker, which links a number of object files together to form a file that can be directly executed. Sometimes just called binaries. But wait, here’s another file type for free! Libraries ( .afor static libraries, *.sofor dynamic libraries) Libraries are just object files joined together into one file. Conceptually, they do they same thing as object files: they contain binary forms of function definitions. They can be linked with other object files and libraries to form a binary. Static libraries are packaged into the executable at compile time like other object files. Dynamic libraries let us defer loading until runtime. Building our source code Building our code is the process of taking our source code to an executable. Without digging into the internals of a compiler, for C this involves: Preprocessor (source/header files to expanded source) The preprocessor is responsible for transforming source code as indicated by the preprocessor directives. For example the preprocessor replaces the line #include "header.hwith the entire contents of header.h. #defineis another common directive used for macros and constants, where the preprocessor can replace all instances of a defined keyword. The compiler invokes the preprocessor automatically before it runs, so all it sees are the processed source files. Compiler (expanded source -> object files) With the processed source code, the compiler turns source code into binary versions of the source code, the object files. Object files can be packaged together into a library by a separate tool. Linker (object files -> executable) The linker takes object files and libraries and combines them into an executable, resolving any external symbols in the process. The build C process is actually even simpler logically than the file types we mentioned. Header files are just source files that get preprocessed into other files, not a separate concept. By convention, header files contain just function declarations, but you can include anything a source file can, and people commonly do (like with single header file libraries). Libraries, again, are just object files packaged together. A library is like an uncompressed zip or tar archive and I like to think of it as a bunch of object files cat‘d together with an index at the top. So really what we have are source files (source and header files), intermediates (object files and libraries) and the final target (binaries). You may consider a library the final target of your build depending on if you’re building an executable or not. Building in action Let’s check out how this maps to the simplest of examples: // main.c int main() { return 0; } # compile main.c into an object file, main.o gcc -c main.c # link main.o into an executable gcc main.o -o main Cool! We’ve compiled a source file into an object file ( main.o) and then linked it into an executable ( main). Single source dependency Okay now let’s add a source file dependency: // add.h int add(int a, int b); // add.c int add(int a, int b) { return a + b; } // main.c #include "add.h" int main() { return add(0,1); } We can use gcc -E to see the output of the preprocessor: > gcc -E main # 1 "main.c" # 1 "<built-in>" # 1 "<command-line>" # 1 "/usr/include/stdc-predef.h" 1 3 4 # 1 "<command-line>" 2 # 1 "main.c" # 1 "add.h" 1 int add(int a, int b); # 3 "main.c" 2 int main() { return add(0,1); } I haven’t dug into what all the output is, but we can see that the preprocessor copies add.h into main.c as we thought. However, using the same compile commands fails: > gcc -c main.c > gcc main.o -o main main.o: In function `main': main.c:(.text+0xf): undefined reference to `add' collect2: error: ld returned 1 exit status Let’s run nm on main.o to see what symbols are used. # nm shows symbols in a object file # man nm shows all the symbol types # briefly T = symbol is in the code section, U = undefined > nm main.o U add 0000000000000000 T main Here we see add is undefined, which makes sense since we never compiled the add function to binary. We need to go through the same process to compile add.c into an object file and then link it with main.o. # compile object files gcc -c main.c gcc -c add.c # link gcc main.o add.o -o main Building our own static library Now let’s add mult.h/c and build our own static library. // mult.h int mult(int a, int b); // mult.c int mult(int a, int b) { return a * b; } Before we would have to do something like: # compile object files gcc -c main.c gcc -c add.c gcc -c mult.c # link gcc main.o add.o mult.o -o main But now we will package add.o and mult.o into a single library: gcc -c main.c gcc -c add.c gcc -c mult.c # create library ar rcs libmath.a add.o mult.o # link gcc main.o libmath.a -o main ar creates an archive from our object files and s makes it include an index. Let’s run nm on it: > nm libmath.a Archive index: add in add.o mult in mult.o add.o: 0000000000000000 T add mult.o: 0000000000000000 T mult So it looks like what we expected, it includes an index from symbol to object file and then the contents of each object file. We end up using it exactly the same as an object file when linking. Building our own dynamic library Dynamic libraries (aka shared libraries) do change things a little, they let us defer symbol resolution until runtime. This lets us do cool stuff like hot reloading code, and letting multiple binaries load the same shared library. Continuing from the same example before, our compiling now looks like: # compile object files # -fPIC makes it position independent # positions are relative, so it can be relocated in memory when loaded gcc -c main.c gcc -c -fPIC add.c gcc -c -fPIC mult.c # create library gcc -shared -o libmath.so.1 add.o mult.o # link # -L. adds the current dir to the library search path # you can also use -lmath to link libmath.so gcc main.o -o main -L. -l:libmath.so.1 When running we need to also specify the library search path (where the loader looks for dynamic libraries): # Run > LD_LIBRARY_PATH=. ./main # Show dynamic library dependency resolution > LD_LIBRARY_PATH=. ldd main ... libmath.so.1 => ./libmath.so.1 (0x00007fa023369000) ... Dynamic loading is a pretty big topic of its own, but it still serves the same purpose of resolving symbols like an object file, just with some magic so we can do that after compile time. Unfortunately, this complicates deploying build artifacts since you need to have the library in place with the final binary. Printing and libc We’re going to get a little crazy here and actually output text. This time we’re just going to have main.c but include stdio.h. // main.c #include <stdio.h> // puts int main() { puts("Hello"); } # compile main.c into an object file, main.o gcc -c main.c # link main.o into an executable gcc main.o -o main ./main # outputs: Hello We never defined stdio.h or puts but everything works fine. Running gcc -E main.c produces an enormous output but it looks like stdio.h is coming from somewhere. Lets run nm on the object file and the binary to see the symbols in each. > nm main.o 0000000000000000 T main U puts > nm main ... U __libc_start_main@@GLIBC_2.2.5 0000000000400526 T main U puts@@GLIBC_2.2.5 00000000004004a0 t register_tm_clones ... Looks like puts is referenced but not defined in main.o, and nm main points us to GLIBC. Libc is the standard library for c, and glibc is the implementation that gcc includes. It turns out this gets implicitly dynamically link on every build. Running ldd on the gcc output shows us dynamic library dependencies (also called shared objects), confirming gcc is linking more than just our main.o # ldd prints "shared object dependencies" (dynamic libraries) > ldd main linux-vdso.so.1 => (0x00007fff945b0000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0c8d0f7000) /lib64/ld-linux-x86-64.so.2 (0x0000556bcd7fb000) GCC is doing a lot more than just calling the linker ld with ld main.o -o main, we can see it all with gcc -v main.o -o main… it’s a lot. It seems hard to make ld work directly because of all the libraries we need to link against to actually make a C executable. So even for 4 lines of code, we’ve got a lot going on. We found out GCC is doing a lot implicitly to build an executable that we glossed over before. Apparently we need vdso (lib to attempt to use faster hardware instructions for system calls?), libc (standard c library) and ld-linux (dynamic linker/loader). BUT, it does still fall under our mental model. We build main.c into main.o, which has some undefined references. In order to make an executable, we combine our intermediate object files with libc dynamically (and a loader) and if every symbol is resolved, it works! It’s the same as the dynamic library example, with just implicit stuff happening that probably makes reliable building a headache. Wrapping up I’ve learned a lot about C builds from this, and I’m curious to see what other languages do. Thankfully, the mental model of source to object to binary (or library) target is pretty straightforward, even if we ended up doing a lot of work digging into some really simple builds.
http://seenaburns.com/building-c-programs/
CC-MAIN-2018-05
refinedweb
1,847
68.77
Open Source Your Knowledge, Become a Contributor Technology knowledge has to be shared and made accessible for free. Join the movement. It will take approximately 20 minutes to read this article. Python is one very special language. No matter how well known, everyday new cool feature of it is seeing the daylight. In this post, we are going to examine either most special and/or unique features of Python language. Please note that, all answers taken from stackoverflow. So here is the question. What are the lesser-known but useful features of the Python programming language? Argument Unpacking For example: def draw_point(x, y): # do some magic point_foo = (3, 4) point_bar = {'y': 3, 'x': 2} draw_point(*point_foo) draw_point(**point_bar) Very useful shortcut since lists, tuples and dicts are widely used as containers. Braces from __future__ import braces Chaining Comparison Operators >>> x = 5 >>> 1 < x < 10 True >>> 10 < x < 20 False >>> x < 10 < x*10 < 100 True >>> 10 > x <= 9 True >>> 5 == x > 4 True. Decorators Example shows a print_args decorator that prints the decorated function's arguments before calling it: >>> def print_args(function): >>> def wrapper(*args, **kwargs): >>> print 'Arguments:', args, kwargs >>> return function(*args, **kwargs) >>> return wrapper >>> @print_args >>> def write(text): >>> print text >>> write('foo') Arguments: ('foo',) {} foo Default Argument Gotchas / Dangers of Mutable Default arguments >>> def foo(x=[]): ... x.append(1) ... print x ... >>> foo() [1] >>> foo() [1, 1] >>> foo() [1, 1, 1] Instead, you should use a sentinel value denoting "not given" and replace with the mutable you'd like as default: >>> def foo(x=None): ... if x is None: ... x = [] ... x.append(1) ... print x >>> foo() [1] >>> foo() [1] Descriptors When you use dotted access to look up a member (eg, x.y), Python first looks for the member in the instance dictionary. If it's not found, it looks for it in the class dictionary. If it finds it in the class dictionary, and the object implements the descriptor protocol, instead of just returning it, Python executes it. A descriptor is any class that implements the get, set, or delete methods. Here's how you'd implement your own (read-only) version of property using descriptors: class Property(object): def __init__(self, fget): self.fget = fget def __get__(self, obj, type): if obj is None: return self return self.fget(obj) and you'd use it just like the built-in property(): class MyClass(object): @Property def foo(self): return "Foo!" Descriptors are used in Python to implement properties, bound methods, static methods, class methods and slots, amongst other things. Understanding them makes it easy to see why a lot of things that previously looked like Python 'quirks' are the way they are. Raymond Hettinger has an excellent tutorial that does a much better job of describing them than I do. Dictionary default .getvalue It's great for things like adding up numbers: sum[value] = sum.get(value, 0) + 1 Docstring Tests Example extracted from the Python documentation: def factorial(n): """Return the factorial of n, an exact integer >= 0. If the result is small enough to fit in an int, return an int. Else return a long. >>> [factorial(n) for n in range(6)] [1, 1, 2, 6, 24, 120] >>> factorial(-1) Traceback (most recent call last): ... ValueError: n must be >= 0 Factorials of floats are OK, but the float must be an exact integer: """() Ellipsis Slicing Syntax >>> class C(object): ... def __getitem__(self, item): ... return item ... >>> C()[1:2, ..., 3] (slice(1, 2, None), Ellipsis, 3) Enumeration For example: >>> a = ['a', 'b', 'c', 'd', 'e'] >>> for index, item in enumerate(a): print index, item ... 0 a 1 b 2 c 3 d 4 e >>> References: ) for i in foo: if i == 0: break else: print("i was never 0") The "else" block will be normally executed at the end of the for loop, unless the break is called. The above code could be emulated as follows: found = False for i in foo: if i == 0: found = True break if not found: print("i was never 0") Function as iter() argument For instance: def seek_next_line(f): for c in iter(lambda: f.read(1),'\n'): pass The iter(callable, until_value) function repeatedly calls callable and yields its result until until_value is returned. Generator expressions If you write x=(n for n in foo if bar(n)) you can get out the generator and assign it to x. Now it means you can do for n in x: The advantage of this is that you don't need intermediate storage, which you would need if you did x = [n for n in foo if bar(n)] In some cases this can lead to significant speed up. You can append many if statements to the end of the generator, basically replicating nested for loops: >>> n = ((a,b) for a in range(0,2) for b in range(4,6)) >>> for i in n: ... print i (0, 4) (0, 5) (1, 4) (1, 5) import this import this # btw look at this module's source :) Place Value Swapping >>> a = 10 >>> b = 5 >>> a, b (10, 5) >>> a, b = b, a >>> a, b (5, 10) The right-hand side of the assignment is an expression that creates a new tuple. The left-hand side of the assignment immediately unpacks that (unreferenced) tuple to the names a and b. After the assignment, the new tuple is unreferenced and marked for garbage collection, and the values bound to a and b have been swapped. As noted in the Python tutorial section on data structures, Note that multiple assignment is really just a combination of tuple packing and sequence unpacking. List stepping a = [1,2,3,4,5] >>> a[::2] # iterate over the whole list in 2-increments [1,3,5] The special case x[::-1] is a useful idiom for 'x reversed'. >>> a[::-1] [5,4,3,2,1] __missing__items >>> class MyDict(dict): ... def __missing__(self, key): ... self[key] = rv = [] ... return rv ... >>> m = MyDict() >>> m["foo"].append(1) >>> m["foo"].append(2) >>> dict(m) {'foo': [1, 2]} There is also a dict subclass in collections called defaultdict that does pretty much the same but calls a function without arguments for not existing items: >>> from collections import defaultdict >>> m = defaultdict(list) >>> m["foo"].append(1) >>> m["foo"].append(2) >>> dict(m) {'foo': [1, 2]} I recommend converting such dicts to regular dicts before passing them to functions that don't expect such subclasses. A lot of code uses d[a_key] and catches KeyErrors to check if an item exists which would add a new item to the dict. Multi-line Regex In Python you can split a regular expression over multiple lines, name your matches and insert comments. Example verbose syntax (from Dive into) Example naming matches (from Regular Expression HOWTO) >>> p = re.compile(r'(?P<word>\b\w+\b)') >>> m = p.search( '(((( Lots of punctuation )))' ) >>> m.group('word') 'Lots' You can also verbosely write a regex without using re.VERBOSE thanks to string literal concatenation. >>> ... ) >>> print pattern "^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$" Named string formatting >>> print "The %(foo)s is %(bar)i." % {'foo': 'answer', 'bar':42} The answer is 42. >>> foo, bar = 'question', 123 >>> print "The %(foo)s is %(bar)i." % locals() The question is 123. And since locals() is also a dictionary, you can simply pass that as a dict and have % -substitions from your local variables. I think this is frowned upon, but simplifies things.. New Style Formatting >>> print("The {foo} is {bar}".format(foo='answer', bar=42)) Nested list/generator comprehensions [(i,j) for i in range(3) for j in range(i) ] ((i,j) for i in range(4) for j in range(i) )) ) These can replace huge chunks of nested-loop code. New types at runtime >>> NewType = type("NewType", (object,), {"x": "hello"}) >>> n = NewType() >>> n.x "hello" which is exactly the same as >>> class NewType(object): >>>>> n = NewType() >>> n.x "hello" Probably not the most useful thing, but nice to know. Edit: Fixed name of new type, should be NewType to be the exact same thing as with classstatement. Edit: Adjusted the title to more accurately describe the feature. .pthfiles "The most convenient way [to modify python's search path].)" ROT13 Encoding #!/usr/bin/env python # -*- coding: rot13 -*- cevag "Uryyb fgnpxbiresybj!".rapbqr("rot13") Regex Debugging Regular expressions are a great feature of python, but debugging them can be a pain, and it's all too easy to get a regex wrong. Fortunately, python can print the regex parse tree, by passing the undocumented, experimental, hidden flag re.DEBUG (actually, 128) to re.compile. >>> Once you understand the syntax, you can spot your errors. There we can see that I forgot to escape the [] in [/font]. Of course you can combine it with whatever flags you want, like commented regexes: >>> re.compile(""" ^ # start of a line \[font # the font tag (?:=(?P<size> # optional [font=+size] [-+][0-9]{1,2} # size specification ))? \] # end of tag (.*?) # text between the tags \[/font\] # end of the tag """, re.DEBUG|re.VERBOSE|re.DOTALL) Sending to Generators def mygen(): """Yield 5 until something else is passed back via send()""" a = 5 while True: f = (yield a) #yield a and possibly get f in return if f is not None: a = f #store the new value You can: >>> g = mygen() >>> g.next() 5 >>> g.next() 5 >>> g.send(7) #we send this back to the generator 7 >>> g.next() #now it will yield 7 until we send something else 7 Tab Completion in Interactive Interpreter try: import readline except ImportError: print "Unable to load readline module." else: import rlcompleter readline.parse_and_bind("tab: complete") >>> class myclass: ... def function(self): ... print "my function" ... >>> class_instance = myclass() >>> class_instance.<TAB> class_instance.__class__ class_instance.__module__ class_instance.__doc__ class_instance.function >>> class_instance.f<TAB>unction() You will also have to set a PYTHONSTARTUP environment variable. Ternary Expression x = 3 if (y == 1) else 2 It does exactly what it sounds like: "assign 3 to x if y is 1, otherwise assign 2 to x". Note that the parens are not necessary, but I like them for readability.. try/except/else try: put_4000000000_volts_through_it(parrot) except Voom: print "'E's pining!" else: print "This parrot is no more!" finally: end_sketch() The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement. See withstatement Introduced in PEP 343, a context manager is an object that acts as a run-time context for a suite of statements. Since the feature makes use of new keywords, it is introduced gradually: it is available in Python 2.5 via the future directive. Python 2.6 and above (including Python 3) has it available by default. I have used the "with" statement a lot because I think it's a very useful construct, here is a quick demo: from __future__ import with_statement with open('foo.txt', 'w') as f: f.write('hello!') What's happening here behind the scenes, is that the "with" statement calls the special _enter_and exit methods on the file object. Exception details are also passed to exit if any exception was raised from the with statement body, allowing for exception handling to happen there. What this does for you in this particular case is that it guarantees that the file is closed when execution falls out of scope of the with suite, regardless if that occurs normally or whether an exception was thrown. It is basically a way of abstracting away common exception-handling code. Other common use cases for this include locking with threads and database transactions.
https://tech.io/playgrounds/2302/best-tricks-of-python
CC-MAIN-2019-26
refinedweb
1,955
63.49
Free "1000 Java Tips" eBook is here! It is huge collection of big and small Java programming articles and tips. Please take your copy here. Take your copy of free "Java Technology Screensaver"!. JavaFAQ Home » Java Lectures by Anatoliy Malyarenko Every time the user types a character or pushes a mouse button, an event occurs. Any object can be notified of the event. All the object has to do is implement the appropriate interface and be registered as an event listener on the appropriate event source. Every event handler requires three pieces of code: public class MyClass implements ActionListener { someComponent.addActionListener(instanceOfMyClass); public void actionPerformed(ActionEvent e) {<br /> ...//code that reacts to the action...<br /> } Event handlers can be instances of any class. Often an event handler that has only a few lines of code is implemented using an anonymous inner class --. Swing components can generate many kinds of events. The following table lists a few examples. If you take another look at the snapshot of SwingApplication, -- 30 extra pixels on the top, left, and right, and 10 extra pixels on the bottom. Borders are a feature that JPanel inherits from the JComponent class. Our next example, CelsiusConverter, does something that's somewhat useful: It is a simple conversion tool. The user enters a temperature in degrees Celsius and clicks the Convert... button, and a label displays the equivalent in degrees Fahrenheit. Let's examine the code to see how CelsiusConverter parses the number entered in the JTextField. First, here's the code that sets up the JTextField: JTextField tempCelsius = null; ... tempCelsius = new JTextField(5); The integer argument passed in the JTextField constructor, 5 in the example, indicates the number of columns in the field. This number is used along with metrics provided by the current font to calculate the field's preferred width. This number does not limit how many character the user can enter. We want to handle the button-click event, so we add an event listener to the button. JButton convertTemp; ... convertTemp.addActionListener(this); ... public void actionPerformed(ActionEvent event) { // Parse degrees Celsius as a double and convert to Fahrenheit. int tempFahr=(int)((Double.parseDouble( tempCelsius.getText())) * 1.8 + 32); fahrenheitLabel.setText(tempFahr + " Fahrenheit"); } The getText method is called on the text field, tempCelsius, to retrieve the data within it. Next, the parseDouble method parses the text as a double before converting the temperature and casting the result to an integer. Finally, the setText method is called on the fahrenheitLabel to display the converted temperature. All this code is found in the event handler for the button, as the conversion happens only once the button is clicked. You can make a JButton be the default button. At most one button in a top-level container can be the default button. The default button typically has a highlighted appearance and acts clicked whenever the top-level container has the keyboard focus and the user presses the Return or Enter key. The exact implementation depends on the look and feel. You set the default button by invoking the setDefaultButton method on a top-level container's root pane: //In the constructor for a JDialog subclass: getRootPane().setDefaultButton(setButton); You can use HTML to specify the text on some Swing components, such as buttons and labels. We can spice up the CelsiusConverter program by adding HTML text to the fahrenheitLabel and adding an image to the convertTemp button. The revised program is CelsiusConverter2. First, let's look at how we specify the HTML tags for the fahrenheitLabel. As you can see from this code, the temperature (tempFahr) is displayed one of three different colours, depending on how hot or cold the converted temperature is: // Set fahrenheitLabel to new value and font colour based // on temperature. if (tempFahr <= 32) { fahrenheitLabel.setText("<html><font color=blue>" + tempFahr + "° Fahrenheit </font></html>"); } else if (tempFahr <= 80) { fahrenheitLabel.setText("<html><font color=green>" + tempFahr + "° Fahrenheit </font></html>"); } else { fahrenheitLabel.setText("<html><font color=red>" + tempFahr + "° Fahrenheit </font></html>"); } To add HTML code to the label, simply put the ¡HTML¿ tag at the beginning of a string, and then use any valid HTML code in the remainder of the string. Using HTML can be useful for varying the text font or colour within a button and for adding line breaks. To display the degree symbol, we use the HTML code °. If the string is to be all one size and colour, you don't have to use HTML. You can call the setFont method to specify the font of any component. Some Swing components can be decorated with an icon -- a fixed-size image. A Swing icon is an object that adheres to the Icon interface. Swing provides a particularly useful implementation of the Icon interface: ImageIcon. ImageIcon paints an icon from a GIF or a JPEG image. Here's the code that adds the arrow graphic to the convertTemp button: ImageIcon icon = new ImageIcon("images/convert.gif", "Convert temperature"); ... convertTemp = new JButton(icon); The first argument of the ImageIcon constructor specifies the file to load, relative to the directory containing the application's class file. The second argument provides a description of the icon that assistive technologies can use. RSS feed Java FAQ News
http://www.javafaq.nu/java-article1090.html
CC-MAIN-2014-52
refinedweb
869
57.16
Nengo comes with a variety of templates: pre-built components that can be used to build your models. These are the various icons on the left side of the screen that can be dragged in to your model. These components are defined in python/nef/templates. There is one file for each item, and the following example uses thalamus.py. The file starts with basic information, including the full name (title) of the component, the text to be used in the interface (label), and an image to use as an icon. The image should be stored in /images/nengoIcons: title='Thalamus' label='Thalamus' icon='thalamus.png' Next, we define the parameters that should be set for the component. These can be strings (str), integers (int), real numbers (float), or checkboxes (bool). For each one, we must indicate the name of the parameter, the label text, the type, and the help text: params=[ ('name','Name',str,'Name of thalamus'), ('neurons','Neurons per dimension',int,'Number of neurons to use'), ('D','Dimensions',int,'Number of actions the thalamus can represent'), ('useQuick', 'Quick mode', bool, 'If true, the same distribution of neurons will be used for each action'), ] Next, we need a function that will test if the parameters are valid. This function will be given the parameters as a dictionary and should return a string containing the error message if there is an error, or not return anything if there is no error: def test_params(net,p): try: net.network.getNode(p['name']) return 'That name is already taken' except: pass Finally, we define the function that actually makes the component. This function will be passed in a nef.Network object that corresponds to the network we have dragged the template into, along with all of the parameters specified in the params list above. This script can now do any scripting calculations desired to build the model: def make(net,name='Network Array', neurons=50, D=2, useQuick=True): thal = net.make_array(name, neurons, D, max_rate=(100,300), intercept=(-1, 0), radius=1, encoders=[[1]], quick=useQuick) def addOne(x): return [x[0]+1] net.connect(thal, None, func=addOne, origin_name='xBiased', create_projection=False) The last step to make the template appear in the Nengo interface is to add it to the list in python/nef/templates/__init__.py.
http://ctnsrv.uwaterloo.ca/docs/html/advanced/dragndrop.html
CC-MAIN-2017-47
refinedweb
387
50.16
What made you laugh at 7:25? Is that a bird?! When do we get meet her/him? :) That's actually my cat. :) He does some chirp-y sounding meows when he sees things outside like birds or squirrels. I'll have to let him in one of the videos soon. Cool video and very clear explanations ! Maybe you can give me a hint on how you would deal with a not so different event management app I'm working on : User manage its Collaborators like on its phone, meaning full control on Collaborator profiles, and add them to an Event. Two dedicated models are used since Collaborator isn't unique (many User can create a Collaborator which is the same person). The thing is a User is notified of each event with a Collaborator that share the same email, and can choose to take control of it (Collaborator become User). This is the only way I found to have a fully working app before all collaborators joined it as user. It works, but I'm kind of stuck when, on a project, I have to loop through User and Collaborator to show the Event team efficiently. Is there a good way to do this ? Thank you for your very helpful advices Great stuff. Probably saved me a lot of digging around time. Thanks. Would have liked to see some aspect of testing addressed. I have come to appreciate good tests, and am still getting comfortable with good testing habits and strategies. I really like how you show real-world stumbling blocks and how to navigate them well. Seeing similar struggle and success with testing would be helpful for me. I do see devise has some example tests posted, so maybe those are enough to get me going. Thanks again! Great episode. Could we have the souce code, please? I am using rails 4 and got the error below. Any ideas what I am doing wrong? I used the attr_accessor :email in the User.rb model. I have that feeling that I missed something really simple but I am having a brain freeze LOL NameError in TeamsController#create undefined local variable or method `email' for #<teamuser:0x007fcef90e37b8> Extracted source (around line #8): def set_user_id existing_user = User.find_by(email: email) self.user = if existing_user.present? existing_user else BTW, In my case this is a CRM application so we are inviting users to Teams rather than Projects obviously. Yeah exactly. I would create my own controller and pass over the user_id which can then look up the user and associate them. Since the user might already have an account, you might just send them an email notice saying they now have access rather than an invite since they already have an account. You could also do an approval process there before it's finally accepted, kind of up to you. Chris, was able to get emails sent to already-existing users this way below from the ProjectUser model def set_user_id existing_user = User.find_by(email: email) self.user = if existing_user.present? existing_user else User.invite!(email: email) end if existing_user.present? InviteMailer.existing_user_invite(email).deliver end end About your post here about creating a separate controller, is there a better way to deal with existing users and identify project params? I don't think you can do so from the model here above so assuming that's why you said do it from controller. But issue I have or misunderstanding is how to invite both nonexisting users (new users) and already existing users from the same form input so you wouldn't have to set up two different ones - one for new devise invitable users and a separate one for already-existing users. Seems unnecessary or confusing to users. Tried to overried the devise controller but kind of confusing here and doesn't act as expected from the devise action that is there to invite already existing users. Tried to add it from their documentation but only messes up other things here I'm guessing because of the model setup and inheritance issues. Any quick solution so can send send projectuser or project params to the already-existing user in the email? Thanks a lot. Chris great episode, thank you so much! One question, everything works great on my end. But, the emails don't actually reach their destinations. Is that because we are in development or do we need to have something extra installed in our app? Thanks mate! If you've got a real Actionmailer config setup to hit a real smtp server, you can have it send real emails in development. If they aren't arriving, then you will want to double check your configuration to make sure that it's authenticating correctly and you see the emails being sent from your email provider logs. If you want add "remove user" functionality, here is an example. In project_users_controller.rb : def destroy @project.project_users.find(params[:id]).destroy redirect_to projects_path, notice: "Member removed" end In projects/show.html.erb : <h4>Users</h4> <% @project.project_users.each do |project_user| %> <div><%= project_user.user.email %> <td><%= link_to "Remove member", project_project_user_path(@project, project_user), method: :delete %></td></div> <% end %> Thank you so much for sharing, I've been working on this for so long. Still don't quite have it, but i'm getting close, thanks! The video seems different then the text. I get an error when following the text. When we make the edit to app/views/projects/show.html.erb. I also was getting an error when I followed the video alone. I had to combine what was in the text and video to make it to where I am at now(still not done). Can we get a repo or an update on this? I actually joined for this very lesson. If I follow the text I get this error: NoMethodError in Projects#show, undefined method `email' for # How do you actually check that role field later, i.e. vs `current_user`? I figured it out (not perfect, but it works): @owner = @project.project_users.where( role: "owner" ).first.user Chris thanks for this great series. If you wanted to allow some projects to be publicly viewable (i.e. not just for invited people who get invited through devise invitable), is there an easy way to toggle this with a 'public/private' option. Just having trouble figuring out how to model in the database. Can create new projects - that's great - but it seems that all new projects must be associated with invited project users. How to have the option to have some publicly available so that not just project users can view them? Any help would be greatly appreciated. Thanks. Generally for that, you'd want a public or private boolean on the Project. Then you'd change your query for finding the project. A high level example: def set_project @project = Project.find(params[:id]) # Private projects require use to verify the user has access. if @project.private? raise ActiveRecord::NotFoundError unless @project.users.include?(current_user) end end Ideally, you'd use something like Pundit to do the authorization. That way it can be applied always and not be forgotten somewhere. Thanks Chris. Makes it less complicated than having a separate user has_many projects table if that's even possible given the already-existing join table. I will check out the pundit gem also through Cancancan seems to be working fine so far for me. For the devise invitable links that are sent out in the email notifications here, they all seem to go to root page or sign in page but maybe I'll check out the documentation to see if there's a way to get the url pointing to the project itself after signing in but if you have any advice on that would appreciate it. Your great series saves a lot of time and really appreciate your insights. Yeah, basically private projects need a ProjectUser association to keep track of who has access. A public one can ignore that since it's open to everyone. If you're already using CanCan, keep using it. Pundit's just an alternative. Devise invitable does have a method for doing that, it's in the docs somewhere. 👍 And glad I can be of help! Chris if you wanted to change button text based on whether someone has been invited to a group (saying already invited for instance) would you go the route of just doing it through js entirely like in the button ainimations or would you do css selectors based on database values that proved invitation? I guess it must br a combination of both? Just seems more complex in this case of invitations going out because it isn't a simple boolean or some other db object that id being referenced by css with id selectors but instead there's an actual invitation outstanding to a particular person . Is this too much database referencing to efficiently and quickly render say an index page of people who have the correct invite or invited buttons? Just wondering how to go about this useful button feature without slowing or messing up an index page. Your insights are always valuable - thank you I would use the database values. It's not going to slow you down to do that, since you'll already have the database records in memory to display them on the page. You can then do whatever you find easiest to change the buttons. I would probably just use an if statement to check the different statuses of a user's invitation and then display different buttons accordingly. Is there any resource repo on github for this screencast? I've seen this implemented in other apps and I'm trying to extend it. It's not neccesiarly a devise invitable question but a general question using a role attribute. Take a User that has and belongs to many Stores via a Team join table. A User that owns the Store has an Team role attribute of Owner, and he can invite Team Members to collaborate with restricted permissions via Pundit. Team Members that are invited have a Team role attribute of User (for restricted permissions). An Owner could promote a Team Member from User to Owner status (to gain full permissions over the Store model). An Owner can also demote a Team Member from Owner to User. How can I add a guard to only allow an Owner to be demoted if there is another Owner of the Store? Meaning, a Store could never be left without an Owner. make_user should have some type of return if the current_company.members.owners is equal to or less than 1. What's a logical way to add that guard or is the above sufficient? def make_owner member = current_company.members.find(params[:id]) member.update_attribute(:role, "owner") redirect_to admin_dashboard_path, notice: "#{member.user.email} is now an owner." end def make_user member = current_company.members.find(params[:id]) member.update_attribute(:role, "user") redirect_to admin_dashboard_path, notice: "#{member.user.email} is now a user." end I would add a validation to the TeamMember for that. class TeamMember belongs_to :team validate :team_has_owner def team_has_owner error.add(:base, "Team must have at least one owner.") unless team.team_members.where(role: :owner).exists? end end I felt close with this one! If you see the trace below it still manages to demote the Owner to a User, and then when trying to promote the User back to an Owner, it throws "Team must have at least one owner", since it just demoted the last and only owner. How can you check that as the only Owner, you can't demote yourself? unless team.team_members.where(role: :owner).exists? will return true. Member Exists? (0.5ms) SELECT 1 AS one FROM "members" WHERE "members"."store_id" = $1 AND "members"."role" = $2 LIMIT $3 [["store_id", 1], ["role", 0], ["LIMIT", 1]] 15:15:12 web.1 | ↳ app/models/member.rb:21:in `store_has_owner' 15:15:12 web.1 | Member Update (0.4ms) UPDATE "members" SET "role" = $1, "updated_at" = $2 WHERE "members"."id" = $3 [["role", 1], ["updated_at", "2019-08-26 19:15:12.965431"], ["id", 1]] 15:15:12 web.1 | ↳ app/controllers/admin/members_controller.rb:39:in `make_user' 15:15:12 web.1 | (0.6ms) COMMIT 15:15:12 web.1 | ↳ app/controllers/admin/members_controller.rb:39:in `make_user' 15:15:12 web.1 | Redirected to 15:15:12 web.1 | Completed 302 Found in 24ms (ActiveRecord: 3.8ms | Allocations: 9037) Hi Chris, Im kind of lost for ideas, hope you have time to help/give a hint in the right direction. Now I want to be able to have Admins invite Users, but I don't now how to do that and have Googled until my fingers bleed :) In my routes I only have the normal User routes to the invitation pages. Hope my question make sense :) Nothing special for that really. If you have two models, you have current_user and current_admin because Devise separates those out. Devise Invitable has a polymorphic invited_by column, so you can pass in any object for that. Your invite code would look like this: User.invite!({ email: '[email protected]' }, current_admin) Join 27,623+ developers who get early access to new screencasts, articles, guides, updates, and more.
https://gorails.com/forum/inviting-users-with-devise_invitable-gorails
CC-MAIN-2019-51
refinedweb
2,217
65.83
The PathAttribute allows setting an attribute at a given position in a Path. More... The PathAttribute object allows attibutes consisting of a name and a value to be specified for the endpoints of path segments. The attributes are exposed to the delegate as Attached Properties. The value of an attribute at any particular point is interpolated from the PathAttributes bounding the point. The example below shows a path with the items scaled to 30% with opacity 50% at the top of the path and scaled 100% with opacity 100% at the bottom. Note the use of the PathView.scale and PathView.opacity attached properties to set the scale and opacity of the delegate. import Qt 4.7 Rectangle { width: 240; height: 200 Component { id: delegate Item { width: 80; height: 80 scale: PathView.iconScale opacity: PathView.iconOpacity Column { Image { anchors.horizontalCenter: name.horizontalCenter; width: 64; height: 64; source: icon } Text { text: name; font.pointSize: 16} } } } } } } } See also Path. name : string. value : string the new value of the attribute..
http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qml-pathattribute.html
CC-MAIN-2014-10
refinedweb
167
69.18
The Picasa user interface (UI) includes custom buttons which can open image files in local applications and upload the selected image files to the web using the Picasa Web Uploader. Buttons in Picasa: This documentation is intended for programmers who want to add custom buttons to the Picasa software user interface. You should be familiar with Picasa, XML, Adobe® Photoshop®, the Windows registry and executable files. This documentation provides descriptions of the necessary components of this technology and a reference of an XML schema that allows you to control how your buttons will function.Button Structure This API makes use of several file types: A PBF is an XML file that contains all of the information necessary to create an individual button in Picasa. The importbutton feature of the Picasa pluggable protocol installs the PBZ file. Since PBZ files are essentially your redistributable packages, you can use more recognizable names than the PBF and PSD files they contain (see Naming Your Files). However, be sure to name your PBZ files with some measure of uniqueness, so as to avoid conflicts with an existing PBZ of a similar name. It is a good rule of thumb to use your company name or website name in your PBZ filenames to provide some namespace differentiation. For example, a filename like acme-inc-photo-uploader.pbz is unlikely to have a name collision, and it makes the button's origin obvious. In contrast, a filename like uploader.pbz is far too generic and likely to cause confusion. (If in doubt, use a GUID as the filename if human readability is not important.) In addition, using a PBZ file allows the distribution of multiple buttons in a single file. Internally, this is exactly what Picasa does. All of the default buttons you see when you first install Picasa are contained in a single PBZ file. Each button you add is defined by an individual PBF file. There are four elements of each button: You can specify localization of the label and tooltip text for the languages supported by Picasa. All localized text must be present in the button configuration file. You must name your PBF using a Globally Unique Identifier (GUID). A GUID ensures that no two buttons will be confused for one another, which can occur with common button names such as “Upload Button”. GUIDs are created using algorithms that use unique aspects of a machine’s hardware and software environment, such as the network interface MAC address and the current date and time. Learn more about GUIDs. Picasa uses the Windows registry format for GUIDs. A typical GUID in this registry format looks like this: {3acc6bb4-ffd9-11da-95f3-00e08161165f} Note that for Picasa, the curly braces are included in the GUID. Creating a GUID for your button is straightforward. Many tools exist for creating GUIDs in the required format. If you are using a Windows development environment like Visual Studio, the executable guidgen.exe is automatically installed for you. guidgen.exe will create a GUID for you at the click of a button. If you do not have a development environment installed, web-based solutions exist, such as guidgen.com. (Perform a web search for the term “guidgen” if you need help finding a guild generator). Once you have obtained a GUID for your button, save it to a text file for later use, as you will need to copy and paste it a few times. For example, the PBF file that describes the button with the GUID above would be named: {3acc6bb4-ffd9-11da-95f3-00e08161165f}.pbf The Photoshop (PSD) file that contains the icon for the button should be named similarly: {3acc6bb4-ffd9-11da-95f3-00e08161165f}.psd Finally, use the GUID inside the PBF file’s XML code to identify the button and to refer to the PSD file. The basic XML structure of a PBF is as follows: <?xml version="1.0" encoding="utf-8" ?> <buttons format="1" version="version_number"> <button id="buttonid" type="dynamic"> <icon name="PSDfile/layername" src="pbz"/> <label>Button Label</label> <tooltip>Tooltip text goes here.</tooltip> <action verb="open"|"trayexec"|"hybrid"> <param name="name" value="value"/> </action> </button> </buttons> You can use the above snippet as a starting point, editing it as needed. See Verbs and Parameters below.Verbs There are three verbs that you can use in the <action> tag: trayexec, hybrid and open. <action verb="trayexec"> <param name="exe_name" value="executable"/> <param name="export" value="1"/> (optional) <param name="foreach" value="1"/> (optional) </action> This action launches an executable, passing the filenames of the images in the Picasa tray as command line parameters. For example, you can use it to open all of the files in the Picasa tray with another program like Photoshop. Many options exist for this action, all exposed via the <param> tags. <action verb="hybrid"> <param name="url" value="hybrid_uploader_url"/> </action> This action launches the Picasa Web Uploader, using the specified webpage for its content. The Web Uploader is the best way to integrate a web service with Picasa. For example, the BlogThis! feature uses the Web Uploader to export images from Picasa to a user’s blog via Blogger. The name attribute must be “url” and the value attribute must specify the URL of your server that hosts a web application that uses the Web Uploader API. For more information, please read the Hybid Uploader documentation. <action verb="open"> <param name="url" value="unique_resource_locator"/> </action> The “open" action performs an OS-level shell execute of the URL specified in the name-value pair. The name attribute must be “url” and the value attribute must specify the URL to open. For example, you might add a button that launches your website.Parameters This section describes XML elements in a PBF file. These tags associate an action with a button. The verb attribute of the <action> tag must be one of three verbs. The <param> tags pass parameters to the action command processor as name-value pairs. This tag is the root tag of the hierarchy. The format and version attributes are checked by Picasa when loading the PBF. As Picasa evolves, it is possible that future PBF formats will differ significantly enough from the current format to warrant filtering out buttons whose button format is no longer supported. Every effort will be made to make future button formats backwards-compatible, but in the event that it becomes necessary to filter buttons based on their format, the format attribute will be used to accomplish that. Currently, the format attribute should be set to 1. The button creator uses the the version tag to ensure that only the latest version of a button is loaded. If multiple buttons with the same GUID are found during startup, Picasa loads the one with the highest version. This allows you to release updates to your buttons and ensure that the latest version to the system will be loaded. Your version numbering system should begin at 1 and then increase by whole number amounts for subsequent releases (i.e., version numbers are integers). This tag is nested within the <buttons> tag and declares that the button should be added to the buttonbar. The id attribute is required, use a “category/identifier” format and be unique. The type attribute is also required and currently must be set to the value “dynamic”. A dynamic button is defined by its nested tags and must at minimum have both a <label> and an <action> tag. Button IDs should be unique so that they don't conflict with other button IDs. Using the form “companyname/guid” helps differentiate customized buttons. The following is an example of this best practice: <button id=”acme-inc/{3acc6bb4-ffd9-11da-95f3-00e08161165f}” type=”dynamic”> This tag is nested inside of a <button> tag and specifies the graphic to use on the button. The name attribute is required and has the “PSDfile/layername” format, where PSDfile is the name of a Photoshop format file (minus the .PSD extension) and layername is the name of the layer within the PSD file to use for the icon. The src attribute is required and should always be set to the value “pbz”, which signifies that the PSD file with the icon graphic is located in the same PBZ file as the PBF. Though not a requirement, you should use the button GUID as the PSD name. This will make it obvious which PSD has the corresponding content for a given PBF. The layername is simply the name of the layer inside the PSD file that contains the icon. For example, if the icon was drawn on the layer called my_icon inside the Photoshop file {3acc6bb4-ffd9-11da-95f3-00e08161165f}.psd, the name attribute would be “{3acc6bb4-ffd9-11da-95f3-00e08161165f}/my_icon”. The tag would look like this: <icon name="{3acc6bb4-ffd9-11da-95f3-00e08161165f}/my_icon" src="pbz"/> The PSD file you specify must be present in the PBZ along with the PBF file that references it. If the graphic is larger than 40 by 25, it may be resampled to fit inside the 40x25 area by the UI system and could lose quality (see above). The alpha channel is respected, so feel free to use anti-aliasing when creating your artwork. This tag specifies the label text for a button. The text is rendered immediately below the icon in an anti-aliased font. For most alphabetic languages, there is room for approximately 8 to 10 characters on exactly one line. Text longer than this is truncated. Button labels can be localized by appending the language and, optionally, the country code to the label tag itself such that the tag is of the form <label_lc-cc> where lc is the language code and cc is the country code. Both the language code and the country code must be in lower case. For example, a button with the label “Upload” can be localized by including the following <label> tags: <label>Upload</label> <label_en>Upload</label_en> <label_zh-tw>上传</label_zh-tw> <label_zh-cn>上載</label_zh-cn> <label_cs>Odeslat</label_cs> <label_nl>Uploaden</label_nl> <label_en-gb>Upload</label_en-gb> <label_fr>Transférer</label_fr> <label_de>Hochladen</label_de> <label_it>Carica</label_it> <label_ja>アップロード</label_ja> <label_ko>업로드</label_ko> <label_pt-br>Fazer upload</label_pt-br> <label_ru>Загрузка</label_ru> <label_es>Cargar</label_es> <label_th>อัปโหลด</label_th> Note - All of the label tags should be at the same nesting level in the file. There is no containing <labels> tag. When a button configuration file is parsed, the <label_lc-cc> tag that best matches the current interface language will be used to render the label. For example, if Picasa’s language setting is currently en-gb (for British English), the button loader gives precedence to the tag <label_en-gb>. If that tag is not found, it will next look for the tag <label_en> and use it. If neither of those tags is found, the button uploader uses the plain <label> tag. The plain <label> tag is expected to be in U.S. English. It is equivalent to <label_en-us> so there is no need to explicitly declare <label_en-us>. This tag specifies the tooltip text for the <button> tag it is nested within. Tooltips can be localized in the same manner as <label> tags (e.g., the tag for a German tooltip would be <tooltip_de>). In addition, the <tooltip> tag text specifies the button description in the Button Configuration dialog box. It can be much longer than the label text, of course, but you should keep it be brief (typically a single sentence).Launching the Executable You must specify an executable (such as Abobe Photoshop) to launch. You can specify it explicitly, as a lookup in the Windows Registry or configure the path to the executable separately. The relevant parameters are: <param name="exe_name" value="executable_filename"/> <param name="exe_name_regkey" value="registry_key_path"/> <param name="exe_path" value="path_to_executable"/> <param name="exe_path_regkey" value="registry_key_path"/> You must specify the exe_name parameter. This parameter will be passed to the CreateProcess Win32 API and must abide by rules of how Windows finds the executable. In general, it is best to specify the full pathname and not expect that the executable can be found in via the system’s PATH environment variable. Many times, the full pathname to an executable cannot be known without querying the registry. If the exe_name_regkey parameter is specified instead of exe_name (they are mutually exclusive), the value attribute is expected to contain a full registry path to a setting from which to read the full executable path name. Named settings as well as default settings are supported, depending on the format of the registry path you specify. Examples: The differentiating factor in the above examples is the trailing backslash. If the registry path ends with a backslash, it refers to the full path to a specific key. In this case, the default setting for that key is read and used as the full pathname of the executable. If the value does not end with a backslash, it refers to the full path to a specific registry setting. For both cases, a string-type registry setting is required and assumed to be present. In some cases, you may need to query the executable name and path separately. If the parameter exe_path is specified, its value will be prepended to the value set by the exe_name parameter. The corresponding registry querying parameters are exe_path_regkey and exe_name_rekey. These parameters are useful when you know the name of the executable, but the installation directory must be queried from the registry. For example, the Picasa2 executable is always named Picasa2.exe, but its location depends on the folder chosen by the user during installation. In this case, you know the executable name, but must lookup the installation path in the registry. The parameters you would need to specify are: <param name="exe_name" value="Picasa2.exe"/> <param name="exe_path_regkey" value="HKEY_LOCAL_MACHINE\SOFTWARE\Google\Picasa\Picasa2\Directory"/>Exporting Files Once the parameters specifying the executable are set, you can configure these options: If the following tag is included: <param name="export" value="1"/> then all of the images in the tray will be exported as JPGs at their original sizes to a temporary folder and then used as the source files passed to the executable via the command line. Note - This is the only method that ensures that unsaved edits are applied to the images before they are passed along to the executable. If the “export” parameter is not specified (or if its value is set to anything other than “1”) then Picasa sends the unsaved source images to the executable via the command line. When you edit a picture in Picasa, the original image is not modified until you explicitly save the image. This includes edits like cropping and rotating. If you do not specify the export parameter, any edits applied to an image since the last explicit save operation are not included. As Picasa exports the files, a small notification dialog box appears on the right-hand side of the screen. You can customize the message displayed in this dialog box with the “export_message” parameter. For example: <param name="export_message" value="Sending to Photoshop…"/> Note - Localization support for this parameter may be added in the future. For now, the value string will be used as-is.Command Line Options If the following tag is included: <param name="foreach" value="1"/> then the executable launches once for each source image file. That is, CreateProcess will be called repeatedly for each source image file will be passed as a command line parameter to the process. This is useful for applications that cannot take multiple command line parameters. It is also good for applications such as Photoshop that have a single running instance but that can be shell executed multiple times to open additional files. If you do not include the “foreach” parameter or set its value to anything other than “1”, then the executable launches exactly once with a command line that lists all of the image files, individually quoted and separated by spaces. This can fail if too many images are in the Picasa tray. There is a 32K size limit to the command line that can be passed to a process, and constructing a list of full pathnames to files can become large very quickly. Assuming a worst case of MAX_PATH (usually 512 bytes) for each pathname, this method does not scale unless very few images are processed at a time.Packaging and Distributing Your PZB Once you create a PBZ, it is easy to add it to Picasa. Picasa includes a pluggable protocol called picasa that you use to install the button. You can implement simple hyperlinks on a webpage that use the importbutton feature of the picasa protocol to distribute custom buttons. For example, a HTML snippet like the following can install a button: <p>Add the <a href="picasa://importbutton/?url="> Acme Inc. Uploader</a>to Picasa </p> When the user clicks on the hyperlink, Picasa launches (if it is not already running), installs the button and then displays the Button Configuration dialog box which displays the new button and allows the user to choose where to place the button in Picasa toolbar. On your website, simply provide a link like the one above, replacing the URL parameter with the location of your PBZ file (replace the text after “?url=” with the URL of the button file on your website). Adobe and Photoshop are either registered trademarks or trademarks of Adobe System Incorporated in the Unitized States and/or other countries.
http://code.google.com/apis/picasa/docs/button_api.html
crawl-001
refinedweb
2,927
53.31
Download pdf on Safari and open in Pythonista script? I download a pdf in Safari, then, on this pdf, I "open in" a Pythonista script that uses appex and receives a "get_url" as "". How can I read/copy this file locally in Pythonista dirs? To convert the file:URL to a normal path, you can simply remove the using path = url[len("file://"):]. (You can of course just write path = url[7:], but it's not very obvious what the 7 means.) Pythonista's "Script Library" is located at ~/Documents. ~is the "home" directory, to convert that to a normal path use os.path.expanduser("~/Documents"). So if you want to save the file into Pythonista, you can use something like this: import appex import os import shutil path = appex.get_url()[len("file://"):] name = os.path.basename(path) dest = os.path.join(os.path.expanduser("~/Documents"), name) shutil.copy(path, dest) This could be condensed into less lines, I've split it up a little for clarity. Though if you just want to work with the PDF file and don't need to keep it permanently, you can open the path normally: with open(path, "rb"): # do whatever you need to Resolved: just found "urlretrieve" in a post (jonB) of last week... @dgelessus That won't work correctly if the URL contains percent escapes. A better method would be to use urlparseand urllib.url2pathname, like this: from urlparse import urlparse from urllib import url2pathname p = urlparse(file_url) file_path = url2pathname(p.path) Tanks a lot for your both clarifications...
https://forum.omz-software.com/topic/2978/download-pdf-on-safari-and-open-in-pythonista-script
CC-MAIN-2018-51
refinedweb
258
74.9
Console.input_alert freeze I have a webview that calls console.input_alertin the webview_did_finish_loadfunction. I ensure that this is called on a background thread using the @ui.in_background decorator. I am experiencing freezes when the alert is displayed where I can't press any buttons or enter any text in the alert and I can't exit the Python app using the little cross in the top left. This issue seems be caused when the ui thread is trying to do some animation or drawing while console.alert is presented. In this case, webview is probably finishing rendering while the alert is trying to pop up, not sure. . The workaround is to ui.delay your alert for a small time, 0.5 has been pretty safe, but maybe as low as 0.25, depending on what was being done. if you have an actual animation pending, need to wait for animation to complete. Also, make sure you don't have any pending delays that could try to do anything to change the ui state. def webview_did_finish_load(self,webview): @ui.in_background def alert(): console.alert('test') ui.delay(alert,0.5) # give enougn time for webview to finish drawing #ui.delay(webview.superview.close,5.0) an example of what NOT TO DO.. this will crash if alert is still open after 5 sec I will point out that hud_alertis much nicer if you just want to display a message that says the page loaded. This doesn't need to be backgrounded or delayed, and you don't have to handle the keyboardinterrupt that alert raises, etc. only use alertif you want actual interaction with the user.
https://forum.omz-software.com/topic/1156/console-input_alert-freeze
CC-MAIN-2019-04
refinedweb
275
67.35
Hello! I'm new to PSOC programming, but when I tried to upload the following code #include <device.h> That might have to do with something else within the project. Will you please upload the project here. To do so, in PSoC Creator: Build -> Clean Project File -> Create Workspace Bundle(minimal) and then upload the resulting .Zip-file here. Bob Did you check that the LCD is assigned to Port 2 (P2[0]..P2[6])? Hi daiengineering, According to your code, "Hello World" should be printed in the second line of the LCD. Please make sure that the LCD is fixed firmly in the allocated header. As hli has already suggested, assign the port to P2[6..0] in Pins tab of cydwr.
https://community.cypress.com/t5/PSoC-5-3-1-MCU/LCD-Help/td-p/169766
CC-MAIN-2021-10
refinedweb
123
83.96
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). On 18/01/2013 at 15:42, xxxxxxxx wrote: User Information: Cinema 4D Version: R14 Platform: Windows ; Language(s) : C++ ; --------- Hey Guys, Is it possible to exclude an object from an specific generator? I know, there is the Stop-Tag but that means all generators with an object will not work. But I want to disable just some Boole-Generators. Anyone an idea how to? On 18/01/2013 at 16:00, xxxxxxxx wrote: for custom ObjecData plugins written with that functionality in mind it is possible (just check for the op parent and alter your ouput depending on the result), but for already existing object types as a tag solution i do not think so. the only possible way i could think of might be a hook, but hat is beyond my scope. On 19/01/2013 at 06:14, xxxxxxxx wrote: Hey Ferdinand, Thank you for your answer. Could you explain what you mean with: just check for the op parent and alter your ouput depending on the result Because I use an ObjectData plugin. On 19/01/2013 at 07:02, xxxxxxxx wrote: something like that : def GetVirtualObjects(self, op, hierarchyhelp) : parent = op.GetUp() if (parent != None and parent.GetType() == c4d.Oboole) : return c4d.BaseObject(c4d.Onull) else: return c4d.BaseObject(c4d.Osphere) you could do it from any other method, which provides the node instance as a parameter. you will also have to make sure, that your object is flaged as dirty as soon as its position in the object tree has changed, to ensure your ouput is changed properly or it might take a while before GetVirtualObjectsis called again. you might get some more fancy results to work with ObjectData.Execute() - stop the generator/bool object execuion, instead of hiding your output. but i have not used Exeute ye, so i have only a vague idea of what it does
https://plugincafe.maxon.net/topic/6872/7694_exclude-object-from-generator
CC-MAIN-2021-31
refinedweb
358
64.1
Names in java, maven, and gradle Ned Twigg ・4 min read A central aspect of Java's philosophy is that names matter. Brian Goetz, Java Language Architect and author of Java Concurrency in Practice Packages If you work for "acme.com" and you're working on a project called "foo" then the convention is for your root package to be com.acme.foo. Being a forward-looking bunch, we tend to add lots of "grouping" packages. Our foo project is a utility library, and we might make other ones, so we put it into com.acme.util.foo, to make sure there is space for other utility projects. If we're really forward-looking, we'll take into account that foo is a utility library for manipulating text, so we better put it into com.acme.util.text.foo. On the other hand, YAGNI. In the javascript world, there are no packages, you just get one name - foo - and that's it. I'm glad that I do most of my work on the JVM rather than a runtime which was designed and built in 10 days, but whenever I start nesting my root package deeper than com.acme.foo, I try to remember that all those gobs of software being written in Node.js are getting by without any nesting at all, so maybe I can get by with 3 or 4 levels of nesting for my root package, rather than 5 or 6. Maven groupId:artifactId One nice thing about .class files is that you don't have to pick names for them - they get their name automatically from a 1:1 mapping on the .java file they came from. Unfortunately, maven asks you to pick two names, so it can't be that simple. Luckily, maven also provides not only one, but two slightly different conventions for how to pick these names! Accordingly, if you ask StackOverflow, you'll see two popular answers. And when a major library like RxJava ships a new version, you'll need a 12-screen-long debate to figure out what the names ought to be. Luckily, there is a mechanistic answer out there! JitPack turns any git commit into a maven artifact at , which I've found to be huge improvement over -SNAPSHOT for integration testing. The naming convention that it uses is: com.github.{user}:{repo}(also com.gitlab, org.bitbucket, and com.gitee) com.github.{user}.{repo}:{subproject}for multi-module builds. If you use JitPack's custom domain integration, then you can replace com.github.{user} with com.acme for a professional touch. Even if you don't use JitPack, using this convention will mean that you could, and it lays down a simple rule that works well enough for all these people. Gradle plugin id In Gradle-land, you can apply a plugin like this: plugins { id 'any.plugin.id.you.want' }. Gradle provides an excellent guideline: As a convention, we recommend you use an ID based on the reverse-domain pattern used for Java packages, for example org.example.greeting. The trouble is, a huge number of the plugins which have been published so far are named... gradle-plugin. It seems reasonable while you're writing the plugin, and you don't realize how silly it is until you use it from a distance: plugins { id 'com.acme.gradle.foo' id 'org.nonprofit.bar.bar-gradle-plugin' id 'beetlejuice.beetlejuice.gradle' } What kind of car is that? That's the Chevy Corvette Car. What kind of phone is that? It's an iPhone XS Phone. Wouldn't this be better? plugins { id 'com.acme.foo' id 'org.nonprofit.bar' id 'beetlejuice' } It's interesting, because the guidelines that Gradle gives are very good - no excessive nesting, no unnecessary gradle.gradle, just the bare minimum. And yet, a lot of the people who use it feel like they should add a gradle or two, just in case. Probably the only way to save us from ourselves would be for the gradle tooling to search for gradle in the id and throw a warning, to help us think about it a little before we publish. Lessons Designing namespaces is a rare opportunity. Most of us never do it even once in our entire career. And since nobody has experience in it, we're still making lots of beginner mistakes, even in the quarter-century-old world of java. It's hard! Typosquatting, IDN homograph attacks, there are so many pitfalls. If I ever end up defining a namespace, I'm gonna try to remember these lessons: - namespaces are helpful for identifying the author/maintainer: com.acme.foo👍 - namespaces are overkill for fine-grained categories: com.acme.util.text.foo👎 - defining the name to be a tuple of two other names is 👎 - if the names are all plugins to foobar, people are just gonna name them my-foobar-plugin, and it might be good to warn them that it's probably not the best choice ¯\_(ツ)_/¯ Demystifying Open Source Contributions This quick guide is mainly for first-time contributors and people who want to start helping open sour... If you want to relocate a gradle plugin with a nice warning message, here's some sample code: 1, 2 As Jeremie Bresson (@j2r2b) notes on Twitter, eclipse-hosted projects have a unique culture around this, which is probably because eclipse predates maven. (link)
https://dev.to/nedtwigg/names-in-java-maven-and-gradle-2fm2
CC-MAIN-2020-16
refinedweb
905
64.51
$(...).DataTable is not a function $(...).DataTable is not a function I would like to implement DataTables with my Vue3 project. But have a problem with correct import. Import according documentation does not work. var $ = require( 'jquery' ); var dt = require( 'datatables.net' )(); Throws: Cannot set property '$' of undefined So I tried everything I found but without success. I have a github project with example branch datatables Now I have this import import jQuery from 'jquery/src/jquery.js' window.$ = window.jQuery = jQuery; import dataTables from 'datatables.net'; window.dt = dataTables; Throws: $(...).DataTable is not a function. I am lost in it. What happened under the DataTables hood why it is not possible to join it with jQuery? Thanks for any help. This question has an accepted answers - jump to answer Answers That top one should work - have you also added npm install jquery? Colin Yes I have jq installed. I have both require() commands in one component. The second one throws me an error: > TypeError: Cannot set property '$' of undefined There is a code on line 132 in jquery.dataTables,js require('datatables.net') have a hint in editor which says: Dont know what it means but as I run the @type/datatable.net it adds datatable in the @type folder in node_modules folder. I found this tutorial which removes previous error and have another. But there are two commands importing jquery. Dont understand the difference. You are using TypeScript, so it is attempting to load "type" information about DataTables. We do actually ship type information with DataTables now so the @types/datatables.netpackage shouldn't be needed. I would suggest sticking with the latter. The first one is importing the minified jQuery, while the second will get the default jQuery file (which is not minified). There is an implicit there that you would be using a minifier as part of your build process. Allan Ok I have it. But as I found out Vue components lifecycle is not compatible with dataTables. Cause if I load the data via axios and then render the template, the dataTable() call is done before the template is rendered. It seems it is not joined with vue model. How can I solve it? installing your npm package doesn't include a types directory or any reference to typescript types. Is this something that is planned on being included because it doesn't look like you are including types. Ah yes, I remember now - you are absolutely correct, my apologies. We set everything up for it, but didn't include it in the filesproperty for the package.json... We are working on correcting that - sorry! Make sure you use v-oncein the <table>tag, and initialise the DataTable in the mountedevent. Allan
https://www.datatables.net/forums/discussion/69158/datatable-is-not-a-function
CC-MAIN-2022-33
refinedweb
456
68.67
A resource URI is a pointer to a media object (eg. a broadcast or an uploaded image). By default, media objects can only be accessed by their owner. The owner may share the media object by signing its resourceUri and handing the result to someone else, i.e. "delegating view access". The Iris player SDKs for Android and iOS, as well as the web player, require a signed resourceUri as input. Your backend, which is aware of your auth scheme, should decide whether and when a given client should be privileged to access a given media object, and hand it a corresponding signed resourceUri as necessary. You can either make your backend capable of signing a resourceUri with your daId/ daSecret credential pair, or request pre-signed resourceUris from the REST API. When bootstrapping a player with a Iris broadcast, the resourceUri should have the following structure: If you don't need custom authorization and simply want to let anybody access a media object with a single resourceUri the REST API for broadcast metadata provides pre-signed resourceUris which may be used more than once and do not expire. Similarly, the REST API for image metadata provides image URLs which don't have any access restrictions. The access control behaviour for a resource URI can be modified by certain parameters defined by a protocol called Delegation API (DA). A resourceUri. broadcastIdfor a broadcast created by one of your applications da_idand a da_secret_key, which you can find on the Developer page on the Iris site. 1) Start by constructing a resourceUri using the broadcastId for the broadcast you want to access. It should look something like this: 2) Select appropriate values for each of the DA parameters listed below and add them as a query string to your resourceUri. The result could look like this (line breaks added for better visibility): ?da_id=MY_DA_ID &da_timestamp=1471360487 &da_nonce=0.7911932193674147 &da_signature_method=HMAC-SHA256 3) Generate a signature Add GET to the start of the result from the previous step. Then produce a hex digest of the HMAC (SHA-265) of the string using your da_secret_key as the secret key. The result of this operation is the signature. var crypto = require('crypto'); var stringToSign = 'GET'; var signature = crypto.createHmac('sha256', 'MY_DA_SECRET_KEY') .update(stringToSign) .digest('hex'); import hmac import hashlib stringToSign = 'GET' signature = hmac.new('MY_DA_SECRET_KEY', stringToSign, hashlib.sha256).hexdigest() print signature 'GET'; $signature = hash_hmac('SHA256', $stringToSign, 'MY_DA_SECRET_KEY');$stringToSign = 4) Finally, add the signature to the end of the resourceUri as a query parameter named da_signature (line breaks added for better visibility): ?da_id=MY_DA_ID &da_timestamp=1471360487 &da_nonce=0.7911932193674147 &da_signature_method=HMAC-SHA256 &da_signature=8dca3b1eae750b5b4d88e4e7dd1bfd3f6c605faff13302276af0da883c0d7642 You now have a signed resourceUri! da_id - Your public Delegation API id which corresponds to your secret key. da_timestamp - Timestamp, given as seconds since January 1 1970 (UTC). Used as a security measure, to invalidate the signature a reasonable number of minutes after the request is signed. da_nonce - A random value chosen by you. Choose a different nonce value for each request. For signed requests with unique nonce values, the Iris backend will block attempts to replay the request after being used once. da_signature_method - Indicates which algorithm was used to generate the signature. Use the value HMAC-SHA256. da_ttl - Duration, given as seconds, defining how long the signed URI is valid, counting from da_timestamp. The default value is 3600 seconds. TTLs of less than a few minutes is generally not recommended due to potential synchronization issues between server clocks. da_static - When set, this parameter makes the signed URI valid for repeated use, effectively making the Iris backend ignore any da_nonce. The signature is a Hex digest of the HMAC of your secret key and an SHA-256 hash of a string which represents the request the client is about to make, including all da_-prefixed parameters excluding da_signature. The request string consists of the HTTP verb followed by space, followed by the domain, path and the query string. Each DA parameter should be appended to the query string, which is then signed, and the resulting da_signature must be appended as the last parameter in the query string.
https://irisplatform.io/docs/key-concepts/resource-uri/
CC-MAIN-2017-22
refinedweb
686
54.93
Category talk:General relativity I disagree with the approach of the previous version that "It seems simpler to consider it as an imaginary number ict where i is the quadratic root of -1 and c the speed of light. Then the space-time has the following four dimensions: (x,y,z,w=ict)." Instead, I think that introducing imaginary time only confuses lots of people. I plan to replace other sections too. -- Schmelzer 22:19, 3 February 2008 (UTC) namespace conventions[edit source] The contents of this page will need to be moved. Pages in the Category: namespace should only contain a list of pages, and not text. We should probably copy (ie. merge) the text to General relativity or to a new page, and then Category:General relativity should be left empty. (No redirect) --mikeu talk 22:40, 3 February 2008 (UTC)
https://en.wikiversity.org/wiki/Category_talk:General_relativity
CC-MAIN-2021-25
refinedweb
143
56.45
note BrowserUk <P>First off, you are probably limiting your audience unnecessarily by mentioning "threads" in the title. <p>Your basic problem, that of needing a non-blocking read, has nothing to do with threading. <p>Secondly, do you really need to terminate your read thread? That is, if you detached it, it would just silently melt into the ether when your main thread decides to exit the process. <P>But if you do really need to, I think that you've already discovered the 'right' solution, namely <c>read_bg()</c>. All you nee to do is use the API correctly and alter your coding strategy a little. (NOTE: None of the following is tested.) <blockquote><i> So I changed to "background read" my ($receivedBytes, $data) = $HandleToRS232->read_bg(); Doing this I got nothing in $data so I did a call to my($done, $count_in, $string_in) = $HandleToRS232->read_done(1); Then I receive the message in the $string_in variable, but the call to read_done is also blocking. </i></blockquote> <p>Based on my reading the source -- and I may well have misread it -- <ul><li><c>read_bg()</c> uses overlapped IO to operate, therefore it is expected that it will (often) not return (somplete) data immediately <p>It basically says, this is what I want to read, initiate that for me and return immediately (with any data that happens to have already arrived.) <P>I'll call <c>read_done()</c> when I'm ready for (the rest of) it. </li><li><c>read_done()</c> is only blocking if you pass a true argument (As you are:<c>read_done(1);</c>). <P>To prevent it from blocking, pass 0; <P>That will require you to recode your read thread so that instead of blocking in a read, you poll (with a delay!). <p>Then within that polling loop you can check to see if you should exit the thread. <6009 1016009
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=1016023
CC-MAIN-2016-40
refinedweb
318
57.91
{-# LANGUAGE BangPatterns #-} {- | Module : Data.IntDisjointSet Description : Persistent Disjoint-Sets (a.k.a. Union-Find) Copyright : (c) 2012 Maxwell Sayles. License : LGPL Maintainer : maxwellsayles@gmail.com Stability : stable Portability : non-portable (only tested with GHC 6.12.3). -} module Data.IntDisjointSet (IntDisjointSet, empty, singleton, insert, unsafeMerge, union, lookup, elems, toList, fromList, equivalent, disjointSetSize, size, map) where import Control.Arrow import Control.Monad.State.Strict import Control.Monad.Trans.Maybe import qualified Data.IntMap as IntMap import qualified Data.List as List import Data.Maybe import Prelude hiding (lookup, map) {-| Represents a disjoint set of integers. -} data IntDisjointSet = IntDisjointSet { parents :: IntMap.IntMap Int, ranks :: IntMap.IntMap Int } instance Show IntDisjointSet where show = ("fromList " ++) . show . fst . toList {-| Create a disjoint set with no members. O(1). -} empty :: IntDisjointSet empty = IntDisjointSet IntMap.empty IntMap.empty {-| Create a disjoint set with one member. O(1). -} singleton :: Int -> IntDisjointSet singleton !x = let p = IntMap.singleton x x r = IntMap.singleton x 0 in p `seq` r `seq` IntDisjointSet p r {-| Insert x into the disjoint set. If it is already a member, then do nothing, otherwise x has no equivalence relations. O(logn). -} insert :: Int -> IntDisjointSet -> IntDisjointSet insert !x set@(IntDisjointSet p r) = let (l, p') = IntMap.insertLookupWithKey (\_ _ old -> old) x x p in case l of Just _ -> set Nothing -> let r' = IntMap.insert x 0 r in p' `seq` r' `seq` IntDisjointSet p' r' {-|. -} unsafeMerge :: IntDisjointSet -> IntDisjointSet -> IntDisjointSet unsafeMerge (IntDisjointSet p1 r1) (IntDisjointSet p2 r2) = IntDisjointSet (IntMap.union p1 p2) (IntMap.union r1 r2) {-|. -} union :: Int -> Int -> IntDisjointSet -> IntDisjointSet union !x !y set = flip execState set $ runMaybeT $ do repx <- MaybeT $ state $ lookup x repy <- MaybeT $ state $ lookup y guard $ repx /= repy (IntDisjointSet p r) <- get let rankx = r IntMap.! repx let ranky = r IntMap.! repy put $! case compare rankx ranky of LT -> let p' = IntMap.insert repx repy p r' = IntMap.delete repx r in p' `seq` r' `seq` IntDisjointSet p' r' GT -> let p' = IntMap.insert repy repx p r' = IntMap.delete repy r in p' `seq` r' `seq` IntDisjointSet p' r' EQ -> let p' = IntMap.insert repx repy p r' = IntMap.delete repx $! IntMap.insert repy (ranky + 1) r in p' `seq` r' `seq` IntDisjointSet p' r' {-| Find the set representative for this input. This performs path compression and so is stateful. Amortized O(logn * \alpha(n)) where \alpha(n) is the extremely slowly growing inverse Ackermann function. -} lookup :: Int -> IntDisjointSet -> (Maybe Int, IntDisjointSet) lookup !x set = case find x set of Nothing -> (Nothing, set) Just rep -> let set' = compress x rep set in set' `seq` (Just rep, set') {-| Return a list of all the elements. -} -- This is stateful for consistency and possible future revisions. elems :: IntDisjointSet -> ([Int], IntDisjointSet) elems = IntMap.keys . parents &&& id {-| Generate an association list of each element and its representative. -} toList :: IntDisjointSet -> ([(Int, Int)], IntDisjointSet) toList set = flip runState set $ do xs <- state elems forM xs $ \x -> do Just rep <- state $ lookup x return (x, rep) {-| Given an association list representing equivalences between elements, generate the corresponding disjoint-set. -} fromList :: [(Int, Int)] -> IntDisjointSet fromList = foldr (\(x, y) -> union x y . insert y . insert x) empty {-| True if both elements belong to the same set. -} equivalent :: Int -> Int -> IntDisjointSet -> (Bool, IntDisjointSet) equivalent !x !y set = first (fromMaybe False) $ flip runState set $ runMaybeT $ do repx <- MaybeT $ state $ lookup x repy <- MaybeT $ state $ lookup y return $! repx == repy {-| Return the number of disjoint sets. O(1). -} disjointSetSize :: IntDisjointSet -> Int disjointSetSize = IntMap.size . ranks {-| Return the number of elements in all disjoint sets. O(1). -} size :: IntDisjointSet -> Int size = IntMap.size . parents {-| Map each member to another Int. The map function must be a bijection, i.e. 1-to-1 mapping. -} map :: (Int -> Int) -> IntDisjointSet -> IntDisjointSet map f (IntDisjointSet p r) = let p' = IntMap.fromList $ List.map (f *** f) $ IntMap.toList p r' = IntMap.fromList $ List.map (first f) $ IntMap.toList r in p' `seq` r' `seq` IntDisjointSet p' r' -- Find the set representative. -- This traverses parents until the parent of y == y and returns y. find :: Int -> IntDisjointSet -> Maybe Int find !x (IntDisjointSet p _) = do x' <- IntMap.lookup x p return $! if x == x' then x' else find' x' where find' y = let y' = p IntMap.! y in if y == y' then y' else find' y' -- Given a start node and its representative, compress -- the path to the root. compress :: Int -> Int -> IntDisjointSet -> IntDisjointSet compress !x !rep set = helper x set where helper !x set@(IntDisjointSet p r) | x == rep = set | otherwise = helper x' set' where x' = p IntMap.! x set' = let p' = IntMap.insert x rep p in p' `seq` IntDisjointSet p' r
http://hackage.haskell.org/package/disjoint-set-0.1/docs/src/Data-IntDisjointSet.html
CC-MAIN-2015-48
refinedweb
766
70.29
Get the user name (this howto is deprecated)Tag(s): DEPRECATED In application : will print the current user. You can't use this technique to secure your application since it is very to spoof. public class Test { public static void main(String args[]) { System.out.println( System.getProperty("user.name") ); } } You just need to specify a "user.name" from the command line. > java -Duser.name=Elvis Test Elvis As an alternative with JDK1.5, public class Test { public static void main(String args[]) { com.sun.security.auth.module.NTSystem NTSystem = new com.sun.security.auth.module.NTSystem(); System.out.println(NTSystem.getName()); System.out.println(NTSystem.getDomain()); } } In Applet there is no way unless you ask for it or use a signed applet. If you have access to a server-side, something like an ASP page can be used to detect the current NT user name if the client and the server are configured correcty (SSO). See this related HowTo for a JSP hack!
https://rgagnon.com/javadetails/java-0048.html
CC-MAIN-2019-26
refinedweb
164
50.94
4 AnswersNew Answer how this works. public class Program { public static void main(String[] args) { int i,j; for(i=1;i<4;i++){ for(j=1;j<2;j++){ System.out.println("v"); System.out.println("8"); } } } } 6/23/2018 7:03:48 PMstephen haokip 4 AnswersNew Answer first loop is execute 3 time for i=1,2,3 and second for loop is execute one time because 1<2 so only one time condition true and loop body executed so for i = 1 the inner loop print v,8 for i = 2 the inner loop print v,8 for i = 3 the inner loop print v,8 so output came as v 8 v 8 v 8 I don’t know exactly what you want, but it’s basically two loops, one inside the other, or nested, printing out v and 8. if the first for statement is true, it moves to the second for statement, which prints the v and 8 out. thank q pal In this code there are two nested for loops. First the inner loop iterates, after the loop ends the outer loop is iterating one time and again the inner loop starts.You can read more here Sololearn Inc.535 Mission Street, Suite 1591
https://www.sololearn.com/Discuss/1364269/i-could-not-figure-it-out/
CC-MAIN-2022-27
refinedweb
211
65.76
[Android] Making a custom Android button using a custom view 13 September, 2008 34 Comments Creating a custom view is as simple as inheriting from the View class and overriding the methods that need to be overridden. In this example, a custom button is implemented in this way. The button shall feature a labelled image (i.e. an image with text underneath). 1 public class CustomImageButton extends View { 2 private final static int WIDTH_PADDING = 8; 3 private final static int HEIGHT_PADDING = 10; 4 private final String label; 5 private final int imageResId; 6 private final Bitmap image; 7 private final InternalListener listenerAdapter = new InternalListener(); 8 The constructor can take in the parameters to set the button image and label. 9 /** 10 * Constructor. 11 * 12 * @param context 13 * Activity context in which the button view is being placed for. 14 * 15 * @param resImage 16 * Image to put on the button. This image should have been placed 17 * in the drawable resources directory. 18 * 19 * @param label 20 * The text label to display for the custom button. 21 */ 22 public CustomImageButton(Context context, int resImage, String label) 23 { 24 super(context); 25 this.label = label; 26 this.imageResId = resImage; 27 this.image = BitmapFactory.decodeResource(context.getResources(), 28 imageResId); 29 30 setFocusable(true); 31 setBackgroundColor(Color.WHITE); 32 33 setOnClickListener(listenerAdapter); 34 setClickable(true); 35 } 36 With the constructor defined, there are a number of methods in the View class that needs to be overridden to this view behave like a button. Firstly, the onFocusChanged gets triggered when the focus moves onto or off the view. In the case of our custom button, we want the button to be “highlighted” when ever the focus is on the button. 37 /** 38 * The method that is called when the focus is changed to or from this 39 * view. 40 */ 41 protected void onFocusChanged(boolean gainFocus, int direction, 42 Rect previouslyFocusedRect) 43 { 44 if (gainFocus == true) 45 { 46 this.setBackgroundColor(Color.rgb(255, 165, 0)); 47 } 48 else 49 { 50 this.setBackgroundColor(Color.WHITE); 51 } 52 } 53 The method responsible for rendering the contents of the view to the screen is the draw method. In this case, it handles placing the image and text label on to the custom view. 54 /** 55 * Method called on to render the view. 56 */ 57 protected void onDraw(Canvas canvas) 58 { 59 Paint textPaint = new Paint(); 60 textPaint.setColor(Color.BLACK); 61 canvas.drawBitmap(image, WIDTH_PADDING / 2, HEIGHT_PADDING / 2, null); 62 canvas.drawText(label, WIDTH_PADDING / 2, (HEIGHT_PADDING / 2) + 63 image.getHeight() + 8, textPaint); 64 } 65 For the elements to be displayed correctly on the screen, Android needs to know the how big the custom view is. This is done through overriding the onMeasure method. The measurement specification parameters represent dimension restrictions that are imposed by the parent view. 66 @Override 67 protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) 68 { 69 setMeasuredDimension(measureWidth(widthMeasureSpec), 70 measureHeight(heightMeasureSpec)); 71 } 72 The call to setMeasuredDimension in the onMeasure method is important. The documentation states that the call is necessary to avoid a IllegalStateException. 73 private int measureWidth(int measureSpec) 74 { 75 int preferred = image.getWidth() * 2; 76 return getMeasurement(measureSpec, preferred); 77 } 78 79 private int measureHeight(int measureSpec) 80 { 81 int preferred = image.getHeight() * 2; 82 return getMeasurement(measureSpec, preferred); 83 } 84 To calculate the width and height measurements, I’ve chosen to keep the logic simple by using a simple formula to calculate the dimensions. This simple formula computes the dimensions based on the dimensions of the image. The measureSpec parameter specifies what restrictions are imposed by the parent layout. 85 private int getMeasurement(int measureSpec, int preferred) 86 { 87 int specSize = MeasureSpec.getSize(measureSpec); 88 int measurement = 0; 89 90 switch(MeasureSpec.getMode(measureSpec)) 91 { 92 case MeasureSpec.EXACTLY: 93 // This means the width of this view has been given. 94 measurement = specSize; 95 break; 96 case MeasureSpec.AT_MOST: 97 // Take the minimum of the preferred size and what 98 // we were told to be. 99 measurement = Math.min(preferred, specSize); 100 break; 101 default: 102 measurement = preferred; 103 break; 104 } 105 106 return measurement; 107 } 108 To make the customised button useful, it needs to trigger some kind of action when it is clicked (i.e. a listener). The view class already defines methods for setting the listener, but a more specialised listener could be better suited to the custom button. For example, the specialised listener could pass back information on the instance of the custom button. 109 /** 110 * Sets the listener object that is triggered when the view is clicked. 111 * 112 * @param newListener 113 * The instance of the listener to trigger. 114 */ 115 public void setOnClickListener(ClickListener newListener) 116 { 117 listenerAdapter.setListener(newListener); 118 } 119 If the custom listener passes information about this instance of the custom button, it may as well have accessors so listener implementation can get useful information about this custom button. 120 /** 121 * Returns the label of the button. 122 */ 123 public String getLabel() 124 { 125 return label; 126 } 127 128 /** 129 * Returns the resource id of the image. 130 */ 131 public int getImageResId() 132 { 133 return imageResId; 134 } 135 Finally, for our custom button class that is using a custom listener, the custom listener class needs to be defined. 136 /** 137 * Internal click listener class. Translates a view’s click listener to 138 * one that is more appropriate for the custom image button class. 139 * 140 * @author Kah 141 */ 142 private class InternalListener implements View.OnClickListener 143 { 144 private ClickListener listener = null; 145 146 /** 147 * Changes the listener to the given listener. 148 * 149 * @param newListener 150 * The listener to change to. 151 */ 152 public void setListener(ClickListener newListener) 153 { 154 listener = newListener; 155 } 156 157 @Override 158 public void onClick(View v) 159 { 160 if (listener != null) 161 { 162 listener.onClick(CustomImageButton.this); 163 } 164 } 165 } 166 } 167 the code from line number 37 to 48 seems to be missing can fill that, please send me the source file if you can. Thanks for pointing that out. I’ve just added the missing lines. My SDK is android 1.0, and eclipse is showing and error with the type ClickListener, is that OnClickListener(line # 144)? Can you please tell me how can I use this component in the layout, I tried the below mentioned code, but its not working. [com.max.testView.view.CustomImageButton android:layout_width=”fill_parent” android:layout_height=”wrap_content” app:label=”@string/hello” /] Sorry for taking more than a week to get back to you Arun. I haven’t tried using XML to place the custom button on. I’ve only done it programatically. If you require an example of how to do this, I’ve zipped up the code that I originally wrote for this little tutorial. You can find it here: home.amnet.net.au/~kgoh/AndroidButton.zip Have a look in the AndroidButton class. Don’t worry about ButtonIntent and AndroidMenuButton – there are just a couple classes that I was playing around with. I have tried the code in the link home.amnet.net.au/~kgoh/AndroidButton.zip I tried to add some additional features. I am stuck at some points. Could you mail me how the text(font type :custom font type) could be changed for the custom button under the image eg painting,etc).? Also, could you mail me about changing the state of the button to show it is pressed for example using the onPress() and show the button has been pressed? Changing the font is as easy as creating a Typeface object and setting the paint to use the font (see Typeface.setTypeface()). I’ll probably write an entry about it when I get the chance to. As for changing the state, you could possibly look at overriding onKeyDown and onKeyUp. Thanks a lot for the reply Can we set the clickable area of a button? Could you please tell me where in the code I could add the clickable area if we can set it at all?. Can we make the custom image button (in your code) according to the text we write (for example : Instead of Painting I want to add “This is a button of a painting “) and the entire text should be displayed on the button below the image. Could you please tell me where and what code should be inserted to extend the button to fit the text? Firstly, if you read the documentation the method onMeasure() is the method that is called to obtain measurements for contents. In my implementation, this gets delegated to measureWidth() and measureHeight(), so you can modify just two functions to calculate their measurements based on the text. At this stage, I’ve also have not looked into how to obtain the pixel width or height of a text label but this technique might help: Secondly, I assume you are asking how to change the text. The custom view actually takes in the text label for the button as an argument to its constructor. If you are talking about the positioning of the text, look at onDraw(). Documentation states: Method called on to render the view. It is also here that I draw the text with the call to canvas.drawText(). I tried doing this but I could not set the clickable area. Could you please send me a code for that and where to insert it? Thanks in advance for your help kahgoh 10 June, 2009 at 8:45 pm. Like I said, I haven’t tried this myself, therefore I haven’t written the code. I’m merely suggesting a possible way of attempting it, but I haven’t tried, tested or verified it myself. However, to provide a quick summary of what I was thinking of. Start by creating your own View.OnTouchListener. This will calculate and determine whether the click should be actioned or not. For calculating whether the click is within the area, the MotionEvent parameters have getX() and getY() to tell you where the click has occured. Because a calculation is being performed, dealing with buttons of any random shape would be more complex. If you are using an image with trasparent background, perhaps you can try calling Bitmap.getPixel() to get the colour at the location to decide whether it is within the clickable area. Obviously, this also implies that the listener needs to know about the bitmap. After creating your listener, you need to set it as the touch listener. This is done by calling the View.setOnTouchListener() method. You can call this method from within the customised view class or after your view is created. Hopefully, this will give you a clearer idea of what I was thinking of. Can the custom button be freely resizable instead of hard coding it ? Meaning, if the Button caption is not one, but two lines high, how does the background adapt? I mentioned in a previous reply that the height and width of the view are determined by the call to onMeasure(), which delegates to measureHeight() and measureWidth() so you can change these methods to account for the text. For more information, see this previous comment. I’ve found a good tutorial here periket2000.blogspot.com The source code link is broken in that tutorial For some reason, the ClickListener class gives me an error always. it appears that this class does not exist anywhere in the android platform nor is it defined in this code file. any suggestions? Hi, I wrote this a fairly long time ago. It is a relatively simple interface, which you can find in this file. It is actually a redundant interface, since the standard View class defines the method setOnClickListener. It pretty much does the same thing. hi..! Hi friends..!! Pls read carefully…Its useful for u people., We can add normal widgets(buttons, etc ) through XML also,means we can add view through XML also like, 1. Give layout parameters here (means view parameters, upto where u want) 2. Here we need to render view class through JAVA(using Canvas) 3. If we do like this, we need to give to constructors in “MyView” class as follows public MyView(Context context) { super(context); } public MyView(Context context, AttributeSet attrs) { super(context, attrs); } 1.Better to give layout parameters 2. Add ur widgets(like buttons, Edit text fields, etc..,) If any doubts let me know pls..actually I faced somany pblms on defining like this…!! How come this man made android view tutorial without made some screenshots for it! GEEK! How do you call this view in the code after you write it? for some reason i can’t manage to get it react Create an instance of the class, set the layout parameters and then add it to your main view. Something like this: As noted by Kiran Kala, you can use the standard components that are already available to create your UI using an XML layout or you can also add them programmatically, but if, for some reason, you need to create your own component, you could do it by creating a custom view. I’m not using the xml files because i can’t get the amount of control over the apperance and stuff like that as i can programmaticlly. I’m not sure I understood correctly where i sould insert this code, in the activity? If this to be placed on the activity’s content view, I would probably put this code with where I need to create the main view for the activity. Pingback: Button mit Doppelklick umbenennen - Android-Hilfe.de Please provide some screenshots. It would be great…. the code formatting plugin you use makes it a pain to copy paste… line numbers always come included how could we make this button class have the Touch events handled by another class? For example, in my MainActivity, I would like to create several buttons, and assign Methods to handle the interactivity. I don’t want to put the handling code inside the Button class. Define the class that you want to handle the touch event to implement the View.OnTouchListener interface. Then create an instance of your listener and use it as the parameter when calling setOnTouchListener. It should look something like this: In the the class that you want to handle the touch event: Where ever the button is defined: Political affiliations, timelines, important landmarks iin the president. Onlpine business is getting tuff, and each day businesses aree facing stiff competition. Beneficial display may be the Samsung LA46A850S1F Encyclopedia an additional vibrant recognize, Check Out The does not only generally be looked aas Tv for pc Check Out The LA46A850S1F equal choose to be considered “Online Encyclopedia”, through the process of Samsung LA46A850S1F the particular capability of our own recognized artwork, working oout prepare dinner, actual physical to applly very well as other functions. Pingback: Функция Android onFocusChanged никогда не вызывала Oh! Android
https://kahdev.wordpress.com/2008/09/13/making-a-custom-android-button-using-a-custom-view/
CC-MAIN-2019-13
refinedweb
2,488
63.9
Amazon Route 53 Programming with the AWS SDK for .NET The AWS SDK for .NET supports Amazon Route 53, which is a Domain Name System (DNS) web service that provides secure and reliable routing to your infrastructure that uses Amazon Web Services (AWS) products, such as Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing, or Amazon Simple Storage Service (Amazon S3). You can also use Route 53 to route users to your infrastructure outside of AWS. This topic describes how to use the AWS SDK for .NET to create an Route 53hosted zone and add a new resource record set to that zone. Note This topic assumes that you are already familiar with how to use Route 53 and have already installed the AWS SDK for .NET. For more information on Route 53, see the Amazon Route 53 Developer Guide. For information on how to install the AWS SDK for .NET, see Getting Started with the AWS SDK for .NET. The basic procedure is as follows. To create a hosted zone and update its record sets Create a hosted zone. Create a change batch that contains one or more record sets, and instructions on what action to take for each set. Submit a change request to the hosted zone that contains the change batch. Monitor the change to verify that it is complete. The example is a simple console application that shows how to use the the SDK to implement this procedure for a basic record set. To run this example In the Visual Studio File menu, click New and then click Project. Select the AWS Empty Project template and specify the project's name and location. Specify the application's default credentials profile and AWS region, which are added to the project's App.configfile. This example assumes that the region is set to US East (Northern Virginia) and the profile is set to default. For more information on profiles, see Configuring AWS Credentials. Open program.csand replace the usingdeclarations and the code in Mainwith the corresponding code from the following example. If you are using your default credentials profile and region, you can compile and run the application as-is. Otherwise, you must provide an appropriate profile and region, as discussed in the notes that follow the example. using System; using System.Collections.Generic; using System.Threading; using Amazon; using Amazon.Route53; using Amazon.Route53.Model; namespace Route53_RecordSet { //Create a hosted zone and add a basic record set to it class recordset { public static void Main(string[] args) { string domainName = ""; //[1] Create an Amazon Route 53 client object var route53Client = new AmazonRoute53Client(); //[2] Create a hosted zone var zoneRequest = new CreateHostedZoneRequest() { Name = domainName, CallerReference = "my_change_request" }; var zoneResponse = route53Client.CreateHostedZone(zoneRequest); //[3] Create a resource record set change batch var recordSet = new ResourceRecordSet() { Name = domainName, TTL = 60, Type = RRType.A, ResourceRecords = new List<ResourceRecord> { new ResourceRecord { Value = "192.0.2.235" } } }; var change1 = new Change() { ResourceRecordSet = recordSet, Action = ChangeAction.CREATE }; var changeBatch = new ChangeBatch() { Changes = new List<Change> { change1 } }; //[4] Update the zone's resource record sets var recordsetRequest = new ChangeResourceRecordSetsRequest() { HostedZoneId = zoneResponse.HostedZone.Id, ChangeBatch = changeBatch }; var recordsetResponse = route53Client.ChangeResourceRecordSets(recordsetRequest); //[5] Monitor the change status var changeRequest = new GetChangeRequest() { Id = recordsetResponse.ChangeInfo.Id }; while (ChangeStatus.PENDING == route53Client.GetChange(changeRequest).ChangeInfo.Status) { Console.WriteLine("Change is pending."); Thread.Sleep(15000); } Console.WriteLine("Change is complete."); Console.ReadKey(); } } } The numbers in the following sections are keyed to the comments in the preceding example. - [1] Create a Client Object The AmazonRoute53Client class supports a set of public methods that you use to invoke Amazon Route 53 actions. You create the client object by instantiating a new instance of the AmazonRoute53Clientclass. There are multiple constructors. The object must have the following information: - An AWS region When you call a client method, the underlying HTTP request is sent to this endpoint. - A credentials profile The profile must grant permissions for the actions that you intend to use—the Route 53 actions in this case. Attempts to call actions that lack permissions will fail. For more information, see Configuring AWS Credentials. The example uses the default constructor to create the object, which implicitly specifies the application's default profile and region. Other constructors allow you to override either or both default values. - [2] Create a hosted zone A hosted zone serves the same purpose as a traditional DNS zone file. It represents a collection of resource record sets that are managed together under a single domain name. To create a hosted zone Create a CreateHostedZoneRequest object and specify following request parameters. There are also two optional parameters that aren't used by this example. Name (Required) The domain name that you want to register, this example. This domain name is intended only for examples and can't be registered with a domain name registrar for an actual site, but you can use it to create a hosted zone for learning purposes. CallerReference (Required) An arbitrary user-defined string that serves as a request ID and can be used to retry failed requests. If you run this application multiple times, you must change the CallerReferencevalue. Pass the CreateHostedZoneRequestobject to the client object's CreateHostedZone method. The method returns a CreateHostedZoneResponse object that contains a variety of information about the request, including the HostedZone.Id property that identifies zone. - [3] Create a resource record set change batch A hosted zone can have multiple resource record sets. Each set specifies how a subset the domain's traffic, such as email requests, should be routed. You can update a zone's resource record sets with a single request. The first step is to package all the updates in a ChangeBatch object. This example specifies only one update, adding a basic resource record set to the zone, but a ChangeBatchobject can contain updates for multiple resource record sets. To create a ChangeBatch object Create a ResourceRecordSet object for each resource record set that you want to update. The group of properties that you specify depends on the type of resource record set. For a complete description of the properties used by the different resource record sets, see Values that You Specify When You Create or Edit Amazon Route 53 Resource Record Sets. The example ResourceRecordSetobject represents a basic resource record set, and specifies the following required properties. Name The domain or subdomain name, this example. TTL The amount of time in seconds that the DNS recursive resolvers should cache information about this resource record set, 60 seconds for this example. Type The DNS record type, Afor this example. For a complete list, see Supported DNS Resource Record Types. ResourceRecords A list of one or more ResourceRecord objects, each of which contains a DNS record value that depends on the DNS record type. For an Arecord type, the record value is an IPv4 address, which for this example is set to a standard example address, 192.0.2.235. Create a Change object for each for each resource record set, and set the following properties. ResourceRecordSet The ResourceRecordSetobject that you created in the previous step. Action The action to be taken for this resource record set: CREATE, DELETE, or UPSERT. For more information on these actions, see Elements. This example creates a new resource record set in the hosted zone, so Actionis set to CREATE. Create a ChangeBatch object and set its Changesproperty to a list of the Changeobjects that you created in the previous step. - [4] Update the zone's resource record sets To update the resource record sets, pass the ChangeBatchobject to the hosted zone, as follows. To update a hosted zone's resource record sets Create a ChangeResourceRecordSetsRequest object with the following property settings. HostedZoneId The hosted zone's ID, which the example sets to the ID that was returned in the CreateHostedZoneResponseobject. To get the ID of an existing hosted zone, call ListHostedZones. ChangeBatch A ChangeBatchobject that contains the updates. Pass the ChangeResourceRecordSetsRequestobject to the client object's ChangeResourceRecordSets method. It returns a ChangeResourceRecordSetsResponse object, which contains a request ID that you can use to monitor the request's progress. - [5] Monitor the update status Resource record set updates typically take a minute or so to propagate through the system. You can monitor the update's progress and verify that it has completed as follows. To monitor update status Create a GetChangeRequest object and set its Idproperty to the request ID that was returned by ChangeResourceRecordSets. Use a wait loop to periodically call the client object's GetChange method. GetChangereturns PENDINGwhile the update is in progress and INSYNCafter the update is complete. You can use the same GetChangeRequestobject for all of the method calls.
https://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/route53-apis-intro.html
CC-MAIN-2018-30
refinedweb
1,440
56.45
-- | -- @ module System.Timer.Updatable (Delay -- * Datatype , Updatable , wait , renew -- * IO wrappers , waitIO , renewIO -- * Builders , parallel , serial , replacer -- * Utility , longThreadDelay ) where import Data.List (unfoldr) import Data.Maybe import Data.Int (Int64) import Control.Concurrent (forkIO, threadDelay) import Control.Monad (when, forever) import Control.Concurrent.STM import Control.Concurrent.Killable -- | A delay in microseconds type Delay = Int64 -- | Abstract timers that can be updated. Hanging via wait function can be done by any number of threads, which is synchronization. data Updatable a = Updatable { wait :: STM (Maybe a), -- ^ wait until the timer rings, or signal Nothing if timer is destroyed renew :: Delay -> STM (), -- ^ update the delay in the timer _kill :: IO () } instance Killable (Updatable a) where kill = _kill -- | Wait in IO waitIO :: Updatable a -> IO (Maybe a) waitIO = atomically . wait -- | Renew in IO renewIO :: Updatable a -> Delay -> IO () renewIO u = atomically . renew u -- wrap the logic with a framework for signalling the time is over engine :: IO () -> (Delay -> IO ()) -> IO a -> Delay -> IO (Updatable a) engine k t w d0 = do deltas <- newTChanIO x <- newEmptyTMVarIO t d0 z <- forkIO . forever $ atomically (readTChan deltas) >>= t p <- forkIO $ w >>= atomically . putTMVar x . Just return $ Updatable (takeTMVar x >>= \r -> putTMVar x r >> return r) (writeTChan deltas) (k >> kill [p,z] >> atomically (putTMVar x Nothing)) -- | Create and start a parallel updatable timer. The "renew" action for this timer will start parallel timers. The last timer -- that is over will compute the given action. parallel :: IO a -- ^ the action to run when timer rings -> Delay -- ^ time to wait -> IO (Updatable a) -- ^ the updatable parallel timer parallel a d0 = do tz <- newTVarIO 0 tp <- newTVarIO [] let t k = do p <- forkIO $ atomically (readTVar tz >>= writeTVar tz . (+1)) >> longThreadDelay k >> atomically (readTVar tz >>= writeTVar tz . (subtract 1)) atomically $ readTVar tp >>= writeTVar tp . (p :) w = do atomically $ do z <- readTVar tz when (z > 0) retry a k = atomically (readTVar tp) >>= kill engine k t w d0 -- | Create and start a serial updatable timer. The "renew" action for this timer will schedule new timer after the running one. -- The timer will run the given action after the sum of all scheduled times is over. serial :: IO a -- ^ the action to run when timer rings -> Delay -- ^ time to wait -> IO (Updatable a) -- ^ the updatable parallel timer serial a d0 = do tz <- newTChanIO let t = atomically . writeTChan tz w = do l <- atomically $ (Just `fmap` readTChan tz) `orElse` return Nothing case l of Nothing -> a Just l -> longThreadDelay l >> w engine (return ()) t w d0 -- | Create and start a replacer updatable timer. The "renew" action for this timer will insert a new timer replacing the running one. -- The timer will run the given action after this time replacer :: IO a -- ^ the action to run when timer rings -> Delay -- ^ time to wait -> IO (Updatable a) -- ^ the updatable parallel timer replacer a d0 = do tz <- newTVarIO [] z <- newEmptyTMVarIO let t k = do atomically (readTVar tz) >>= kill p <- forkIO $ longThreadDelay k >> atomically (putTMVar z ()) atomically $ readTVar tz >>= writeTVar tz . (p:) w = atomically (takeTMVar z) >> a engine (return ()) t w d0 -- |. longThreadDelay :: Delay -> IO () longThreadDelay d = mapM_ (threadDelay . fromIntegral) $ unfoldr f d where f d1 | d1 <= 0 = Nothing | d1 < maxInt = Just (d1, 0) | otherwise = Just (maxInt, d1-maxInt) maxInt = fromIntegral (maxBound :: Int) -- Platform-dependent main = do t <- parallel (return 5) $ 10^7 forkIO $ waitIO t >>= print . (+1) . fromJust forkIO $ waitIO t >>= print . (+2) . fromJust threadDelay $ 5 * 10 ^ 6 renewIO t $ 6 * 10 ^ 6 waitIO t >>= print . fromJust
http://hackage.haskell.org/package/timers-updatable-0.2.0.2/docs/src/System-Timer-Updatable.html
CC-MAIN-2017-17
refinedweb
568
53.41
May 15, 2020 Single Round Match 786 Editorials Hope you’re enjoying online classes from the university, no internet problems, no microphone problems, no open-book exams. The preparation phase of this contest was completely long. Two setters working 24 hours per day for the last three days. Editorialist (me) writing editorial and Misof above all of us. In problem ModCounters setters changed 6 to 512, making the problem harder. I wrote the editorial for this problem again. This contest filled my inbox with a lot of correspondence. Something like one correspondence per hour I received. Anyway, it led to a nice contest, great! I rarely remember such number of correspondence for a contest on Topcoder. A funny point in the contest was, I found the Div. 2 Easy a not-so-easy problem. I wrote to setters and they informed me of the main solution. I was thinking on something like symmetry playing 😀 Coding phase finished with Um_nik leading. tourist, with a little difference, at the second place. With a considerable gap, maroon_kuri at third. DIV2 – EASY CutTheCube At first, there is only one part. Consider the final situation, there are L*H*B parts. Each move splits a part into two parts, so it adds a part. For example, when the cube is 2*2*1, there is only one part. After a lengthwise cut, there are two parts. Then a breadthwise on each of the parts leads to three parts. Then the last cut added another part and finally we have four parts. So we start from 1, each turn the player adds a part, the player who reaches L*H*B wins. So if L*H*B is even, the first player wins, otherwise, the second player wins. public int findWinner(int L, int B, int H) { if(L % 2 == 0 || B % 2 == 0 || H % 2 == 0) return 1; return 2; } DIV2 – MEDIUM SmallestRegular Hint: Move opening brackets to the beginning. The first observation is that the lexicographically smallest regular bracket expression is (((…))). To reach that, each time we move one opening bracket to the beginning. For example: (()()) ---^-- a = 0, b = 2, c = 3 ((())) Another example: ()()()() ----^--- a = 0, b = 3, c = 4 (()())() We don’t need to apply this operation on an opening bracket that doesn’t have a closing bracket before it, because it doesn’t change anything. So we iterate from 0 to N – 1 and if S[i] is an opening bracket and we have seen a closing bracket so far, do an operation in which (0, i – 1, i). This operation moves the opening bracket (in the index i) to the beginning. This way we don’t use more than N/2 operations. public int[] findLexSmallest(String S) { ArrayList<Integer> res = new ArrayList<Integer>(); int n = S.length(); boolean closing = false; for (int indx = 0; indx < n; indx++) { if (S.charAt(indx) == ')') closing = true; else { if (closing) { res.add(0); // a res.add(indx - 1); // b res.add(indx); // c } } } int[] ans = new int[res.size()]; for(int i = 0; i < res.size(); i++) { ans[i] = res.get(i); } return ans; } DIV2 – HARD SuffixDecomposition Hint: Use stack. Start from n – 1 to 0, keep the current decomposition, and add elements one by one and update the decomposition. We keep the current decomposition in a stack call st and from each block (subarray) we just keep its minimum element in the stack. Consider iteration reaches i. There is two cases to consider: - st is empty or st.top > S[i]: We create a new block containing S[i] and update the decomposition. So the current answer increased by one. - Otherwise, we need to remove several blocks. We should remove the blocks in which their minimum is less than or equal to S[i]. We add the merged block with the minimum of the previously last block. In fact, st.top doesn’t change but several elements will be removed. Time complexity is O(n), because each element added once and removed at most once. long long findTotalFun(vector<int> P, int A0, int X, int Y, int B0, int X1, int Y1, int n) { assert(check(P, A0, X, Y, B0, X1, Y1, n)); ll A[n + 5]; A[0] = A0; for (ll i = 1; i <= n - 1; i++) A[i] = (A[i - 1] * X + Y) % 1812447359; ll B[n + 5]; B[0] = B0; for (ll i = 1; i <= n - 1; i++) B[i] = (B[i - 1] * X1 + Y1) % 1812447359; vector<int> S(n); for (ll i = 0; i < P.size(); i++) S[i] = P[i]; for (ll i = P.size(); i <= n - 1; i++) S[i] = max(A[i], B[i]); stack<ll> st; ll res[n + 5]; ll suffMin = inf; for (ll i = n - 1; i >= 0; i--) { while (!st.empty() && S[i] >= st.top()) { st.pop(); } res[i] = st.size() + 1; suffMin = min(suffMin, (ll)S[i]); st.push(suffMin); } ll ans = 0; for (ll i = 0; i < n; i++) { ans += res[i]; } return ans; } DIVI – EASY SwapTheString Hint: Split the string to mod k parts. Count inversions. The first observation is that we can split the string to k parts. Each part contains indexes with a fixed modulo after dividing by k. For each part, we create a new string consists only suitable indexes. Let’s solve the problem for a part like P. We need to see how many swaps are possible. When we’ll stop swapping? We stop swapping when the string (for this part) becomes non-decreasing. So, if there exists i < j such that P[i] > P[j] they will swap sometime. So the task is to count such indexes. This problem is called counting inversions. In the general case, where elements are not alphabets but they are in arbitrary type, it could be solved using divide and conquer in O(n log n). This could be helpful. But in our case – where the elements are alphabets – Iterate from left to right and keep track of the count of each letter so far, so cnt[x] is the number of x’s we have seen so far. Suppose we are at index i, for each letter like a > P[i] add cnt[a] to the answer. Because such elements have been seen before i and their value is greater (remember the condition above). The overall complexity is O(Z * n), where Z is the size of the alphabet, 26. long long findNumberOfSwaps(string P, int A0, int X, int Y, int n, int k) { ll A[n + 5]; A[0] = A0; for (ll i = 1; i <= n - 1; i++) A[i] = (A[i - 1] * X + Y) % 1812447359; string s = P; for (ll i = P.length(); i <= n - 1; i++) s += (char)(A[i] % 26 + 'a'); string arr[k + 5]; for (int i = 0; i < k; i++) arr[i] = ""; for (int i = 0; i < n; i++) { arr[i % k] += s[i]; } ll res = 0; for (ll i = 0; i < k; i++) { ll freq[30]; memset(freq, 0, sizeof(freq)); for (ll j = (ll)arr[i].length() - 1; j >= 0; j--) { for (ll c = (arr[i][j] - 'a') + 1; c <= 26; c++) { res += freq; } freq[arr[i][j] - 'a']++; } } return res; } DIVI – MEDIUM ModCounters Hint: Think on matrix exponentiation. Optimize. Let dp[t][i] = expected number of mod-512 counters which are equal to i after t steps. The transition is straight-forward. Time complexity is 512 * k, so huge. Carefully take a look to transition between dp[t] and dp[t + 1], we can assume it as 512 * 512 matrix. In fact, the transition between i and i + 1 does not depend on i and it’s always the same. So if we apply the transition matrix on dp[i] twice, we get dp[i + 2]. It leads to a simple result. We can apply the transition matrix on dp[0] k times. Nothing changed, we are exceeding the time limit. The key is to raise the transition matrix to the k-th power. It is possible in O(512^3 * log K). Read more here. The overall complexity is O(512^3 * log K + n). Which is not enough to pass. We need to change our transition. For each i, dp[t][i] * 1/n will be added to dp[t + 1][(i+1)%512] and dp[t][i] * (n-1)/n to dp[t + 1][i]. Consider the dp[t] as a polynomial: We can reach dp[t + 1] with multiplying dp[t] in . Note that we should consider the coefficient of x^512 for x^0, in fact, we are doing operations modulo 512. One can use FFT for multiplication but it’s not needed. Now exactly the same as the above solution, we raise the polynomial to the k-th power. The overall complexity is O(512^2 * log K + n). public long[] multiply(long[] a, long[] b) { long ans[] = new long[a.length]; for(int i = 0; i < a.length; ++i) { for(int j = 0; j < a.length; ++j) { ans[(i + j) % a.length] = (ans[(i + j) % a.length] + a[i] * b[j] % mod) % mod; } } return ans; } public long[] fast_poly_pow(long[] a, int b) { if(b == 1) { return a; } long[] val = fast_poly_pow(a, b / 2); if(b % 2 == 0) return multiply(val, val); else return multiply(multiply(val, val), a); } public long fast_pow(long a, long b) { if(b == 0) return 1L; long val = fast_pow(a, b / 2); if(b % 2 == 0) return val * val % mod; else return val * val % mod * a % mod; } public long mod = (long)1e9 + 7; public long solve(int[] a, int k) { int n = a.length; long den = fast_pow(n, mod - 2); long arr[] = new long[512]; arr[0] = (long)(n - 1) * den % mod; arr[1] = den; long result[] = fast_poly_pow(arr, k); long freq[] = new long[512]; for(int i = 0; i < n; ++i) { freq[a[i]]++; } long ans = 0; for(int i = 0; i < 512; ++i) { for(int j = 0; j < 512; ++j) { ans += (long)((i + j) % 512) * freq[i] % mod * result[j] % mod; } } ans %= mod; return ans; } public int[] createArray(int[] arr, long a0, long x, long y, long mod, int n) { int ans[] = new int[n]; long a[] = new long[n]; a[0] = a0; for(int i = 1; i < n; ++i) a[i] = (a[i - 1] * x + y) % mod; for(int i = 0; i < arr.length; ++i) ans[i] = arr[i]; for(int i = arr.length; i < n; ++i) ans[i] = (int)(a[i] % 512); return ans; } public int findExpectedSum(int[] P, int A0, int X, int Y, int N, int K) { P = createArray(P, A0, X, Y, arrayMod, N); return (int)solve(P, K); } public long arrayMod = 1812447359; DIVI – HARD TwoDistance Let’s introduce a data structure which can handle the following queries: - Add an element to the multiset. - Remove an element from the multiset. - Return minimum |x – y| where x and y are elements in the multiset. Note that x and y can be equal when there are two copies of x in the set. We implement this data structure using two self-balancing binary search trees, like AVL tree or red-black tree (simply you can use std::multiset in C++). Let’s name these two sets elems and diffs. In elems, we keep the elements and in diffs, we keep the difference between adjacent elements in elems. So elems.size() = diffs.size() + 1. For example, if elems = {1, 2, 4, 6}, diffs = {1, 2, 2}. If elems = {1, 2, 2, 3}, diffs = {0, 1, 1}. When we want to add an element, say x, consider it’s neighbours in elems are l, r (l <= x <= r), we first remove r – l from diffs, then we add x – l and r – x to diffs. Finally, we add x itself to the elems. Removing process is similar. Let’s name this data structure DS. The answer to the third query is always the minimum element in diffs. Now consider a vertex v. For getting the answer for v, one can add v’s grandchildren, grandpa, and, brothers to the data structure mentioned above and get the answer. This takes too much time. Instead, consider v’s parent, p. We add p’s parent and p’s children to the DS at first. Then for each child like v, we remove v from DS, add v’s grandchildren to DS, get the answer from DS and undo the last two operations. I. e. we add v and remove its grandchildren. This way a vertex added to DS twice and removed twice. Overall time complexity is O(n log n). The code below uses another method to solve the problem. public void addMap(TreeMap<Long, Integer> map, long value) { if(map.get(value) == null) map.put(value, 1); else map.put(value, map.get(value) + 1); } public void remMap(TreeMap<Long, Integer> map, long value) { if(map.get(value) == 1) map.remove(value); else map.put(value, map.get(value) - 1); } public TreeMap<Long, Integer> getGrandChild(int i, int par) { TreeMap<Long, Integer> map = new TreeMap<>(); for(int j : adj[i]) { if(j != par) { for(int k : adj[j]) { if(k != i) { addMap(map, v[k]); } } } } return map; } long getMinDiff(TreeMap<Long, Integer> map) { long prevEle = -1; long curMin = Long.MAX_VALUE; for(long i : map.keySet()) { if(map.get(i) > 1) return 0; if(prevEle != -1) curMin = min(curMin, i - prevEle); prevEle = i; } return curMin; } public void dfs(int i, int par, int grandpar, TreeMap<Long, Integer> map) { TreeMap<Long, Integer> grandchild = getGrandChild(i, par); if(grandpar != -1) addMap(grandchild, v[grandpar]); cans[i] = min(cans[i], getMinDiff(grandchild)); if(par != -1) { for(long x : grandchild.keySet()) { if(map.lowerKey(x + 1) != null) { cans[i] = min(cans[i], x - map.lowerKey(x + 1)); } if(map.higherKey(x - 1) != null) { cans[i] = min(cans[i], map.higherKey(x - 1) - x); } } } TreeMap<Long, Integer> childMap = new TreeMap<>(); for(int j : adj[i]) { if(j != par) { addMap(childMap, v[j]); } } for(int j : adj[i]) { if(j != par) { remMap(childMap, v[j]); dfs(j, i, par, childMap); addMap(childMap, v[j]); } } } public long v[]; public ArrayList<Integer> adj[]; public long[] cans; public long solve(int n) { cans = new long[n]; Arrays.fill(cans, Long.MAX_VALUE); for(int ind = 0; ind < n; ++ind) { int m = adj[ind].size(); Integer indices[] = new Integer[m]; int ptr = 0; for(int j : adj[ind]) { indices[ptr++] = j; // System.out.println(ind + " " + j + " " + v[j]); } Arrays.sort(indices, new Comparator<Integer>() { public int compare(Integer i1, Integer i2) { return (int)(v[i1] - v[i2]); } }); long prefix[] = new long[m]; prefix[0] = Long.MAX_VALUE; long curMin = Long.MAX_VALUE; for(int i = 1; i < m; ++i) { prefix[i] = curMin; curMin = min(curMin, v[indices[i]] - v[indices[i - 1]]); } long suffix[] = new long[m]; suffix[m - 1] = Long.MAX_VALUE; curMin = Long.MAX_VALUE; for(int i = m - 2; i >= 0; --i) { suffix[i] = curMin; curMin = min(curMin, v[indices[i + 1]] - v[indices[i]]); } for(int i = 0; i < m; ++i) { cans[indices[i]] = min(prefix[i], min(suffix[i], cans[indices[i]])); if(i != 0 && i != m - 1) cans[indices[i]] = min(cans[indices[i]], v[indices[i + 1]] - v[indices[i - 1]]); } } dfs(0, -1, -1, null); long ans = 0; for(int i = 0; i < n; ++i) { ans += cans[i] == Long.MAX_VALUE ? 0 : cans[i]; } return ans; } public long findMinValue(int N, int[] edge, int[] val, int D, int seed) { int n = N; long a[] = new long[2 * n]; a[0] = seed; for(int i = 1; i < 2 * n; ++i) { a[i] = (a[i - 1] * 1103515245 + 12345) % 2147483648L; } v = new long[n]; for(int i = 0; i < val.length; ++i) v[i] = val[i]; for(int i = val.length; i < n; ++i) { v[i] = a[i]; } int e[] = new int[n]; for(int i = 0; i < edge.length; ++i) e[i] = edge[i]; for(int i = edge.length; i < n; ++i) e[i] = (int)(a[n + i] % min(i, D)); adj = new ArrayList[n]; for(int i = 0; i < n; ++i) adj[i] = new ArrayList<>(); for(int i = 1; i < n; ++i) { adj[i].add(e[i]); adj[e[i]].add(i); } return solve(n); } a.poorakhavan Guest Blogger
https://www.topcoder.com/single-round-match-786-editorials/
CC-MAIN-2020-34
refinedweb
2,706
74.59
! - Extend the schema for Exchange 2013 - Install Exchange 2013 with the latest CU (you can use the installer from the CU) - Reconfigure the virtual directories with the proper namespace within Exchange 2013 for the Client Access services - Request a new SAN certificate which includes the InternalURL and ExternalURL for your Client Access services - Setup the proper send / receive connector - Configure your databases - Configure your network settings (NAT and firewall settings) - Move a test user to validate all is well - Bulk move users (which is online between 2010 to 2013) Just an FYI, there is a lot more to this than what I explained above. You should properly size the environment using MessageStats.ps1 () and the Get-MailboxReport () reports. These numbers all you to populate the Exchange Role Calculator () which will allow you to be properly sized. The steps above are a VERY high overview of the operational steps. I would write them out, but I would need about 10-30 posts to do it. Paul Cunningham (Exchange MVP) has a nice series on this within his blog: Before, you can plan or do anything in production, it is always to do a test migration in a test lab. You can quickly setup a lab and perform a migration, take all notes etc. Next and one of the major point you need to remember are the new change in Exchange 2013, it is way different from 2010. Even small task configuration Outlook with Exchange 2013 become a night mare for lot of admins due design change in 2013. So read this: Next you need to consider about client end requirement, whether current OS and Office will support Exchange 2013 and what patches are required on client end, before you can use Exchange 2013 completely. My suggestion, first you play with exchange 2013 in lab, explore everything, then plan it, document it and deploy it. One major mistake lot of admins do, install first exchange server on low end hardware or VM and later struggle to remove it. Best, to setup first server on hardware, which you are going to use in production. As production is not meant for testing. Finally, you can find lot of guides which provides all steps, here are few: Hope this helps
https://www.experts-exchange.com/questions/28554370/Exchange-2010-Exchange-2013.html
CC-MAIN-2018-09
refinedweb
375
64.75
Hey guys! I’m fairly new to python (3 hours in tbh) and I’m tryin to write a function that is able to connect to a remote server and then execute those commands. As stated, I’m using ssh2-python. def command_exec(command): print('Executing command: ' + command) channel.write(command + '\n') print('Wrote command!') size, data = channel.read() print(size) print(data.decode()) This code works perfectly… whenever I forward a command that has stdout. For example running: command_exec(‘echo “hey”’) will work perfectly! But, running something like: command_exec(‘cd /usr/local/folder/’) will get stuck… So I’ve figured out the issue: if a command that is forwarded doesn’t return any response (doesn’t have stdout), the function will get stuck. I’ve tried finding a solution, but to no avail. Worth mentioning: I don’t want to set timeout on a command, because there are some commands that I’m gonna execute on the remote server that might take a while, so setting a timeout would be a huge mistake. Is there any solution to this? Thanks!
https://www.sitepoint.com/community/t/command-execution-on-a-remote-server-with-ssh2-python/345831
CC-MAIN-2020-05
refinedweb
181
66.54
Python test performance and measure time elapsed in seconds Three four ways of measuring time in Python: - start = time.time() - timer() - datetime.now() - advanced profiling with cProfile My personal favorite for simple time measurement of method is time. While for complex performance measurement I like to use cProfile which is giving comprehensive information on the price for several lines of code. Measure time by module time First way is by using module time. In this example we are calculating the 42-th Fibonacci number. We measure the time and print it out in ms, seconds and minutes import time def fib(i): if i <= 2: return 1; else: f = fib(i-1) + fib(i-2) return f start = time.time() print(fib(42)) end = time.time() execution_time = end - start print ('--- %0.3fms. --- ' % ( execution_time*1000.)) print("--- %s seconds ---" % (execution_time )) print("--- %s minutes ---" % (execution_time / 60)) result: 267914296 --- 92015.864ms. --- --- 92.01586437225342 seconds --- --- 1.533597739537557 minutes --- Measure time by datetime The same execution of Fibonacci measured by datetime module and returning the time in hh:mm:ss.ms from datetime import datetime def fib(i): if i <= 2: return 1; else: f = fib(i-1) + fib(i-2) return f start_time = datetime.now() print(fib(42)) time_elapsed = datetime.now() - start_time print('Time elapsed (hh:mm:ss.ms) {}'.format(time_elapsed)) result: 267914296 Time elapsed (hh:mm:ss.ms) 0:01:32.943911 Measure time by datetime Last example uses timeit and default_timer to test record the elapsed time. from timeit import default_timer as timer def fib(i): if i <= 2: return 1; else: f = fib(i-1) + fib(i-2) return f start = timer() print(fib(42)) end = timer() print("--- %s seconds ---", (end - start)/1.) result: 267914296 --- %s seconds --- 93.60394597499999 Advanced profiling with cProfile If you need to do more complex and better time measurement then you can use the module which offer Profiling - cProfile. It's mature and offers good results with few lines of code. Below we are measuring 3 methods: - fib - foo - bar As you can see from the result you have total time and separate time for each of this methods. Another advantage of the method is the information returned like: - total calls of the method - time spent total and per method - line number of the method You can see the example below testing Fibonacci with two additional functions: import cProfile def foo(): s = 0 for x in range(0, 1000): s +=x def bar(): s = 0 for x in range(0, 3000): s +=x def fib(i): foo() bar() if i <= 2: return 1; else: f = fib(i-1) + fib(i-2) return f cProfile.run('fib(20)') result: 40590 function calls (27062 primitive calls) in 2.373 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 2.373 2.373 <string>:1(<module>) 13529/1 0.008 0.000 2.373 2.373 Profiling.py:14(fib) 13529 0.578 0.000 0.578 0.000 Profiling.py:4(foo) 13529 1.787 0.000 1.787 0.000 Profiling.py:9(bar) 1 0.000 0.000 2.373 2.373 {built-in method builtins.exec} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} What to use As you can see the results for this 3 options are very close. The result depends on OS, running code, parallel execution. In order to get optimal results you need to do some test for your requirements. All these tests were done on Linux.
https://blog.softhints.com/python-test-performance-and-measure-time-elapsed-in-seconds/
CC-MAIN-2019-43
refinedweb
591
66.13
casin, casinf, casinl − complex arc sine #include <complex.h> double complex casin(double complex z); float complex casinf(float complex z); long double complex casinl(long double complex z); Link with −lm. The casin() function calculates the complex arc sine of z. If y = casin(z), then z = csin(y). The real part of y is chosen in the interval [−pi/2,pi/2]. One has: casin(z) = −i clog(iz + csqrt(1 − z * z)) These functions first appeared in glibc in version 2.1. For an explanation of the terms used in this section, see attributes(7). C99. clog(3), csin(3), complex(7) This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
http://man.linuxtool.net/centos7/u2/man/3_casinl.html
CC-MAIN-2021-25
refinedweb
134
76.82
Work at SourceForge, help us to make it a better place! We have an immediate need for a Support Technician in our San Francisco or Denver" It's already at the top of internal.h which is the first #include there... -Evan On Apr 2, 2007, at 6:53 PM, Gabriel Schulhof" > > > > > ---------------------------------------------------------------------- > --- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to > share your > opinions on IT & business topics through brief surveys-and earn cash >? > page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Gaim-devel mailing list > Gaim-devel@... > > On Mon, 2007-04-02 at 19:08 -0400, Evan Schoenberg wrote: > It's already at the top of internal.h which is the first #include > there... Truth. And, sure enough, using a #ifdef based on a config.h directive does work after including "internal.h" :o) In that case, I'm hoping you can help me with "the real" problem. For some reason, in the environment I'm using, the files gtkutil.c and gtkimhtml.c do not compile because of implicit declarations of strncasecmp. For example: gtkimhtml.c:2217: error: implicit declaration of function `strncasecmp' gtkimhtml.c:2217: warning: nested extern declaration of `strncasecmp' Now, I'm not sure how to properly include strings.h. Is it #ifndef _WIN32 #ifdef HAVE_STRINGS_H #include <strings.h> #endif /* def HAVE_STRINGS_H */ #endif /* ndef _WIN32 */ If so, and if this does not interfere with other builds, it is really /this/ that I would need at the top of both gtkutils.c and gtkimhtml.c . Currently I am making inclusion of <strings.h> conditional on a directive introduced by a downstream patch (which resides in config.h, which brought my original question :o) ). However, if there are build environments where <strings.h> does not "reach" these two files though it should, perhaps this is a candidate for an upstream fix. Please let me know, Gabriel
http://sourceforge.net/p/pidgin/mailman/pidgin-devel/thread/1175557449.13192.72.camel@localhost.localdomain/
CC-MAIN-2014-35
refinedweb
322
69.28
current position:Home>Run through Python date and time processing (Part 2) Run through Python date and time processing (Part 2) 2022-01-31 11:26:01 【Lei Xuewei】 「 This is my participation 11 The fourth of the yuegengwen challenge 9 God , Check out the activity details :2021 One last more challenge 」 ceremonial Python Column No 33 piece , Classmate, stop , Don't miss this from 0 The beginning of the article ! In the previous article, we learned a little Python The acquisition of time , This time continue to learn the time zone conversion of dates , Format and so on . What other date operations are commonly used in development ? - Time zone conversion display - Date formatting - Number of seconds And date And String conversion We often use , For example, global business shows different times according to different customers ( Format, etc. ) stay python The following two modules cover common date processing import time import calender Copy code Let's look at these two modules . Type conversion in time processing :struct_time vs str Python Create a time in , Specifically, create a struct_time Need one 9 Tuples of elements to construct . asctime The function helps us format this type of time as a string . #!) the9fields = (2021, 11, 10, 22, 55, 11, 16, 16, 16) fixed = time.struct_time(the9fields) print("fixed time:", fixed) print("type:", type(fixed)) result = time.asctime(the9fields) # similar struct_time, need 9 Tuple parameters composed of elements . print("asc time:", result) print("type:", type(result)) localtime = time.localtime() print("local time:", localtime) print("type:", type(localtime)) print("asc time:", time.asctime(localtime)) Copy code The operation effect is as follows : This ticks It's from 0 Calculate all the time , Cumulative seconds to date . You can run this program every second , Every time ticks Value plus 1( The approximate ) Specify the input to construct the time : #!) fixed = time.struct_time((2021, 11, 10, 22, 55, 11, 16, 16, 16)) print("fixed time:", fixed) Copy code The operation effect is as follows : Time and string conversion #!2.py # @Project : hello import time sec = 3600 # An hour after the beginning of the era (GMT 19700101 In the morning ) # gmtime = time.gmtime(sec) print("gmtime:", gmtime) # GMT print("type:", type(gmtime)) print(time.strftime("%b %d %Y %H:%M:%S", gmtime)) print(time.strftime("%Y-%m-%d %H:%M:%S %Z", gmtime)) # Print date plus time zone print("*" * 16) localtime = time.localtime(sec) print("localtime:", localtime) # Local time print("type:", type(localtime)) print(time.strftime("%b %d %Y %H:%M:%S", localtime)) print(time.strftime("%Y-%m-%d %H:%M:%S %Z", localtime)) # Print date plus time zone # Try another format print(time.strftime("%D", localtime)) print(time.strftime("%T", localtime)) Copy code Here are the results : For time formatting functions (strftime) It doesn't care about the time you pass in (struct_time) What time zone is it , Still output to you , It's also true . But when we write programs to get data , The time zone information must be returned to the client as it is , Or is it UI End , Finally, the display is adjusted by the local time zone setting of the client . summary Python Date processing is still quite sufficient , Practice more .
https://en.pythonmana.com/2022/01/202201311126003636.html
CC-MAIN-2022-27
refinedweb
529
52.19
Introduction Guest Author This is a guest post by Daniel Koestler, an Adobe applications developer. This post will explain how to connect your Flash, Flex, and AIR apps to Photoshop using the Photoshop Touch SDK. The author created the Photoshop Touch SDK for AS3 with help from Renaun Erickson, an Adobe developer evangelist. This part of the SDK is a SWC distributed in the freely available download. This article will tell you how to create a new project, connect to Photoshop, and send simple commands back and forth. There are additional resources at the end of the article, which will guide you through more advanced steps. What is the Photoshop Touch SDK? The Photoshop Touch SDK is a collection of APIs that allow virtually any device to connect to and control Photoshop, using any Internet or WiFi connection. For the first time, you can interface with Photoshop directly, and use this to create mobile, desktop, or web applications that are tailored to the needs of creative professionals or casual-creative users. The Photoshop Touch SDK is available for free from Adobe, and works with Photoshop CS5 12.0.4 and above. It also includes a SWC library, which contain the APIs that this article covers. This SWC library, called the Photoshop Touch SDK for AS3, allows you to write very simple ActionScript 3 code in any Flash, AIR, or Flex application, and saves you from doing tedious socket-level work. As you’ll hopefully discover, these AS3 APIs are flexible and easy-to-use, and will allow you to leverage the portability of Flash, versatility of Flex, and power of ActionScript 3 to help you realize your vision for designing creative apps. Sample Code As you follow along, you may want to refer to the sample code, which contains a project that’s been created by following this blog post. See the Additional Resources section for information about an upcoming ADC article, which will also cover more advanced topics. Table of Contents: Introduction Requirements Creating a Project Connecting to Photoshop Sending Commands to Photoshop Summary Additional Resources) () Creating a Project Step 1: Create a new Flex Mobile. Connecting to Photoshop Overview: - Create a new PhotoshopConnectionand listen for events - Call connect()on your instance of the PhotoshopConnection - After a successful connection, initialize encryption using either initEncryption()or (if you’ve saved the user’s key with getKey()), initEncryptionFromKey() After these steps have been completed, you may send and receive data with Photoshop. As we code these features into the mobile application, we’ll create data structures that will allow you to easily add functionality later in this article. Step 1: Create a singleton Model, to establish a MVC design Our application needs to create a PhotoshopConnection, but we want to store it in a location where it can be conveniently accessed by various parts of our UI (the View in the Model-View-Controller design pattern). Thus, we’ll create a Model in which to store Object references, constants, and variables. - Right click your project in Flash Builder, and choose New ActionScript Class - Enter the string “ model” as the package - Name the Class “ Model“ - Click Finish We now need to add a static variable to this class, a function called getInstance() which returns that variable, and, finally, a Bindable, public variable that will store our PhotoshopConnection. Enter the following code inside of public class Model { ... } private static var _inst:Model; [Bindable] public var photoshopConn:PhotoshopConnection; public function Model() { } public static function getInstance():Model { if ( !_inst ) { _inst = new Model(); } return _inst; } We can now reference the variable photoshopConn from either AS3 or Flex code, simply by calling Model.getInstance() and referencing photoshopConn. I.e., Model.getInstance().photoshopConn. Step 2: Instantiate the PhotoshopConnection and listen for events We’ll instantiate the PhotoshopConnection the first time the user attempts to connect, but it would be a good idea to create initialization code in your own applications, to handle things like reading the hostname and password from disk, managing the user’s key and preferences, etc. Open your views/LoginView.mxml file.> </fx:Script> You’ll see that we’ve created two TextInput components and a Button, as well as an fx:Script tag that will contain some click-handler logic. When this button is pressed,> We’ve created a function called createNewConnection(), which is called if the photoshopConn variable is null., we now have to create the functions onConnected, onEncrypted, and onError, in which, we’re ready to try and connect to Photoshop. It’s always a good idea to remove event listeners when you’re not using them, however, so create a function called cleanUp(), and remove each of those three event listeners from the photoshopConn instance. We’ll call this function once we’re ready to switch Views in the application (after successfully encrypting the connection). Step 3: Call connect(): </p.. Step 4: Initialize encryption, and move on to the next View. As the docs indicate, a successful connection will cause PhotoshopConnection to dispatch a PhotoshopEvent.CONNECTED event. Since we’re listening for this event, our function onConnected will be called. It’s here that, our code will enter the onEncrypted() event handler. At that point, we’re ready to send data to and from Photoshop. To prepare for this step: - Right click your project and select New MXML Component - Put it in the package “ views,” and name it “ HomeView“ - Click “Finish” Now, we just have to push a HomeView onto the ViewNavigator: private function onEncrypted(pe:PhotoshopEvent):void { trace("Encryption was successful. Cleaning up event listeners."); this.cleanUp(); trace("Proceeding to the 'Home' View."); this.navigator.pushView(HomeView); } We’ve also cleaned up the event listeners, which you should do wherever possible to prevent memory leaks. You should now test your project. In the next section, we’ll send some simple commands to Photoshop. Sending Commands to Photoshop At this point in your application, you’ve used the Photoshop Touch SDK to establish an encrypted connection to Photoshop. You’re ready to send and receive data. With a single function call, you can push raw bytes to Photoshop, and—if you wish we’re just beginning, we will indeed use the simplest of these method calls. We’ll create an s:Button in our HomeView that tells Photoshop to create a new document. Photoshop will respond with an id referencing the document. Step 1: Create a MessageDispatcher instance Before we can use the MessageDispatcher, we have to create a new instance of it, and give it a reference to our existing PhotoshopConnection (this allows the MessageDispatcher to use the connection that we initialized in the previous section). We’ll store this instance in the Model, just like we do with the photoshopConn variable. Thus, in your Model.mxml file, add the following: [Bindable] public var messageDisp:MessageDispatcher; The Bindable property allows us to use this variable in Flex and/or attach our own ChangeWatchers, should the need arise.. We’re now ready to use this Object. Step 2: Listen for Photoshop’s Response(s) We could send the command at this point, but, should an error occur, our application would never hear about it. Thus, it’s necessary to attach some event listeners to the PhotoshopConnection. There are three events that a number of other useful events, such as MessageSentEvent, ProgressEvent, and ImageReceivedEvent, but. Step 3: Send a Message to Photoshop, we’ll tell the MessageDispatcher to dispatch a Message to Photoshop. Model.getInstance().messageDisp.createNewDocument(); Pay particular attention to the default parameters in that function call. As the ASDocs say: we’re creating a relatively simply application, we don’t need the added flexibility that comes with managing our own transaction IDs. we no longer need. Summary At this point you’ve been shown how to: create a project; link to the Photoshop Touch SDK libraries; set up a Model-View architecture for managing the Photoshop objects; connect to Photoshop and manage encryption; and send messages while listening for responses. There are still a number of tasks that you may want your application to perform, and the SDK can help you with these. For example, you can use the SDK to: - Listen for foreground and background color changes - Listen for tool change events - Be notified when the user modifies a document - Change the brush size, the currently selected tool, or the document’s properties - Send other, custom commands These tasks are made possible by using the SubscriptionManager, TransactionManager and Photoshop’s ScriptListener plug-in. Please see Daniel Koestler’s ADC article and blog to learn about these tasks, and to get tutorials and sample code that’ll help you take your applications further. Additional Resources An ADC article covering the content of this blog post (as well as more advanced topics) will be available next week. Please check Daniel Koestler’s blog, where he’ll post the article as soon as it’s available. You may also want to follow him on Twitter: @antiChipotle. Update 6/16/2011: The ADC Article is now published. That article contains some additional information about using the Photoshop Touch SDK. You may want to download the sample code, which contains a project that has been created following the above steps. The ADC article contains code that demonstrates the SubscriptionManager, custom messages, and other, more advanced tasks. Very useful article… Thanks Question: How can I discover available Photoshop connections? Rather than providing an IP address to connect to that computer, I want to see all the available Photoshop connections to connect with. Like you guys have done with these three apps (Eazel, Color Lava, Nav). Sounds cool but i got this. [IOErrorEvent type=”ioError” bubbles=false cancelable=false eventPhase=2 text=”Error #2031: Socket Error. URL: xxx.xxx.x.xx” errorID=2031] (i am using the photoshop CS5 12.1 trial) At what point do you get that event? When you attempt to connect, or when you attempt to send a command? login attempt *** private function onError(pe:PhotoshopEvent):void { trace( pe.data ); trace(“There was an error while connecting!”); } *** [SWF] ADCTutorial.swf – 3 354 931 octets après la décompression [IOErrorEvent type=”ioError” bubbles=false cancelable=false eventPhase=2 text=”Error #2031: Socket Error. URL: xxx.xx.x.xx” errorID=2031] There was an error while connecting! *** There could be a number of things causing that IOErrorEvent, so we need more information. The Photoshop Touch SDK uses the StructuredLogTestingSDK for its internal logging. We can turn it on and see some information that’ll help you debug. You first have to set up a trace target, to have the SDK log via trace statements: var traceTarget:TraceTarget = new TraceTarget(); traceTarget.includeCategory = true; traceTarget.includeDate = true; traceTarget.includeLevel = true; traceTarget.includeTime = true; Then call two static functions on SLog; the first function will add the trace target, and the second will test the logger’s ability to output: SLog.addTarget(traceTarget); SLog.debug(this,"Logging initialized."); You may have to link to StructureLogTestingSDK.swc. You can get it here: If that link stops working, its homepage is: Pingback: John Nack on Adobe : News for Suite developers I’m always getting error 2031 when trying to connect to any socket from Flex Mobile projects, I tried to serve crossdomain.xml, security policy files – no luck. Sockets are broken in Flex Mobile, I gave up. Hey John, Would you be able to provide me with a project where you keep getting #2031? crossdomain.xml and policy files shouldn’t be necessary, though it’s possible you’re trying to connect in a way I didn’t anticipate. (Did you try the usual things, such as switching networks, disabling firewalls, manually verifying the IP, etc?) -Dan The reason for the error #2031 is the lack of enabling Photoshop to accept incoming credentials and failing to set a password. To correct this issue, you must edit your Photoshop settings. To allow remote connections: Open Photoshop. Choose the Edit menu. Choose the “Remote Connections…” option. You may leave the Service Name as is, or you can specify a different one. Enter a password, at least 6 characters in length. Check the box to “Enable Remote Connections”. Click OK. Now when you run the AIR app from this tutorial, your app will be able to connect to Photoshop, properly authenticate, and communicate. Also, be sure to enter your correct password in the Login window in this app when you run it. Alternatively, you can also edit the source to pre-populate your password by changing the text value of the text input field. Hello, I want to know how to send a Image to the Photoshop? Because I had send an Image to the Photoshop, but I don’t know how to let the Photoshop to open this Image.I don’t know Where can I find this Image. If you can give me an example, it’s best! Thank you! If you have Photoshop installed on your system, you can use File > Open from within Photoshop and navigate to your image files to open them. If you’re on a Mac, then you can also drag an image to the Photoshop icon on your Dock On Windows 7, you can rightclick an image and choose “Open With > Adobe Photoshop CS6″ Hi. One of my computers is not showing the IPv4 address inside Photoshop (office network). Any reason for that? I can’t connect with a prototype I’m working on. Another question, there is any way to get a list of a discovered connections? You know, some applications like Acquire do this. Thanks! Hi Daniel, We’d need more info like OS/platform. Does Photoshop connect to the network correctly if you choose Help>Photoshop Support Center… from Photoshop? If you haven’t already, I’d post more details on the SDK/companion app forum: Hi Jeffrey, Thanks for your reply! Yes, it is connecting correctly with the network. And apps like ‘Live View’ or ‘Acquire’ are connecting correctly with it. That’s why I was asking about ‘discovered connections’. I’m running PS 13.1.2 on OSX 10.7.5 I will check the forum
http://blogs.adobe.com/crawlspace/2011/05/connecting-to-photoshop-with-flash-flex-and-air-2.html
CC-MAIN-2015-22
refinedweb
2,356
54.42
Is this a bug? import torch t = torch.HalfTensor([0])t = torch.autograd.Variable(t) This code causes these errors, Traceback (most recent call last): File "a.py", line 4, in t = torch.autograd.Variable(t)RuntimeError: Variable data has to be a tensor, but got HalfTensor Hi,CPU half tensors do not actually exist.Using cuda HalfTensors work as expected: import torch t = torch.cuda.HalfTensor([0]) t = torch.autograd.Variable(t) Thank your for reply. Are you intend to implement cpu HalfTensor? I think It is desired torch.HalfTensor is deleted or sends warnings until it... This is very unexpected behaviour.
https://discuss.pytorch.org/t/variable-failed-to-wrap-halftensor/3220
CC-MAIN-2017-47
refinedweb
103
56.01
strxfrm - string transformation #include <string.h> size_t strxfrm(char *s1, const char *s2, size_t n); The strxfrm() function transforms the string pointed to by s2 and places the resulting string into the array pointed to by s1. The transformation is such that if strcmp() is applied to two transformed strings, it returns behaviour is undefined. The strxfrm() function will not change the setting of errno if successful. Because no return value is reserved to indicate an error, an application wishing to check for error situations should set errno to 0, then call strcoll(), an error. The strxfrm() function may fail if: - [EINVAL] - The string pointed to by the s2 argument contains characters outside the domain of the collating sequence. None. The transformation function is such that two transformed strings can be ordered by strcmp() as appropriate to collating sequence information in the program's locale (category LC_COLLATE). The fact that when n is 0, s1 is permitted to be a null pointer, is useful to determine the size of the s1 array prior to making the transformation. None. strcmp(), strcoll(), <string.h>. Derived from the ISO C standard.
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/strxfrm.html
CC-MAIN-2015-18
refinedweb
188
61.26
Making a stand alone program on Linux I recently started using QT. I have version 5.6 installed on Linux Mint 17.3. I wrote a C program that just uses a simple terminal interface that crunches some numbers while a simple printf() displays status. The program runs fine while in the QT IDE. However, the compiled program does not execute when I click on it from the directory folder, nor if i type its name when in terminal and in its directory. What am I missing here? Does QT IDE have to be running to run the program? I want to simply run the program by clicking on it, or enter terminal and type its name, or from a system() call from another program. What do I need to set or change to make this work? - kshegunov Qt Champions 2016 What am I missing here? You are probably missing the whole loader conundrum, but what errors you get from the app when you try to run it in the terminal? Is it "ld can't find whateverlibrary"? - If so, you either install that library in a location known to the loader (also called dynamic linker on Linux). This is usually /libor /usr/liband you usually do that through your distro's repository. - Or, you set up the path from the command line LD_LIBRARY_PATH=/path/to/library/ld/cant/find:$LD_LIBRARY_PATH ./myexecutable. - Or, you set the rpathfield in the ELF header of he application. My advice is go with 1, and if it's impossible then switch to 3. Number 2 is mostly discouraged. Also, please, do search the forums, I remember answering a very similar question yesterday, or the day before. Does QT IDE have to be running to run the program? Nope. This is all a Linux specific issue. Kind regards. Hi, just to add to @kshegunov, if your app runs fine when launched from inside Qt Creator, but not outside of it, most likely it's Qt's dlls/.so files that are not seen find by the app. When you start your app from Qt Creator, it injects a LD_LIBRARY_PATH=/home/yourname/Qt/5.6/gcc_64/libinto your app's environment (@kshegunov's no.2 above) so that your app will find Qt's .so files and start ok. But it should run fine anyway, because when building your program, Qt inserts the same path ( /home/yourname/Qt/5.6/gcc_64/lib) into your program's rpath so it should start also outside of Qt Creator. Perhaps check that rpath with the chrpath utility? the chrpath utility show RPATH=$ORIGIN It looks like the problem is not with my program, but with it being a console program. In the QT IDE when I run, a terminal window opens and it works correctly. A text menu shows from using printf() and I choose an option what to do that gets its input from a scanf(); Standard C stuff. Outside of QT, my program is launching, but the error I get is program not found. I though that meant my program, but it appears it cannot find the terminal or command shell program. My program writes a text log file for any errors encountered. I wrote myself a message and it created the file so my program is executing before it exits at the first printf(). I am not sure what program QT is trying to open to create a console. The IDE seems to be opening a console that is part of QT instead of the Linux one. The same program compiled and run on my WIndows machine opens a command shell when run automatically. I tried executing the Linux version from the command line while in terminal as root. My program runs, creates and writes my test message file then exits at the first printf() because it cannot open whatever it is that QT opens when run from the IDE. I see, a console program. When you start a console program from inside Qt Creator, it starts a child process called qt_creator_process_stub which then runs the usual /bin/sh shell and your program with it. So it should really behave the same way as when you start your app from Terminal. Funny though, I remember having also problems with printf(), not that my console program crashed, rather that there was no output (I just now created a simple console test program with a printf("Hello world"); to see if it crashes, but of course it runs just fine, both from Qt Creator and from Terminal). Anyway, I remember getting around that printf() problem by #including <QTextStream> and setting up a QTextStream instance to stdout, putting out text this way: QTextStream cout(stdout); cout << "Hello world" << endl; Maybe it'll work you too. P.S. Funny about your RPATH just having $ORIGIN and not also :/home/yourname/Qt/5.6/gcc/lib sure you're not doing anything fancy in your .pro file, like QMAKE_LFLAGS or such? I am using a fresh install of QT using default everything. I merely copied my C source (not C++) over to my Linux machine and compiled. The program is POSIX compliant code. Maybe I need to include a QT special header to get the console to come up. Sure, it could be some missing special sauce/setting you need. You could try creating a simple "Hello World" Qt Console Application using Qt Creator's New Project wizard. If printf() works in that app, then try copy/paste your C code into it: #include <QCoreApplication> int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); // your C code goes here printf("Hello Qt\n"); // last line return a.exec(); } - kshegunov Qt Champions 2016 @Alien51 Please provide the link line from your build, the command line with which you are trying to start your program and the exact error you're getting. Otherwise we are just forced to guess.
https://forum.qt.io/topic/67250/making-a-stand-alone-program-on-linux
CC-MAIN-2018-13
refinedweb
987
72.66
JShell: The Java Shell and the Read-Eval-Print Loop With Java 9 (hopefully) near, let's take a quick look at what you can do with the JShell command line tool and how it incorporates a more functional approach to Java. Join the DZone community and get the full member experience.Join For Free Let's talk about JShell. We can explore it with the JDK 9 Early Access Release. As of now, the general availability of JDK9 is scheduled for 27 July, 2017, and the JShell feature was proposed as part of JEP 222. The motivation behind it is to provide interactive command line tools to quickly explore the features of Java. From what I've seen, it is a very useful tool to get a glimpse of Java features very quickly, which is especially useful for new learners. Already, Java is incorporating functional programming features from Scala. Consider the move to a REPL (Read-Eval-Print Loop) interactive shell for Java, just like Scala, Ruby, JavaScript, Haskell, Clojure, and Python. JShell will be a command-line tool with features like a history of statements with editing, tab completion, automatic addition of needed terminal semicolons, and configurable predefined imports. After downloading JDK 9, set the PATH variable to access JShell. Below is a simple program using made using JShell. See how we don't need to write a class with the public static void main(String[] args) method to run a simple hello world application? C:\Users\ABC>jshell | Welcome to JShell -- Version 9-ea | For an introduction type: /help intro jshell> System.out.println("Say hello to jshell!!!"); Say hello to jshell!!! jshell> Now we will write a method that will add two variables and invoke a method via JShell. jshell> public class Sample { ...> public int add(int a, int b) { ...> return a+b; ...> } ...> } | created class Sample jshell> Sample s = new Sample(); s ==> Sample@49993335 jshell> s.add(10,9); $4 ==> 19 jshell> Now, we will create a static method with StringBuilder class without importing it, as jshell does that for you. jshell> public class Sample { ...> public static void join() { ...> StringBuilder sb = new StringBuilder(); ...> sb.append("Smart").append(" ").append("Techie"); ...> System.out.println("The string is " + sb.toString()); ...> } ...> } | created class Sample jshell> Sample.join(); The string is Smart Techie jshell> I hope you've enjoyed this glance at JShell. In my next article, we will see another JDK 9 feature. Until then, stay tuned! Published at DZone with permission of Siva Prasad Rao Janapati, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/jshell-the-java-shell-read-eval-print-loop
CC-MAIN-2022-27
refinedweb
432
67.35
Robin Holt wrote:> On Wed, Apr 26, 2006 at 11:12:48AM +0200, Jes Sorensen wrote:>>> - if (status)>>> - printk(KERN_WARNING "smp_call_function failed for ">>> - "uncached_ipi_mc_drain! (%i)\n", status);>>> + (void) smp_call_function(uncached_ipi_mc_drain, NULL, 0, 1);>> This thing could in theory fail so having the error check there seems>> the right thing to me. In either case, please don't (void) the function>> return (this is a style issue, I know).> > I must be blind. Both up and smp cases for smp_call_function appear to> always return 0. What am I missing?Not on all architectures, at least PPC can return != 0 - dunno if thisis a realistic case though. If not, maybe the prototype forsmp_call_function() ought to be changed.Cheers,Jes-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2006/4/26/67
CC-MAIN-2018-43
refinedweb
145
59.4
Elasticsearch in Go: A Developer's Guide Elasticsearch is a popular datastore for all types of information. It is distributed for speed and scalability and can index many types of content which makes it highly searchable. It uses simple REST APIs for ease of access. Go has an official Elasticsearch library which makes it simple for Go developers to work with data stored in Elasticsearch programmatically. Today we’re going to take a look at how you can easily build a simple app that allows data to be added and searched in Elasticsearch using Go. Let’s get started! PS: The code for this project can be found on GitHub Prerequisites to Writing an Elasticsearch Application in Go First things first, if you haven’t already got Go installed on your computer you will need to Download and install - The Go Programming Language. A Go workspace is required. This is a directory in which all Go libraries live. It is usually ~/go, but can be any directory as long as the environment variable GOPATH points to it. Next, create a directory where all our future code will live. mkdir go-elasticsearch-example cd go-elasticsearch-example Then, make the directory a Go module and install the Go Elasticsearch library. go mod init go-elasticsearch-example go get github.com/elastic/go-elasticsearch/v8 A file called go.mod should have been created containing the dependency that you installed with go get. After this, we need to install Elasticsearch. A convenient way of doing this is to use a Docker image containing an already configured Elasticsearch. If you haven’t already got Docker on your machine Install Docker Engine. We then need to pull an Elasticsearch Docker image. This will take some time to download. docker pull docker.elastic.co/elasticsearch/elasticsearch:7.5.2 Now we need to create a Docker volume so that Elasticsearch data doesn’t get lost when a container exits: docker volume create elasticsearch The Docker command line to run an Elasticsearch container is quite long, so we will create a script called run-elastic.sh to run the Docker command for us: #! /bin/bash docker rm -f elasticsearch docker run -d --name elasticsearch -p 9200:9200 -e discovery.type=single-node \ -v elasticsearch:/usr/share/elasticsearch/data \ docker.elastic.co/elasticsearch/elasticsearch:7.5.2 docker ps The script needs to be made executable and then run. chmod +x run-elastic.sh ./run-elastic.sh Finally, verify that Elasticsearch is running: curl You should see a JSON object containing details of the server. How to Find and Understand How to Handle Data in Go We need a fairly large set of data to load into Elasticsearch. This web site STAPI, a Star Trek API contains huge amounts of data from the Star Trek universe. We will use the spacecraft data as our dataset for this application. It is always a good idea to know what the data looks like. Enter the URL into a web browser. You should see a JSON object containing page information and a list of spacecraft information. There are over 1200 spacecraft in total. PS: Bookmark this link or keep the page open for future reference. A brief introduction to data types in Go If you’re new to Go, this section covers topics that will be helpful to understand before you move forward. If you’re already familiar with Go, you can skip ahead to the next section. The STAPI site and the results of Elasticsearch searches are sent to clients as JSON objects. The Go APIs receive JSON objects like maps and lists.. Go allows maps and lists to have values of any type by declaring the type as an interface: var vessels []interface{} var craft map[string]interface{} This leads to another issue. It is impossible for the compiler to determine what the actual type of the value is. This can only be determined at runtime, making it important to know what the data structure is. If you know the type, you can use a type assertion that tells the compiler what the actual type is. In our example, vessels is a list of maps and craft is a map containing a number of attributes including a name which is a string. The type assertions become: craft0, err := vessels[0].(map[string]interface{}) name, err := craft["name"].(string) If the type assertion agrees with the actual type the err will be nil. You can omit the err return value, but if the type assertion fails then an exception will be thrown. Multiple type assertions can be used in the same expression: name := vessels[0].(map[string]interface[])["name"].(string) Finally, if you don’t know the actual type of an interface value, then you can use reflect to find it out: print(reflect.TypeOf(vessels[0])) How to Access Elasticsearch from Go First of all, we will write a simple Go program that connects to Elasticsearch and prints out server information. Create a file called simple.go containing: package main import ( "github.com/elastic/go-elasticsearch/v8" "log" ) func main() { es, err := elasticsearch.NewDefaultClient() if err != nil { log.Fatalf("Error creating the client: %s", err) } log.Println(elasticsearch.Version) res, err := es.Info() if err != nil { log.Fatalf("Error getting response: %s", err) } defer res.Body.Close() log.Println(res) } The program should be self-explanatory. Now run the program. go run simple.go You should see the server information displayed in JSON format. How to Build a Console Menu in Go We are going to build a user interface in the form of a simple console-based menu-driven application. Create a file called Elastic.go containing the following Go code: package main import ( "bufio" "fmt" "os" ) func Exit() { fmt.Println("Goodbye!") os.Exit(0) } func ReadText(reader *bufio.Scanner, prompt string) string { fmt.Print(prompt + ": ") reader.Scan() return reader.Text() } func main() { reader := bufio.NewScanner(os.Stdin) for { fmt.Println("0) Exit") option := ReadText(reader, "Enter option") if option == "0" { Exit() } else { fmt.Println("Invalid option") } } } Let’s see what this code does. The function Exit() prints out a message and terminates the program. The function ReadText is a helper function that encapsulates the three lines of Go code required to print out a prompt and read a line of text from the keyboard. The main function first creates a scanner object which reads from standard input. It then enters an infinite loop as we don’t know how many times the loop needs to execute. The menu options are printed out and then an option string is read from the keyboard. Finally, either the Exit() function is called or an error message is displayed. Now, run the program and try entering a few options. go run Elastic.go You should only see one option: “0) Exit” Exit from the program by pressing “0” and hitting the “Enter” key. How to Read Data from STAPI and Store it in Elasticsearch from Go We are going to add a menu item to load the data from STAPI and store it in Elastic search. First of all, we need to make some changes to Elastic.go. We need an import statement and create an instance of the Elasticsearch client. import ( "bufio" "fmt" "os" "github.com/elastic/go-elasticsearch/v8" ) var es, _ = elasticsearch.NewDefaultClient() Next, add another menu item to load the data: func main() { reader := bufio.NewScanner(os.Stdin) for { fmt.Println("0) Exit") fmt.Println("1) Load spacecraft") fmt.Println("2) Get spacecraft") option := ReadText(reader, "Enter option") if option == "0" { Exit() } else if option == "1" { LoadData() } else { fmt.Println("Invalid option") } } } The code isn’t ready to run yet. We still need to write the LoadData() function. For that function, we are going to read all of the spacecraft data from the STAPI site. The site only allows up to 100 entries to be read at once, so the data is spread over 13 pages. This means that we need to read each page in turn. Create a file called LoadData.go containing the following Go code: package main import ( "context" "encoding/json" "io/ioutil" "net/ "strconv" "strings" "github.com/elastic/go-elasticsearch/esapi" ) func LoadData() { var spacecrafts []map[string]interface{} pageNumber := 0 for { response, _ := + strconv.Itoa(pageNumber)) body, _ := ioutil.ReadAll(response.Body) defer response.Body.Close() var result map[string]interface{} json.Unmarshal(body, &result) page := result["page"].(map[string]interface{}) totalPages := int(page["totalPages"].(float64)) crafts := result["spacecrafts"].([]interface{}) for _, craftInterface := range crafts { craft := craftInterface.(map[string]interface{}) spacecrafts = append(spacecrafts, craft) } pageNumber++ if pageNumber >= totalPages { break } } for _, data := range spacecrafts { uid, _ := data["uid"].(string) jsonString, _ := json.Marshal(data) request := esapi.IndexRequest{Index: "stsc", DocumentID: uid, Body: strings.NewReader(string(jsonString))} request.Do(context.Background(), es) } print(len(spacecrafts), " spacecraft read\n") } So, what does this code do? First of all, it creates a variable named spacecrafts containing an empty list of maps. Map entries have string keys and the values can be of any type. It also declares a page number that starts from zero. Next, we have an infinite loop to fetch the pages of data. We will terminate the loop when the last page has been read. Then, the Go http API to fetches a page of data from STAPI, specifying the page number to fetch. The response body is then read, and the body is closed to free up resources. The response body is a JSON object which is unmarshaled into a Go map called result. The result map has two entries, a map called page, and a list of spacecraft information called spacecrafts. The page map contains information about the current page. We are only interested in the total number of pages so we extract that information into a variable named totalPages. Next, the code iterates over the spacecraft list and uses type assertion to type each entry as a map. The entry is then appended to the list of spacecraft maps. It then increments the page number and if it is the last page terminates the infinite loop using break. We now have a list containing all of the spacecraft, each entry being a map containing data about the spacecraft. Now it is time to store the data in Elasticsearch. Data is inserted in Elasticsearch by creating a map of type esapi.IndexRequest(). Data items in Elasticsearch are called documents and Elasticsearch stores documents in a collection called an index. Each document needs to be given a unique identifier within the index so we use the spacecraft uid as the unique index. For the body of the document we marshal the data for the spacecraft into JSON and use that. The actual insert operation is performed by calling the Do() function, passing it a Go context and the Elastic search client. Now, run the program and select the menu item to load the data. As the code is now in two files, both need to be specified to run the program. go run Elastic.go LoadData.go You can now verify that there is some data in the stsc index by pointing a web browser at ( Some, but not all of the data should be displayed. How to Get a Document out of Elasticsearch from Go Loading documents into Elasticsearch was quite complex due to the data conversions that were required. Getting and searching for documents is much simpler. The changes that we’ll be making require some new imports, so let’s start by updating our import statement: import ( "bufio" "bytes" "context" "encoding/json" "fmt" "os" "github.com/elastic/go-elasticsearch/esapi" "github.com/elastic/go-elasticsearch/v8" ) Elasticsearch returns documents in the form of a JSON object containing metadata and the document content. This is not very readable. We will add a function called Print() to Elastic.go which prints out some of the spacecraft information in a more readable form. func Print(spacecraft map[string]interface{}) { name := spacecraft["name"] status := "" if spacecraft["status"] != nil { status = "- " + spacecraft["status"].(string) } registry := "" if spacecraft["registry"] != nil { registry = "- " + spacecraft["registry"].(string) } class := "" if spacecraft["spacecraftClass"] != nil { class = "- " + spacecraft["spacecraftClass"].(map[string]interface{})["name"].(string) } fmt.Println(name, registry, class, status) } The function takes account of the fact that some of the fields can be nil and that type assertions are required. Documents can be requested by specifying the index and the document identifier. Let’s add another menu item to Elastic.go to get a spacecraft. The menu calls a function called Get() passing it to the reader. Next, add the function: func Get(reader *bufio.Scanner) { id := ReadText(reader, "Enter spacecraft ID") request := esapi.GetRequest{Index: "stsc", DocumentID: id} response, _ := request.Do(context.Background(), es) var results map[string]interface{} json.NewDecoder(response.Body).Decode(&results) Print(results["_source"].(map[string]interface{})) } The document is returned in a JSON object which is decoded into a map. The actual document is in the map entry _source. How to Search for Documents in Go Elasticsearch supports a number of different types of searches. Each search has a query type and a list of key, value pairs of fields to match. The result is a list of hits, each given a value indicating how good the match was. A match search looks for work matches. The search values should always be in lowercase. A name match for uss would match all spacecraft with the word uss in the name in any case including USS. A prefix search matches any word which starts with the specified string. Now, we will add searches to Elastic.go. First of all, let’s update the main() function to add searches to the menu. func main() { reader := bufio.NewScanner(os.Stdin) for { fmt.Println("0) Exit") fmt.Println("1) Load spacecraft") fmt.Println("2) Get spacecraft") fmt.Println("3) Search spacecraft by key and value") fmt.Println("4) Search spacecraft by key and prefix") option := ReadText(reader, "Enter option") if option == "0" { Exit() } else if option == "1" { LoadData() } else if option == "2" { Get(reader) } else if option == "3" { Search(reader, "match") } else if option == "4" { Search(reader, "prefix") } else { fmt.Println("Invalid option") } } } Note that the new Search() function takes the search type as a parameter. Next, add the search function. func Search(reader *bufio.Scanner, querytype string) { key := ReadText(reader, "Enter key") value := ReadText(reader, "Enter value") var buffer bytes.Buffer query := map[string]interface{}{ "query": map[string]interface{}{ querytype: map[string]interface{}{ key: value, }, }, } json.NewEncoder(&buffer).Encode(query) response, _ := es.Search(es.Search.WithIndex("stsc"), es.Search.WithBody(&buffer)) var result map[string]interface{} json.NewDecoder(response.Body).Decode(&result) for _, hit := range result["hits"].(map[string]interface{})["hits"].([]interface{}) { craft := hit.(map[string]interface{})["_source"].(map[string]interface{}) Print(craft) } } After obtaining the key and value from the user, the function constructs a data structure from the query type, the key, and the value. This then gets encoded as a JSON object. The es.Search() function is called with the index name and the query as a body. This returns a list of hits. These are iterated over and the source object is printed. Run the program and try some match and prefix searches. For example: For option 3 (“Search spacecraft by key and value”) try: Enter key: name Enter value: enterprise For option 4 (“Search by spacecraft key and prefix”) try: Enter key: registry Enter value: ncc or: Enter key: name Enter value: iks Conclusion Elasticsearch can store many types of data in documents. Each document resides in a collection called an index. Each document also has an identifier that is unique in the index. Elasticsearch has a comprehensive REST API. Go has a library that is an API on top of the Elasticsearch REST API. It makes inserting, getting, and searching for documents very easy for a Go developer. The only real complexity is handling maps and lists with different value data types. Once the Go interfaces and type assertions are understood, the complexity is resolved. If you enjoyed reading this post, you might also like these posts from our blog: - NoSQL Options for Java Developers - Build a Single-Page App with Go and Vue.
https://developer.okta.com/blog/2021/04/23/elasticsearch-go-developers-guide
CC-MAIN-2022-21
refinedweb
2,705
59.4
State-Driven UI frameworks (React, Angular and Vue) are popular for a reason. They offload a big headache — tying state to the view — to the framework. A commonly discussed “improvement” to the out of the box usage on these is using a *store. *The originator being Redux, of React. These are all children of the founding idea called *Flux. *Flux is a pattern, not an implementation. I had a chance to look into Angular’s take on this: called ngrx/store. Here’s my thoughts about how it works and whether your project needs it. Hint: probably not. The essence of modern UI frameworks like Angular is to create the interface as a manifestation of the application state. Let’s begin by looking at what this means, and then see how using a store fits in. The use of ngrx/store is simply a more rigorous extension of the essence. Every application has state. In Angular, we bind that state to the UI so that the interaction of the interface and the state is managed automatically. This eliminates the work and complexity of the developer having to manage this interaction. The actual management of the state itself becomes both more important and more capable. The most common approach to a more sophisticated state management architecture is known as the Flux pattern. A well-known implementation of this pattern is Redux, originally for the React library (Redux is not a purist implementation of Flux, but delivers the spirit of the pattern). ngrx/store is the Angular implementation of the pattern. In Flux and ngrx/store the essential idea is to extract application state to a central (single source of state “truth”), then interact with it in a constrained way via commands (in ngrx/store, called “Actions”). Components that depend on the state respond to the state changes, but are insulated from understanding how the Actions operate on the state. More important than the question of how to use ngrx/store is when to use it. The simplest solution that works is the one you want. Simplify! Only add complexity when its demanded *by the situation. ngrx/store adds complexity and overhead; therefore it must be justify itself by a clearly defined need. You should be able to answer the question in one sentence: “We are using ngrx/store in this application because *blank”. With all the preceding in mind, let’s take a closer look at how ngrx/store works and how it answers the needs defined in the list of good reasons to use it. You should be able to answer the question in one sentence: “We are using ngrx/store in this application because _”. Before we look at the Flux pattern and how store/ngrx brings it to life, it’s important to note that it is not always a necessary component in building an Angular application. Using ngrx/store brings more complexity, and that complexity should be merited by the requirements of the app being constructed. In a simple component, the store and view relate to eachother as seen in Figure 1. Figure 1: Simple Component with State When this suffices, well enough. However, as you know, an angular UI is composed of a hierarchical tree of components. These components can interact via @Input and eventing. In simple cases, these are enough to manage shared state. As applications grow, however, the inter-component interactions can become seriously cumbersome. It can become very difficult to understand and think about how events are impacting the state and how the components react to these state changes. Add to this the possibility of external actors on the state (like long-polling or server-push) and you have a strong case for using a central store like ngrx/store. This is seen in Figure 2. Figure 2: Component Tree with State Interactions The solution to this problem is to externalize the state to central place, like you see in Figure 3. Figure 3: Centralized State Just like we create a state in a component and then allow the various view elements to reflect that state, the idea here is to move the shared state out of the component itself, and into a central place where all those concerned can interact with it. This is an easy idea to understand, and you may be wondering what ngrx/store does, since the above central-state idea could be implemented by injecting a global service into the components. This is a great question to ask. In fact, you may well be able to handle your applications needs by using shared services. Moreover, if you can identify subsets of components that use the same state, you can isolate your shared service state holders to smaller segments of the application. Nevertheless, the idea of keeping all application state in a central place has a compelling simplicity to it. This is a central tenant of flux-thinking: one source of application truth. Therefore, let’s assume that you have determined your application merits a central state management solution. What does ngrx/store bring to the table beyond simply making a globally observable state? One prominent feature of ngrx/store is that it allows you to modify the state only via actions. An action is a single type of state change that components can invoke. An action is executed, and the component does not know about how the state is affected. This is a key element in isolating the components from the store. You can think of an Action as a Command (in the sense of the classic Gang of Four Pattern). import { LoadSongsAction } from './actions/songs'; import { Store } from '@ngrx/store'; import * as fromRoot from './reducers'; // The convention is to define fromRoot as our namespace for reducers import { Observable } from 'rxjs/Observable'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'], changeDetection: ChangeDetectionStrategy.OnPush }) export class SongComponent implements OnInit { public song$: Observable<song>; constructor(public store: Store<fromRoot.State>) { this.song$ = store.select(fromRoot.getSong); } ngOnInit() { this.store.dispatch(new LoadSongsAction()); } } An action is a simple command. Here’s a look at the simple LoadSongAction. import { Number } from './../models/song'; import { Action } from '@ngrx/store'; export const LOADSONGS = '[Song] LoadAll'; export const SONGDELETED = '[Song] Delete'; export class LoadSongAction implements Action { type = LOAD_SONGS; } export class DeleteSongAction implements Action { type = SONG_DELETED; } The action internally defines a constant, which convention uses a bracketed type definition followed by the activity given: ‘[Song] LoadAll’. This action can then be used as a discrete action that can be sent to the store. There are two places where actions come into the store: reducers and effects. Reducers are pure functions, meaning they don’t produce side-effects (that is, they perform all their work internally to the function itself — another flux tenant). They are responsible for taking an action that is dispatched from the app, and applying it to the state. For example, in Listing 3, we define a reducer which applies the delete action. export function songReducer(state = initialState, action: Action) { switch(action.type) { case 'DELETE_SONG': const songId = action.payload; return state.filter(id => id !== songId); default: return state; } } The numberReducer has a typical reducer signature: it gets the initialState and the action in its arguments. In this case, it takes a payload from the action, which will contain the id of the item to be deleted. The reducer then uses the id to filter the removed element from the state. Effects, as the name implies, allow for side-effects. A common use for effects is to watch for actions which require loading data. Something like Listing 4 is typical. @Injectable() export class SongEffects { @Effect() update$: Observable<Action> = this.action$ .ofType(songs.LOAD_SONGS) .switchMap(() => this.songService .getRates() .map(data => new SongsAreLoadedAction(data)) ); constructor( private currencyService: SongService, private action$: Actions ) {} } This effect watches for the LOAD_SONGS Action, and uses a song service (injected into this class) to do that work. Although this can be a useful pattern, using effects to interact with backend services can become unwieldy as applications become more complex. This is because it can become difficult to manage the subscription and unsubscription from multiple components in the effect — if the user navigates away from the view, a new event type (e.g., CANCELLOADSONGS) can become necessary. Moreover, if interleaving of requests is important to dependant components, it can become difficult to track when the data is loaded. In short, effects can become a source of sprawling logic dependency. Another approach that may scale better is to create a service that provides an observable to respond to requests for data from components, and allow that service to cache the data in the store. The decision of which approach to use it dependent on your application’s needs, and using an Effect to load data can work well for simple needs, and is an easy to understand approach. The question of when to use a store like ngrx/store must be driven by the on-the-ground facts of the application. A strong indication of needing a centralized state is extensive shared-state and/or component state interactions. This central state can be handled as an observable service at a variety of levels in the component tree, although a central state-of-record (ie, ‘one source of truth’) can simplify thinking about the application. If a centralized state solution using subscriptions becomes untenable due to complexity, or because multiple concurrent actors are affecting the state (typically, this means server push like websockets are in play), then a formalized store like ngrx/store is the solution. This allows for rigorously defining what actions are applicable to the state, isolating them into Command objects (Actions), and concentrating the actual manipulation of the state into centralized and simple functions (Reducers). My biggest beef with the store idea is that it is really anti-agile. You are adding all kinds of effort to doing anything in the app if you require everything to use the store. Especially, you are making it more difficult to change things going forward. It could kill a project that might be agile enough to survive using just normal state-driven behavior.
https://tkssharma.com/angular-with-ng-rx-store-deep-ive/
CC-MAIN-2020-45
refinedweb
1,691
54.42
log.h File ReferenceLogging system module. More... #include <cfg/debug.h> Go to the source code of this file. Detailed DescriptionLogging system module. This module implement a simple interface to use the multi level logging system. The log message have the priority order, like this: - error message (highest) - warning message - info message (lowest) With this priority system we can log only the message that have egual or major priority than log level that you has been configurate. Further you can have a differ log level for each module that you want. To do this you just need to define LOG_LEVEL in cfg of select module. When you set a log level, the system logs only the message that have priority egual or major that you have define, but the other logs function are not include at compile time, so all used logs function are linked, but the other no. To use logging system you should include this module in your drive and use a LOG_ERROR, LOG_WARNING and LOG_INFO macros to set the level log of the message. Then you should define a LOG_LEVEL and LOG_VERBOSE costant in your cfg/cfg_<your_cfg_module_name>.h using the follow policy: - in your file cfg/cfg_<cfg_module_name>.h, you define the logging level and verbosity mode for your specific module: #define <cfg_module_name>_LOG_LEVEL LOG_LVL_INFO #define <cfg_module_name>_LOG_FORMAT LOG_FMT_VERBOSE - then, in the module that you use a logging macros you should define a LOG_LEVEL and LOG_FORMAT using the previous value that you have define in cfg_<cfg_module_name>.h header. After this you should include the cfg/log.h module: // Define log settings for cfg/log.h. #define LOG_LEVEL <cfg_module_name>_LOG_LEVEL #define LOG_FORMAT <cfg_module_name>_LOG_FORMAT #include <cfg/log.h> if you include a log.h module without define the LOG_LEVEL and LOG_VERBOSE macros, the module use the default setting (see below). WARNING: when use the log.h module, and you want to set a your log level make sure to include this module after a cfg_<cfg_module_name>.h, because the LOG_LEVEL and LOG_VERBOSE macros must be defined before to include log module, otherwise the log module use a default settings. - Version: Definition in file log.h.
http://doc.bertos.org/2.2/log_8h.html
crawl-003
refinedweb
359
55.34
Hello, small question. Referring to SPI pins on Uno or Mega is easy (Mega mosi=51, miso=50). But how do you name the Mosi and Miso for the Due? Thanks, Hugo Hello, small question. Referring to SPI pins on Uno or Mega is easy (Mega mosi=51, miso=50). But how do you name the Mosi and Miso for the Due? Thanks, Hugo Use the SPI connector in the middle of the board. See the diagram in the first post at the top of this forum. MOSI/MISO isn't copied on any other pin on the Due. I have an ADXL345 accelerometer evaluation board. I am able to interface with it very well with my Arduino Uno. I can write registers to it as well as read data from the device registers without any issues. I started using Arduino Due yesterday, and I seem to not understand how to put values into the register. The page on the website is very confusing to me “” Could someone please give me an example of how to input a value on to the register? You don't name them, the SPI hardware is completely different, just look at the Due SPI examples. [ actually there don't appear to be any! - look at the docs ] I am setting up my DUE with the following code: void setup() { SPI.begin(4); SPI.setClockDivider(4,21); SPI.setDataMode(4,SPI_MODE3); SPI.setBitOrder(4,MSBFIRST); Serial.begin(9600); // Want to put in inside $0x2D the value 0x08 SPI.transfer(4, 0x2D, SPI_CONTINUE); SPI.transfer(4, 0x08, SPI_CONTINUE); // Want to put in inside $0x31 the value 0x03 SPI.transfer(4, 0x31, SPI_CONTINUE); SPI.transfer(4, 0x03, SPI_CONTINUE); } // AM I DOING THIS RIGHT? void loop() { // I want to read from 16-bit data from $0x36 & $0x37 byte response = SPI.transfer(4, 0xB6, SPI_CONTINUE); byte response1 = SPI.transfer(4, 0x00); } // AM I DOING THIS RIGHT? You probably don't want the continue on the second transfer of each register write. This controls the chip select action (stay low or revert to high). Normally each register access would be a single SPI transaction (but check the datasheet, devices differ - this should also tell you which mode to use (normally mode0 is right). Thanks MarkT. I modified: // Want to put in inside $0x2D the value 0x08 SPI.transfer(4, 0x2D, SPI_CONTINUE); SPI.transfer(4, 0x08, SPI_CONTINUE); to // Want to put in inside $0x2D the value 0x08 SPI.transfer(4, 0x2D, SPI_CONTINUE); SPI.transfer(4, 0x08); and it works. Great! Did you check the mode was right (using the wrong mode can seem to work but be either unreliable or shift your data one bit position). I am trying to get SPI communication working on Arduino Due. However I seem not to get any activity on the SPI pins. I tried to copy the example above, but so far I have no success. I made a small test program to check the SPI signals with the scope: #include <SPI.h> int testPin = 22; void setup() { pinMode(testPin, OUTPUT); // initialize SPI: SPI.begin(4); SPI.setClockDivider(4,8); SPI.setDataMode(4,SPI_MODE0); SPI.setBitOrder(4,MSBFIRST); // Am I doing this right? Anything missing? } void loop() { // I whant to write 0x55 on the SPI port SPI.transfer(4, 0x55); // am I doing this right ? digitalWrite(testPin, 1); delay(100); SPI.transfer(4, 0xAA); digitalWrite(testPin, 0); delay(100); } Question: Am I supposed to see pin 4 (nCS) going low once I issue the “SPI.transfer(4, 0x55);” command, or do I need to configure this pin in some additional way? Also I can not see andy activity on the SCLK pin and the MOSI pins of the Due. Any help or suggestions what I am missing would be very welcome.
https://forum.arduino.cc/t/spi-on-arduino-due/296422
CC-MAIN-2021-31
refinedweb
627
67.86
This is a virtual base class used to do complex text layout. More... #include <LayoutEngine.h> This is a virtual base class used to do complex text layout. The text must all be in a single font, script, and language. An instance of a LayoutEngine can be created by calling the layoutEngineFactory method. Fonts are identified by instances of the LEFontInstance class. Script and language codes are identified by integer codes, which are defined in ScriptAndLanuageTags.h. Note that this class is not public API. It is declared public so that it can be exported from the library that it is a part of. The input to the layout process is an array of characters in logical order, and a starting X, Y position for the text. The output is an array of glyph indices, an array of character indices for the glyphs, and an array of glyph positions. These arrays are protected members of LayoutEngine which can be retreived by a public method. The reset method can be called to free these arrays so that the LayoutEngine can be reused. The layout process is done in three steps. There is a protected virtual method for each step. These methods have a default implementation which only does character to glyph mapping and default positioning using the glyph's advance widths. Subclasses can override these methods for more advanced layout. There is a public method which invokes the steps in the correct order. The steps are: 1) Glyph processing - character to glyph mapping and any other glyph processing such as ligature substitution and contextual forms. 2) Glyph positioning - position the glyphs based on their advance widths. 3) Glyph position adjustments - adjustment of glyph positions for kerning, accent placement, etc. NOTE: in all methods below, output parameters are references to pointers so the method can allocate and free the storage as needed. All storage allocated in this way is owned by the object which created it, and will be freed when it is no longer needed, or when the object's destructor is invoked. Definition at line 67 of file LayoutEngine.h. This constructs an instance for a given font, script and language. Subclass constructors must call this constructor. This method does positioning adjustments like accent positioning and kerning. The default implementation does nothing. Subclasses needing position adjustments must override this method. Note that this method has both characters and glyphs as input so that it can use the character codes to determine glyph types if that information isn't directly available. (e.g. Some Arabic OpenType fonts don't have a GDEF table) This is a convenience method that forces the advance width of mark glyphs to be zero, which is required for proper selection and highlighting. This method uses the input characters to identify marks. This is required in cases where the font does not contain enough information to identify them based on the glyph IDs. This method does any required pre-processing to the input characters. It may generate output characters that differ from the input charcters due to insertions, deletions, or reorderings. In such cases, it will also generate an output character index array reflecting these changes. Subclasses must override this method. Input parameters: This method does the glyph processing. It converts an array of characters into an array of glyph indices and character indices. The characters to be processed are passed in a surrounding context. The context is specified as a starting address and a maximum character count. An offset and a count are used to specify the characters to be processed. The default implementation of this method only does character to glyph mapping. Subclasses needing more elaborate glyph processing must override this method. Input parameters: Output parameters: This method gets a table from the font associated with the text. The default implementation gets the table from the font instance. Subclasses which need to get the tables some other way must override this method. This method returns the number of glyphs in the glyph array. Note that the number of glyphs will be greater than or equal to the number of characters used to create the LayoutEngine. This method will invoke the layout steps in their correct order by calling the computeGlyphs, positionGlyphs and adjustGlyphPosition methods. It will compute the glyph, character index and position arrays. Note: The glyph, character index and position array can be accessed using the getter methods below. Note: If you call this method more than once, you must call the reset() method first to free the glyph, character index and position arrays allocated by the previous call. This method returns a LayoutEngine capable of laying out text in the given font, script and langauge. Note that the LayoutEngine returned may be a subclass of LayoutEngine. This method does character to glyph mapping. The default implementation uses the font instance to do the mapping. It will allocate the glyph and character index arrays if they're not already allocated. If it allocates the character index array, it will fill it it. This method supports right to left text with the ability to store the glyphs in reverse order, and by supporting character mirroring, which will replace a character which has a left and right form, such as parens, with the opposite form before mapping it to a glyph index. Input parameters: This method does basic glyph positioning. The default implementation positions the glyphs based on their advance widths. This is sufficient for most uses. It is not expected that many subclasses will override this method. Input parameters: This method frees the glyph, character index and position arrays so that the LayoutEngine can be reused to layout a different characer array. (This method is also called by the destructor) TRUE if mapCharsToGlyphs should replace ZWJ / ZWNJ with a glyph with no contours. Definition at line 116 of file LayoutEngine.h. The font instance for the text font. Definition at line 83 of file LayoutEngine.h. The object which holds the glyph storage. Definition at line 74 of file LayoutEngine.h. The langauge code for the text. Definition at line 101 of file LayoutEngine.h. The script code for the text. Definition at line 92 of file LayoutEngine.h. The typographic control flags. Definition at line 108 of file LayoutEngine.h.
http://icu-project.org/apiref/icu4c434/classLayoutEngine.html
CC-MAIN-2018-05
refinedweb
1,046
66.13
: September 2007 Organizing Namespaces with DDD About a week ago I posted a message to the Yahoo DDD group to see how other people were organizing their namespaces in their Domain Model. For the most part everyone had the same basic idea. You can read the … Posted in ddd 12 Comments Branching a trunk permanently I am wondering how someone else would achieve what I am trying to do here. Basically I have a development trunk for Project A. It is updated with new code on a daily basis and has frequent branches and merges … Posted in Uncategorized 2 Comments Another reason to love MonoRail W3C Validation is easy!! I won’t mention the site that I am fixing validation for because it is in a horrid state and I am quite embarassed =) The fact of the matter is, I fixed quite a few errors … Posted in monorail aspnet Leave a comment Welcome Colin Ramsay! Everyone at LosTechies would like to welcome Colin Ramsay to Los Techies! Colin has his own website where he has a number of screencasts about various topics. If you want to get learning The Castle Project quickly then check out … Posted in general 7 Comments NAnt Build Prompter This is a very basic script that a co-worker named Rabid made for me. I don’t know this syntax but he does for all the group policy stuff we have at work. Basically the way I had it setup to … Posted in Uncategorized Leave a comment 500gb MyBook external HDD and Virtual Machine I am wondering if anyone else has had experience with attempting to recognize a 500gb Western Digital MyBook HDD in either VirtualBox or VMWare. I have been trying to get my ubuntu guest OS to see the drive for 2 … Posted in general linux ubuntu 5 Comments … Posted in microsoft 10 Comments Blogging at LosTechies Jason Meridth contacted me a couple of days ago inviting me to cross-post my blog posts on LosTechies. I was very flattered to learn that anyone even read my blog posts let alone invite me to blog on their community … Posted in Uncategorized 7 Comments
http://lostechies.com/seanchambers/2007/09/
CC-MAIN-2014-15
refinedweb
359
52.23
ReJSON: Redis as a JSON Store ReJSON: Redis as a JSON Store We've created ReJSON, a Redis module that provides native JSON capabilities. ReJSON should make any Redis user giddy with JSON joy. Join the DZone community and get the full member experience.Join For Free Download "Why Your MySQL Needs Redis" and discover how to extend your current MySQL or relational database to a Redis database. In this post, we'll check out case, I was shocked when a couple of years ago I learned that the two don’t get along. Redis isn’t a one-trick pony. It is, in fact, quite the opposite. Unlike general purpose one-size-fits-all databases, Redis (AKA the “Swiss Army Knife of Databases,” “Super Glue of Microservices,” and “Execution context of Functions-as-a-Service”) provides specialized tools for specific tasks. Developers use these tools, which are exposed as abstract data structures and their accompanying operations, to model optimal solutions for problems. And that is exactly the reason why using Redis for managing JSON data is unnatural. Fact: Despite its multitude of core data structures, Redis has none that fit the requirements of a JSON value. Sure, you can work around that by using other data types; Strings are great for storing raw serialized JSON, and you can represent flat JSON objects with Hashes. But these workaround patterns impose limitations that make them useful only in a handful of use cases, and even then the experience leaves an un-Redis-ish aftertaste. Their awkwardness clashes sharply with the simplicity and elegance of using Redis normally. But all that changed during the last year after Salvatore Sanfilippo’s @antirez visit to the Tel Aviv office, and with Redis modules becoming a reality. Suddenly, the sky wasn’t the limit anymore. Now that modules let anyone do anything, it turned out that I could be that particular anyone. Picking up on C development after more than a two decades hiatus proved to be less of a nightmare than I had anticipated, and with Dvir Volk’s (@dvirsky) loving guidance, we birthed ReJSON. While you may not be thrilled about its name (I know that I’m not; suggestions are welcome), ReJSON itself should make any Redis user giddy with JSON joy. The module provides a new data type that is tailored for fast and efficient manipulation of JSON documents. Like any Redis data type, ReJSON’s values are stored in keys that can be accessed with a specialized subset of commands. These commands, or the API that the module exposes," 127.0.0.1:6379> ^C ~$ Like any well-behaved module,. What happens under the hood is that whenever you call JSON.SET, the module takes the value through a streaming lexer that parses the input JSON and builds tree data structure from it: ReJSON stores the data in binary format in the tree’s nodes and supports a subset of JSONPath for easy referencing of subelements. It boasts an arsenal of atomic commands that are tailored for every JSON value type, including JSON.STRAPPEND for appending strings, JSON.NUMMULTBY for multiplying numbers, and JSON.ARRTRIM for trimming arrays… and making pirates happy. Because ReJSON is implemented as a Redis module, you can use it with any Redis client that: supports modules (ATM none) or allows sending raw commands (ATM most). For example, you can use a ReJSON-enabled Redis server from your Python code with redis-py like so: import redis import json data = { 'foo': 'bar', 'ans': 42 } r = redis.StrictRedis() r.execute_command('JSON.SET', 'object', '.', json.dumps(data)) reply = json.loads(r.execute_command('JSON.GET', 'object')) But that’s just half of it. ReJSON isn’t only a pretty API, it also a powerhouse in terms of performance. Initial performance benchmarks already demonstrate that, for example: The above graphs compare the rate (operations/sec) and average latency of read and write operations performed on a 3.4KB JSON payload that has three nested levels. ReJSON is pitted against two variants that store the data in Strings. Both variants are implemented as Redis server-side Lua scripts with the json.lua variant storing the raw serialized JSON, and msgpack.lua using MessagePack encoding. If you have 21 minutes to spare, here’s the ReJSON presentation from Redis Day TLV. You can start playing with ReJSON today! Get it from the GitHub repository or read the docs online. There are still many features that we want to add to it, but it’s pretty neat as it is. If you have feature requests or have spotted an issue, feel free to use the repo’s issue tracker. You can always or tweet at me — I’m highly-available. }}
https://dzone.com/articles/redis-as-a-json-store
CC-MAIN-2018-47
refinedweb
787
55.34
Created on 2016-08-26 15:51 by tehybel, last changed 2017-10-08 09:54 by serhiy.storchaka. This issue is now closed. Here I will describe 6 issues with various core objects (bytearray, list) and the array module. Common to them all is that they arise due to a misuse of the function PySlice_GetIndicesEx. This type of issue results in out-of-bounds array indexing which leads to memory disclosure, use-after-frees or memory corruption, depending on the circumstances. For each issue I've attached a proof-of-concept script which either prints leaked heap memory or segfaults on my machine (64-bit linux, --with-pydebug, python 3.5.2). Issue 1: out-of-bounds indexing when taking a bytearray's subscript While taking the subscript of a bytearray, the function bytearray_subscript in /Objects/bytearrayobject.c calls PySlice_GetIndicesEx to validate the given indices. Some of these indices might be objects with an __index__ method, and thus PySlice_GetIndicesEx could call back into python code. If the evaluation of the indices modifies the bytearray, the indices might no longer be safe, despite PySlice_GetIndicesEx saying so. Here is a PoC which lets us read out 64 bytes of uninitialized memory from the heap: --- class X: def __index__(self): b[:] = [] return 1 b = bytearray(b"A"*0x1000) print(b[0:64:X()]) --- Here's the result on my system: $ ./python poc17.py bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\xb0\xce\x86\x9ff\x7') Issue 2: memory corruption in bytearray_ass_subscript This issue is similar to the one above. The problem exists when assigning to a bytearray via subscripting. The relevant function is bytearray_ass_subscript. The relevant line is again the one calling PySlice_GetIndicesEx. Here's a PoC which leads to memory corruption of the heap: --- class X: def __index__(self): del b[0:0x10000] return 1 b = bytearray(b"A"*0x10000) b[0:0x8000:X()] = bytearray(b"B"*0x8000) --- Here's the result of running it: (gdb) r poc20.py Program received signal SIGSEGV, Segmentation fault. PyCFunction_NewEx (ml=0x8b4140 <textiowrapper_methods+128>, self=self@entry=0x7ffff7f0e898, module=module@entry=0x0) at Objects/methodobject.c:31 31 free_list = (PyCFunctionObject *)(op->m_self); (gdb) p op $13 = (PyCFunctionObject *) 0x4242424242424242 Issue 3: use-after-free when taking the subscript of a list This issue is similar to the one above, but it occurs when taking the subscript of a list rather than a bytearray. The relevant code is in list_subscript which exists in /Objects/listobject.c. Here's a PoC: --- class X: def __index__(self): b[:] = [1, 2, 3] return 2 b = [123]*0x1000 print(b[0:64:X()]) --- It results in a segfault here because of a use-after-free: (gdb) run ./poc18.py Program received signal SIGSEGV, Segmentation fault. 0x0000000000483553 in list_subscript (self=0x7ffff6d53988, item=<optimized out>) at Objects/listobject.c:2441 2441 Py_INCREF(it); (gdb) p it $2 = (PyObject *) 0xfbfbfbfbfbfbfbfb Issue 4: use-after-free when assigning to a list via subscripting The same type of issue exists in list_ass_subscript where we assign to the list using a subscript. Here's a PoC which also results in a use-after-free: --- class X: def __index__(self): b[:] = [1, 2, 3] return 2 b = [123]*0x1000 b[0:64:X()] = [0]*32 --- (gdb) r poc19.py Program received signal SIGSEGV, Segmentation fault. 0x0000000000483393 in list_ass_subscript (self=<optimized out>, item=<optimized out>, value=<optimized out>) at Objects/listobject.c:2603 2603 Py_DECREF(garbage[i]); (gdb) p garbage[i] $4 = (PyObject *) 0xfbfbfbfbfbfbfbfb Issue 5: out-of-bounds indexing in array_subscr Same type of issue. The problem is in the function array_subscr in /Modules/arraymodule.c. Here's a PoC which leaks and prints uninitialized memory from the heap: --- import array class X: def __index__(self): del a[:] a.append(2) return 1 a = array.array("b") for _ in range(0x10): a.append(1) print(a[0:0x10:X()]) --- And the result: $ ./python poc22.py array('b', [2, -53, -53, -53, -5, -5, -5, -5, -5, -5, -5, -5, 0, 0, 0, 0]) Issue 6: out-of-bounds indexing in array_ass_subscr Same type of issue, also in the array module. Here's a PoC which segfaults here: --- import array class X: def __index__(self): del a[:] return 1 a = array.array("b") a.frombytes(b"A"*0x100) del a[::X()] --- How should these be fixed? I would suggest that in each instance we could add a check after calling PySlice_GetIndicesEx. The check should validate that the "length" argument passed to PySlice_GetIndicesEx did not change during the call. But maybe there is a better way? (By the way: these issues might also exist in 2.7, I did not check.) I presume you are suggesting to raise if the length changes. This is similar to raising when a dict is mutated while iterating. Note that we do not do this with mutable sequences. (If the iteration is stopped with out-of-memory error, so be it.) An alternate approach would be to first fully evaluate start, stop, step , *and then length*, to ints, in that order, before using any of them. In particular, have everything stable before comparing and adjusting start and stop to length. This way, slices would continue to always work, barring other exceptions in __index__ or __length__. Even list suffers from this bug if slicing step is not 1. class X: def __index__(self): del a[:] return 1 a = [0] a[:X():2] There. This is a toy example that exposes the problem, but the problem itself is not a toy problem. The key point is that calculating slice indices cause executing Python code and releases GIL. In multithread program a sequence can be changed not in toy __index__ method, but in other thread, in legitimate code. This is very hardly reproducible bug. Variants B are not efficient. To determine the size of a sequence we should call its __len__() method. This is less efficient than using macros Py_SIZE() or PyUnicode_GET_LENGTH(). And it is not always possible to pass a sequence. In multidimensional array there is no such sequence (see for example _testbuffer.ndarray). FWIW. Py_SIZE is used all over listobject.c. Are you saying that this could be improved? I'm saying that PySlice_GetIndicesEx2 can't just use Py_SIZE. Actually making slicing always working is easier than I expected. Maybe it is even easier than raise an error. PySlice_GetIndicesEx() is split on two functions. First convert slice attributes to Py_ssize_t, then scale them to appropriate range depending on the length. Here is a sample patch. I like this. Very nice. What I understand is that callers that access PySlice_GetIndicesEx via the header file (included with Python.h) will see the function as a macro. When the macro is expanded, the length expression will be evaluated after any __index__ calls. This approach requires that the length expression calculate the length from the sequence, rather than being a length computer before the call. I checked and all of our users in /Objects pass some form of seq.get_size(). This approach also requires that the function be accessed via .h rather than directly as the function in the .c file. If we go this way, should he PySlice_GetIndicesEx doc say something? I reviewed the two new functions and am satisfied a) that they correctly separate converting None and non-ints to ints from adjusting start and stop as ints according to length and b) that the effect of the change in logic for the latter is to stop making unnecessary checks that must fail. Nice! The one thing I would suggest double checking with this change is whether or not we have test cases covering ranges with lengths that don't fit into ssize_t. It's been years since I looked at that code, so I don't remember exactly how it currently works, but it does work (except for __len__, due to the signature of the C level length slot): >>> bigrange = range(int(-10e30), int(10e30)) >>> len(bigrange) Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: Python int too large to convert to C ssize_t >>> bigrange[:] range(-9999999999999999635896294965248, 9999999999999999635896294965248) >>> bigrange[0:-1] range(-9999999999999999635896294965248, 9999999999999999635896294965247) >>> bigrange[::2] range(-9999999999999999635896294965248, 9999999999999999635896294965248, 2) >>> bigrange[0:-1:2] range(-9999999999999999635896294965248, 9999999999999999635896294965247, 2) Yet one possible solution is to make slice constructor converting arguments to exact ints. This allows to leave user code unchanged. But this is 3.6-only solution of course. I would like to know Mark's thoughts on this. As in, for arguments that have __index__() methods, do the conversion to a true Python integer eagerly when the slice is built rather than lazily when slice.indices() (or the C-level equivalent) is called? That actually seems like a potentially plausible future approach to me, but isn't a change I'd want to make hastily - those values are visible as the start, stop and step attributes on the slice, and currently describes those as "These attributes can have any type." Given that folks do a lot of arcane things with the subscript notation, I wouldn't want to break working code if we have less intrusive alternatives. Then there is a design question. I believe that after all we should expose these two new functions publicly. And the question is about function names and the order of arguments. Currently signatures are: int _PySlice_Unpack(PyObject *r, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step); int _PySlice_EvalIndices(Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t step, Py_ssize_t length, Py_ssize_t *slicelength); Are there suggestions for names? Perhaps the second functions should not have prefix PySlice_, since it doesn't work with slice object. I think those names (with the leading underscore removed) would be fine as a public API - the fact that PySlice_EvalIndices doesn't take a reference to the slice object seems similar to a static method, where the prefix is there for namespacing reasons, rather than because it actually operates on a slice instance. Renamed _PySlice_EvalIndices() to _PySlice_AdjustIndices() and changed its signature. Updated the documentation and python3.def. Fixed yet one bug: implementation-defined behavior with division by negative step. Note that since new functions are used in public macro, they become a part of the stable API. Shouldn't starting underscores be removed from names? An attempt of discussing names and signatures on Python-Dev:. We can't just add API functions in maintained releases, because it will break the stable ABI. We can use them only when explicitly define the version of API. Proposed patch for 3.6 and 3.7 adds public API functions PySlice_Unpack() and PySlice_AdjustIndices() and makes PySlice_GetIndicesEx() a macro if set Py_LIMITED_API to the version that supports new API. Otherwise PySlice_GetIndicesEx() becomes deprecated. This doesn't break extensions compiled with older Python versions. Extensions compiled with new Python versions without limited API or with high API version are not compatible with older Python versions as expected, but have fixed the original issue. Compiling extensions with new Python versions with set low Py_LIMITED_API value will produce a deprecation warning. Pay attention to names and signatures of new API. It would be hard to change it when it added. I think this is the safest way. In 2.7 we should replace PySlice_GetIndicesEx() with a macro for internal use only if we want to fix an issue for builtins and preserve a binary compatibility. New changeset d5590f357d74 by Serhiy Storchaka in branch '2.7': Issue #27867: Replaced function PySlice_GetIndicesEx() with a macro. New changeset 96f5327f7253 by Serhiy Storchaka in branch '3.5': Issue #27867: Function PySlice_GetIndicesEx() is replaced with a macro if New changeset b4457fe7fdb8 by Serhiy Storchaka in branch '3.6': Issue #27867: Function PySlice_GetIndicesEx() is replaced with a macro if New changeset 6093ce8eed6c by Serhiy Storchaka in branch 'default': Issue #27867: Function PySlice_GetIndicesEx() is deprecated and replaced with Not a big deal, but the change produces compiler warnings with GCC 6.1.1: /home/proj/python/cpython/Objects/bytesobject.c: In function ‘bytes_subscript’: /home/proj/python/cpython/Objects/bytesobject.c:1701:13: warning: ‘slicelength’ may be used uninitialized in this function [-Wmaybe-uninitialized] for (cur = start, i = 0; i < slicelength; ^~~ /home/proj/python/cpython/Objects/listobject.c: In function ‘list_ass_subscript’: /home/proj/python/cpython/Objects/listobject.c:2602:13: warning: ‘slicelength’ may be used uninitialized in this function [-Wmaybe-uninitialized] for (i = 0; i < slicelength; i++) { ^~~ /home/proj/python/cpython/Objects/unicodeobject.c: In function ‘unicode_subscript’: /home/proj/python/cpython/Objects/unicodeobject.c:14013:16: warning: ‘slicelength’ may be used uninitialized in this function [-Wmaybe-uninitialized] result = PyUnicode_New(slicelength, max_char); ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /media/disk/home/proj/python/cpython/Modules/_elementtree.c: In function ‘element_ass_subscr’: /media/disk/home/proj/python/cpython/Modules/_elementtree.c:1896:50: warning: ‘slicelen’ may be used uninitialized in this function [-Wmaybe-uninitialized] self->extra->children[i + newlen - slicelen] = self->extra->children[i]; ~~~~~~~~~~~^~~~~~~~~~ /media/disk/home/proj/python/cpython/Modules/_ctypes/_ctypes.c: In function ‘Array_subscript’: /media/disk/home/proj/python/cpython/Modules/_ctypes/_ctypes.c:4327:16: warning: ‘slicelen’ may be used uninitialized in this function [-Wmaybe-uninitialized] np = PyUnicode_FromWideChar(dest, slicelen); ~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ My build used to be free of warnings. This warning is enabled via -Wall. The reason is probably that the new macro skips the slicelength assignment if PySlice_Unpack() fails. Workarounds could be to assign or initialize slicelength to zero (at the call sites or inside the macro), or to compile with -Wno-maybe-uninitialized. Good point Martin. I missed this because warnings are not emitted in non-debug build and were emitted only once in incremental debug build. Your idea about initializing slicelength in a macro LGTM. New changeset d7b637af5a7e by Serhiy Storchaka in branch '3.5': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset 17d0cfc64a32 by Serhiy Storchaka in branch '2.7': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset b8fc4de84b9a by Serhiy Storchaka in branch '3.6': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset af8315720e67 by Serhiy Storchaka in branch 'default': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset 110ec861e5ea '3.5': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset 745dda46d2e3e27206bb33188c770e1f6c73766e by Serhiy Storchaka in branch '2.7': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset e9d77e9fce477b5589c7eb5e1b4179b1d8e1fecc 'master': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset 8bd58e9c725a15854a99d19daf935fb08df77a05 by Serhiy Storchaka in branch 'master': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset 65febbec9d09101f76a04efeef6b3dc7f9b06ee8 by Serhiy Storchaka in branch 'master': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset faa1891d4d1237d6df0af4622ff520ccd6768e04 by Serhiy Storchaka in branch '3.6': Issue #27867: Silenced may-be-used-uninitialized warnings after New changeset 8bd58e9c725a15854a99d19daf935fb08df77a05 by Serhiy Storchaka in branch '3.6': Issue #27867: Silenced may-be-used-uninitialized warnings after This issue is left open because it needs to add a porting guide in What's New. See also a problem with breaking ABI in issue29943. If don't make PySlice_GetIndicesEx a macro when Py_LIMITED_API is not defined, it should be expanded to PySlice_Unpack and PySlice_AdjustIndices. PR 1023 does this for master branch. The patch is generated by Coccinelle's semantic patch. New changeset e41390aca51e4e3eb455cf3b70f5d656a2814db9 by Serhiy Storchaka in branch '2.7': bpo-27867: Expand the PySlice_GetIndicesEx macro. (#1023) (#1046) New changeset b879fe82e7e5c3f7673c9a7fa4aad42bd05445d8 by Serhiy Storchaka in branch 'master': Expand the PySlice_GetIndicesEx macro. (#1023) New changeset c26b19d5c7aba51b50a4d7fb5f8291036cb9da24 by Serhiy Storchaka in branch '3.6': Expand the PySlice_GetIndicesEx macro. (#1023) (#1046) New changeset fa25f16a4499178d7d79c18d2d68be7f70594106 by Serhiy Storchaka in branch '3.5': Expand the PySlice_GetIndicesEx macro. (#1023) (#1045) PR 1973 adds a porting guide. This should be the last commit for this issue. Please make a review and suggest better wording. Could anyone please make review of the documentation? @serhiy.storchaka: review done. New changeset 4d3f084c035ad3dfd9f8479886c41b1b1823ace2 by Serhiy Storchaka in branch 'master': bpo-27867: Add a porting guide for PySlice_GetIndicesEx(). (#1973)
https://bugs.python.org/issue27867
CC-MAIN-2018-39
refinedweb
2,627
56.86
ivnowa at hvision.nl (Hans Nowak) wrote in <39D1FC71.6768 at hvision.nl>: >y must have an initial value... try inserting y = 0 before the for. >Aside from that, you probably want to use args.values() rather than >args.keys(). > >This won't work for your second line (good="a", etc) though, because it >uses strings for values, and y is initialized as an integer. > >If you want a more generic function, you could try: > >def adder3(**args): > return reduce(lambda x, y, a=args: x+y, args.values()) > >>>> print adder3(good=1, bad=2, ugly=3) >6 >>>> print adder3(good="a", bad="b", ugly="c") >"abc" Why do you add 'a=args' to the lambda parameter list? args is in local namespace and is not a parameter of lambda but a parameter of reduce. Regards, Mike
https://mail.python.org/pipermail/python-list/2000-September/037438.html
CC-MAIN-2014-10
refinedweb
137
77.74
Represents a directed graph which is embeddable in a planar surface. More... #include <PlanarGraph.h> Represents a directed graph which is embeddable in a planar surface. The computation of the IntersectionMatrix relies on the use of a structure called a "topology graph". The topology graph contains nodes and edges corresponding to the nodes and line segments of a Geometry. Each node and edge in the graph is labeled with its topological location relative to the source geometry. Note that there is no requirement that points of self-intersection be a vertex. Thus to obtain a correct topology graph, Geometry objects must be self-noded before constructing their graphs. Two fundamental operations are supported by topology graphs: Returns the edge whose first two coordinates are p0 and p1. nullif the edge was not found Returns the edge which starts at p0 and whose first segment is parallel to p1. nullif the edge was not found For nodes in the collection (first..last), link the DirectedEdges at the node that are in the result. This allows clients to link only a subset of nodes in the graph, for efficiency (because they know that only a subset is of interest). References geos::geomgraph::DirectedEdgeStar::linkResultDirectedEdges().
https://geos.osgeo.org/doxygen/classgeos_1_1geomgraph_1_1PlanarGraph.html
CC-MAIN-2019-09
refinedweb
202
54.93
Frequently Asked Questions¶ My model reports “cuda runtime error(2): out of memory”¶ As the error message suggests, you have run out of memory on your GPU. Since we often deal with large amounts of data in PyTorch, small mistakes can rapidly cause your program to use up all of your GPU; fortunately, the fixes in these cases are often simple. Here are a few common things to check: Don’t accumulate history across your training loop. By default, computations involving variables that require gradients will keep history. This means that you should avoid using such variables in computations which will live beyond your training loops, e.g., when tracking statistics. Instead, you should detach the variable or access its underlying data. Sometimes, it can be non-obvious when differentiable variables can occur. Consider the following training loop (abridged from source): total_loss = 0 for i in range(10000): optimizer.zero_grad() output = model(input) loss = criterion(output) loss.backward() optimizer.step() total_loss += loss Here, total_loss is accumulating history across your training loop, since loss is a differentiable variable with autograd history. You can fix this by writing total_loss += float(loss) instead. Other instances of this problem: 1. Don’t hold onto tensors and variables you don’t need. If you assign a Tensor or Variable to a local, Python will not deallocate until the local goes out of scope. You can free this reference by using del x. Similarly, if you assign a Tensor or Variable to a member variable of an object, it will not deallocate until the object goes out of scope. You will get the best memory usage if you don’t hold onto temporaries you don’t need. The scopes of locals can be larger than you expect. For example: for i in range(5): intermediate = f(input[i]) result += g(intermediate) output = h(result) return output Here, intermediate remains live even while h is executing, because its scope extrudes past the end of the loop. To free it earlier, you should del intermediate when you are done with it. Don’t run RNNs on sequences that are too large. The amount of memory required to backpropagate through an RNN scales linearly with the length of the RNN input; thus, you will run out of memory if you try to feed an RNN a sequence that is too long. The technical term for this phenomenon is backpropagation through time, and there are plenty of references for how to implement truncated BPTT, including in the word language model example; truncation is handled by the repackage function as described in this forum post. Don’t use linear layers that are too large. A linear layer nn.Linear(m, n) uses memory: that is to say, the memory requirements of the weights scales quadratically with the number of features. It is very easy to blow through your memory this way (and remember that you will need at least twice the size of the weights, since you also need to store the gradients.) My GPU memory isn’t freed properly¶ PyTorch uses a caching memory allocator to speed up memory allocations. As a result, the values shown in nvidia-smi usually don’t reflect the true memory usage. See Memory management for more details about GPU memory management. If your GPU memory isn’t freed even after Python quits, it is very likely that some Python subprocesses are still alive. You may find them via ps -elf | grep python and manually kill them with kill -9 [pid]. My data loader workers return identical random numbers¶ You are likely using other libraries to generate random numbers in the dataset. For example, NumPy’s RNG is duplicated when worker subprocesses are started via fork. See torch.utils.data.DataLoader’s documentation for how to properly set up random seeds in workers with its worker_init_fn option. My recurrent network doesn’t work with data parallelism¶ There is a subtlety in using the pack sequence -> recurrent network -> unpack sequence pattern in a Module with DataParallel or data_parallel(). Input to each the forward() on each device will only be part of the entire input. Because the unpack operation torch.nn.utils.rnn.pad_packed_sequence() by default only pads up to the longest input it sees, i.e., the longest on that particular device, size mismatches will happen when results are gathered together. Therefore, you can instead take advantage of the total_length argument of pad_packed_sequence() to make sure that the forward() calls return sequences of same length. For example, you can write: from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModule(nn.Module): # ... __init__, other methods, etc. # padded_input is of shape [B x T x *] (batch_first mode) and contains # the sequences sorted by lengths # B is the batch size # T is max sequence length def forward(self, padded_input, input_lengths): total_length = padded_input.size(1) # get the max sequence length packed_input = pack_padded_sequence(padded_input, input_lengths, batch_first=True) packed_output, _ = self.my_lstm(packed_input) output, _ = pad_packed_sequence(packed_output, batch_first=True, total_length=total_length) return output m = MyModule().cuda() dp_m = nn.DataParallel(m) Additionally, extra care needs to be taken when batch dimension is dim 1 (i.e., batch_first=False) with data parallelism. In this case, the first argument of pack_padded_sequence padding_input will be of shape [T x B x *] and should be scattered along dim 1, but the second argument input_lengths will be of shape [B] and should be scattered along dim 0. Extra code to manipulate the tensor shapes will be needed.
https://pytorch.org/docs/master/notes/faq.html
CC-MAIN-2019-43
refinedweb
910
55.13
Table Of Contents Tree View¶ New in version 1.0.4. TreeView is a widget used to represent a tree structure. It is currently very basic, supporting a minimal feature set. Introduction¶ A TreeView is populated with TreeViewNode instances, but you cannot use a TreeViewNode directly. You must combine it with another widget, such as Label, Button or even your own widget. The TreeView always creates a default root node, based on TreeViewLabel. TreeViewNode is a class object containing needed properties for serving as a tree node. Extend TreeViewNode to create custom node types for use with a TreeView. For constructing your own subclass, follow the pattern of TreeViewLabel which combines a Label and a TreeViewNode, producing a TreeViewLabel for direct use in a TreeView instance. To use the TreeViewLabel class, you could create two nodes directly attached to root: tv = TreeView() tv.add_node(TreeViewLabel(text='My first item')) tv.add_node(TreeViewLabel(text='My second item')) Or, create two nodes attached to a first: tv = TreeView() n1 = tv.add_node(TreeViewLabel(text='Item 1')) tv.add_node(TreeViewLabel(text='SubItem 1'), n1) tv.add_node(TreeViewLabel(text='SubItem 2'), n1) If you have a large tree structure, perhaps you would need a utility function to populate the tree view: def populate_tree_view(tree_view, parent, node): if parent is None: tree_node = tree_view.add_node(TreeViewLabel(text=node['node_id'], is_open=True)) else: tree_node = tree_view.add_node(TreeViewLabel(text=node['node_id'], is_open=True), parent) for child_node in node['children']: populate_tree_view(tree_view, tree_node, child_node) tree = {'node_id': '1', 'children': [{'node_id': '1.1', 'children': [{'node_id': '1.1.1', 'children': [{'node_id': '1.1.1.1', 'children': []}]}, {'node_id': '1.1.2', 'children': []}, {'node_id': '1.1.3', 'children': []}]}, {'node_id': '1.2', 'children': []}]} class TreeWidget(FloatLayout): def __init__(self, **kwargs): super(TreeWidget, self).__init__(**kwargs) tv = TreeView(root_options=dict(text='Tree One'), hide_root=False, indent_level=4) populate_tree_view(tv, None, tree) self.add_widget(tv) The root widget in the tree view is opened by default and has text set as ‘Root’. If you want to change that, you can use the TreeView.root_options property. This will pass options to the root widget: tv = TreeView(root_options=dict(text='My root label')) Creating Your Own Node Widget¶ For a button node type, combine a Button and a TreeViewNode as follows: class TreeViewButton(Button, TreeViewNode): pass You must know that, for a given node, only the size_hint_x will be honored. The allocated width for the node will depend of the current width of the TreeView and the level of the node. For example, if a node is at level 4, the width allocated will be: treeview.width - treeview.indent_start - treeview.indent_level * node.level You might have some trouble with that. It is the developer’s responsibility to correctly handle adapting the graphical representation nodes, if needed. - class kivy.uix.treeview.TreeView(**kwargs)[source]¶ Bases: kivy.uix.widget.Widget TreeView class. See module documentation for more information. - Events - on_node_expand: (node, ) Fired when a node is being expanded - on_node_collapse: (node, ) Fired when a node is being collapsed - add_node(node, parent=None)[source]¶ Add a new node to the tree. - Parameters - node: instance of a TreeViewNode Node to add into the tree - parent: instance of a TreeViewNode, defaults to None Parent node to attach the new node. If None, it is added to the rootnode. - Returns the node node. - hide_root¶ Use this property to show/hide the initial root node. If True, the root node will be appear as a closed node. hide_rootis a BooleanPropertyand defaults to False. - indent_level¶ Width used for the indentation of each level except the first level. Computation of indent for each level of the tree is: indent = indent_start + level * indent_level indent_levelis a NumericPropertyand defaults to 16. - indent_start¶ Indentation width of the level 0 / root node. This is mostly the initial size to accommodate a tree icon (collapsed / expanded). See indent_levelfor more information about the computation of level indentation. indent_startis a NumericPropertyand defaults to 24. - iterate_all_nodes(node=None)[source]¶ Generator to iterate over all nodes from node and down whether expanded or not. If node is None, the generator start with root. - iterate_open_nodes(node=None)[source]¶ Generator to iterate over all the expended nodes starting from node and down. If node is None, the generator start with root. To get all the open nodes: treeview = TreeView() # ... add nodes ... for node in treeview.iterate_open_nodes(): print(node) - load_func¶ Callback to use for asynchronous loading. If set, asynchronous loading will be automatically done. The callback must act as a Python generator function, using yield to send data back to the treeview. The callback should be in the format: def callback(treeview, node): for name in ('Item 1', 'Item 2'): yield TreeViewLabel(text=name) load_funcis a ObjectPropertyand defaults to None. - minimum_height¶ Minimum height needed to contain all children. New in version 1.0.9. minimum_heightis a NumericPropertyand defaults to 0. - minimum_size¶ Minimum size needed to contain all children. New in version 1.0.9. minimum_sizeis a ReferenceListPropertyof ( minimum_width, minimum_height) properties. - minimum_width¶ Minimum width needed to contain all children. New in version 1.0.9. minimum_widthis a NumericPropertyand defaults. - remove_node(node)[source]¶ Removes a node from the tree. New in version 1.0.7. - Parameters - node: instance of a TreeViewNode Node to remove from the tree. If node is root, it is not removed. - root¶ Root node. By default, the root node widget is a TreeViewLabelwith text ‘Root’. If you want to change the default options passed to the widget creation, use the root_optionsproperty: treeview = TreeView(root_options={ 'text': 'Root directory', 'font_size': 15}) root_optionswill change the properties of the TreeViewLabelinstance. However, you cannot change the class used for root node yet. rootis an AliasPropertyand defaults to None. It is read-only. However, the content of the widget can be changed. - root_options¶ Default root options to pass for root widget. See rootproperty for more information about the usage of root_options. root_optionsis an ObjectPropertyand defaults to {}. - selected_node¶ Node selected by TreeView.select_node()or by touch. selected_nodeis a AliasPropertyand defaults to None. It is read-only. - exception kivy.uix.treeview.TreeViewException[source]¶ Bases: Exception Exception for errors in the TreeView. - class kivy.uix.treeview.TreeViewLabel(**kwargs)[source]¶ Bases: kivy.uix.label.Label, kivy.uix.treeview.TreeViewNode Combines a Labeland a TreeViewNodeto create a TreeViewLabelthat can be used as a text node in the tree. See module documentation for more information. - class kivy.uix.treeview.TreeViewNode(**kwargs)[source]¶ Bases: builtins.object TreeViewNode class, used to build a node class for a TreeView object. - color_selected¶ Background color of the node when the node is selected. color_selectedis a ColorPropertyand defaults to [.1, .1, .1, 1]. - even_color¶ Background color of even nodes when the node is not selected. bg_coloris a ColorPropertyand defaults to [.5, .5, .5, .1]. - is_leaf¶ Boolean to indicate whether this node is a leaf or not. Used to adjust the graphical representation. is_leafis a BooleanPropertyand defaults to True. It is automatically set to False when child is added. - is_loaded¶ Boolean to indicate whether this node is already loaded or not. This property is used only if the TreeViewuses asynchronous loading. is_loadedis a BooleanPropertyand defaults to False. - is_open¶ Boolean to indicate whether this node is opened or not, in case there are child nodes. This is used to adjust the graphical representation. is_openis a BooleanPropertyand defaults to False. - is_selected¶ Boolean to indicate whether this node is selected or not. This is used adjust the graphical representation. is_selectedis a BooleanPropertyand defaults to False. - level¶ Level of the node. levelis a NumericPropertyand defaults to -1. - no_selection¶ - Boolean used to indicate whether selection of the node is allowed or not. no_selectionis a BooleanPropertyand defaults to False. - nodes¶ List of nodes. The nodes list is different than the children list. A node in the nodes list represents a node on the tree. An item in the children list represents the widget associated with the node. nodesis a ListPropertyand defaults to []. - odd¶ This property is set by the TreeView widget automatically and is read-only. oddis a BooleanPropertyand defaults to False. - odd_color¶ Background color of odd nodes when the node is not selected. odd_coloris a ColorPropertyand defaults to [1., 1., 1., 0.]. - parent_node¶ Parent node. This attribute is needed because the parentcan be None when the node is not displayed. New in version 1.0.7. parent_nodeis an ObjectPropertyand defaults to None.
https://kivy.org/doc/master/api-kivy.uix.treeview.html
CC-MAIN-2021-39
refinedweb
1,367
52.97
KAutoSaveFile #include <KAutoSaveFile> Detailed Description Creates and manages a temporary "auto-save" file. Autosave files are temporary files that applications use to store the unsaved data in a file they have open for editing. KAutoSaveFile allows you to easily create and manage such files, as well as to recover the unsaved data left over by a crashed or otherwise gone process. Each KAutoSaveFile object is associated with one specific file that the application holds open. KAutoSaveFile is also a QObject, so it can be reparented to the actual opened file object, so as to manage the lifetime of the temporary file. Typical use consists of: - verifying whether stale autosave files exist for the opened file - deciding whether to recover the old, autosaved data - if not recovering, creating a KAutoSaveFile object for the opened file - during normal execution of the program, periodically save unsaved data into the KAutoSaveFile file. KAutoSaveFile holds a lock on the autosave file, so it's safe to delete the file and recreate it later. Because of that, disposing of stale autosave files should be done with releaseLock(). No lock is held on the managed file. Examples: Opening a new file: The function recoverFiles could loop over the list of files and do this: If the file is unsaved, periodically write the contents to the save file: When the user saves the file, the autosaved file is no longer necessary and can be removed or emptied. Definition at line 120 of file kautosavefile.h. Constructor & Destructor Documentation Constructs a KAutoSaveFile for file filename. The temporary file is not opened or created until actually needed. The file filename does not have to exist for KAutoSaveFile to be constructed (if it exists, it will not be touched). - Parameters - Definition at line 82 of file kautosavefile.cpp. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Constructs a KAutoSaveFile object. Note that you need to call setManagedFile() before calling open(). - Parameters - Definition at line 89 of file kautosavefile.cpp. Destroys the KAutoSaveFile object, removes the autosave file and drops the lock being held (if any). Definition at line 95 of file kautosavefile.cpp. Member Function Documentation Returns all stale autosave files left behind by crashed or otherwise gone instances of this application. If not given, the application name is obtained from QCoreApplication, so be sure to have set it correctly before calling this function. See staleFiles() for information on the returned objects. The application owns all returned KAutoSaveFile objects and is responsible for deleting them when no longer needed. Remember that deleting the KAutoSaveFile will release the file lock and remove the stale autosave file. Definition at line 224 of file kautosavefile.cpp. Retrieves the URL of the file managed by KAutoSaveFile. This is the same URL that was given to setManagedFile() or the KAutoSaveFile constructor. This is the name of the real file being edited by the application. To get the name of the temporary file where data can be saved, use fileName() (after you have called open()). Definition at line 101 of file kautosavefile.cpp. Opens the autosave file and locks it if it wasn't already locked. The name of the temporary file where data can be saved to will be set by this function and can be retrieved with fileName(). It will not change unless releaseLock() is called. No other application will attempt to edit such a file either while the lock is held. - Parameters - - Returns - true if the file could be opened (= locked and created), false if the operation failed Definition at line 125 of file kautosavefile.cpp. Closes the autosave file resource and removes the lock file. The file name returned by fileName() will no longer be protected and can be overwritten by another application at any time. To obtain a new lock, call open() again. This function calls remove(), so the autosave temporary file will be removed too. Definition at line 114 of file kautosavefile.cpp. Sets the URL of the file managed by KAutoSaveFile. This should be the name of the real file being edited by the application. If the file was previously set, this function calls releaseLock(). - Parameters - Definition at line 106 of file kautosavefile.cpp. Checks for stale autosave files for the file url. Returns a list of autosave files that contain autosaved data left behind by other instances of the application, due to crashing or otherwise uncleanly exiting. It is the application's job to determine what to do with such unsaved data. Generally, this is done by asking the user if he wants to see the recovered data, and then allowing the user to save if he wants to. If not given, the application name is obtained from QCoreApplication, so be sure to have set it correctly before calling this function. This function returns a list of unopened KAutoSaveFile objects. By calling open() on them, the application will steal the lock. Subsequent releaseLock() or deleting of the object will then erase the stale autosave file. The application owns all returned KAutoSaveFile objects and is responsible for deleting them when no longer needed. Remember that deleting the KAutoSaveFile will release the file lock and remove the stale autosave file. Definition at line 196 of file kautosavefile.cpp..
https://api.kde.org/frameworks/kcoreaddons/html/classKAutoSaveFile.html
CC-MAIN-2021-49
refinedweb
881
56.35
[ ] Farid Zaripov commented on STDCXX-687: -------------------------------------- Merged in 4.2.x branch thus: > [gcc] use string __builtins > --------------------------- > > Key: STDCXX-687 > URL: > Project: C++ Standard Library > Issue Type: Sub-task > Components: 21. Strings > Affects Versions: 4.1.2, 4.1.3, 4.1.4, 4.2.0 > Reporter: Martin Sebor > Assignee: Martin Sebor > Fix For: 4.2.1 > > Original Estimate: 2h > Time Spent: 2h > Remaining Estimate: 0h > > The following gcc builtin equivalents of the C string functions would be useful in the implementation of std::char_traits: > __builtin_memcpy: char_traits::copy() > __builtin_memcmp: char_traits::compare() > __builtin_memmove: char_traits::move() > __builtin_memset: char_traits::assign() > __builtin_strlen: char_traits::length() > Unfortunately, as of gcc 4.2.2, there is no builtin equivalent of memchr() which is used in char_traits::find(), so using the builtins won't let us get away from #including the <cstring> header to bring in the declaration of the function (thus reducing namespace pollution caused by all the other symbols declared in the header). > There also are no builtins for the wide character counterparts of any of these functions (such as wmemcmp or wcslen). > See the following page for more details: > -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/stdcxx-issues/200804.mbox/%3C209559713.1208425521775.JavaMail.jira@brutus%3E
CC-MAIN-2017-04
refinedweb
205
51.18
How to display a splash screen in Qt Article Metadata Tested with Devices(s): Emulator Compatibility Article Keywords: QSplashScreen Last edited: hamishwillee (11 Oct 2012) Overview This code snippet demonstrates how to display a splash screen before your application loaded using QSplashScreen. may usually appear in the center of the screen. Preconditions Various Function - Makes the splash screen wait until the widget mainWin is displayed splash.finish(&window); - Draws the message text onto the splash screen with color and aligns the text according to the flags in alignment splash.showMessage("Wait..."); Source File #include <QApplication> #include <QPixmap> #include <QSplashScreen> #include <QWidget> #include <QMainWindow> int main(int argc, char *argv[]) { QApplication app(argc, argv); QPixmap pixmap("c://designer.png"); QSplashScreen splash(pixmap); splash.show(); splash.showMessage("Wait..."); qApp->processEvents();//This is used to accept a click on the screen so that user can cancel the screen QMainWindow window; window.setStyleSheet("* { background-color:rgb(199,147,88); padding: 7px}"); window.show(); splash.finish(&window); return app.exec(); } Screenshot . Splash screen is used during application startup.It's just a start up image with some messages showing to the users.It represents network connections,loading some items,etc.. as a message displayed on the Splash screen.How can it be done in the application is shown in this article and it uses simple code for Qt. --vkmunjpara 18:59, 17 September 2009 (UTC) Jahan.geo - Mistake in the line qApp->processEvents() It should be qApp.processEvents()!!!!!mind the dot after object qApp, not as pointer. jahan.geo 12:49, 21 October 2011 (EEST) Entricular - The above code is wrong and needs to be corrected see the following code below Use a program such as the GIMP to create your splashpage.png image and include the splashpage.png in your images application directory. Run the following commands: Execute the program: I am tired of people posting flawed, incomplete and unconfirmed Qt code, test your code before you post it to make sure it works so others can follow and learn. Posting snippets of code doesn't help it is very frustrating, in the future post the full source code. This code was tested and confirmed( it works ) on Ubuntu Linux 10.04 LTS with Qt 4.7.4 SDK installed. I tested the code with Windows Vista and Qt 4.7.4 and it did not work. entricular 16:21, 28 April 2012 (EEST)
http://developer.nokia.com/Community/Wiki/How_to_display_a_splash_screen_in_Qt
CC-MAIN-2013-48
refinedweb
400
58.69
Tutorial: Split View Presentation - 4 minutes to read This walkthrough is a transcript of the Split View Presentation video available on the DevExpress YouTube Channel. The tutorial covers the Split Presentation feature inspired by Microsoft Excel. It allows you to split the grid into two independently scrollable panes. You can split the grid vertically and thus edit the last grid row in one pane, while simultaneously looking at the first grid row in another pane. Same applies to columns when the View is split horizontally. This can help end-users browse and analyze data with many columns or rows. Starting Point Begin with a sample application with its Data Grid connected to a sample Microsoft AdventureWorks database – a large database with many records and data fields. Such a layout can benefit from using split view presentation. Creating Grid Split Container One way to enable this feature is to drop the GridSplitContainer control onto the form instead of the GridControl. This would create a grid control within the split container with all required settings. Since the form already has a grid in it, invoke the grid control's smart-tag and click the Add Split Container link. You won't notice the changes at design time, but your grid is now placed into a split container and you can split it into two regions. Before you proceed, go the grid View's properties, the GridView.OptionsMenu section. Make sure that the GridOptionsMenu.ShowSplitItem property is set to true. Activating Split Presentation at Runtime Run the application. Right-click the Group by Box and select Split from the context menu. By default, a splitter divides the grid vertically. You can individually scroll each pane vertically. Horizontal scrolling affects both panes. Enabling Horizontal Split Presentation Return to design time, select the GridSplitContainer and change its GridSplitContainer.Horizontal property to true. Now the Split menu item divides the grid into two panes horizontally. You can individually scroll each pane horizontally, while vertical scrolling is synchronized. Split Presentation API You can enable split presentation at application start-up or implement a custom UI element that would switch this mode on or off. This can be an item in the RibbonControl. To toggle the split View mode, call the GridSplitContainer's GridSplitContainer.ShowSplitView and GridSplitContainer.HideSplitView methods. Make sure to call GridSplitContainer.ShowSplitView in the Form's constructor so that the split container is automatically enabled at application startup. One more thing you can do is scroll the second pane so that end-users don't see the same data in both sections. You'll need to access the secondary grid using the split container's GridSplitContainer.SplitChildGrid property. Then, obtain that grid's GridControl.MainView and set its GridView.TopRowIndex property. using DevExpress.XtraBars; // ... public Form1() { // ... gridSplitContainer1.ShowSplitView(); ((GridView)gridSplitContainer1.SplitChildGrid.MainView).TopRowIndex = gridView1.RowCount - 11; } private void barToggleSwitchItem1_CheckedChanged(object sender, ItemClickEventArgs e) { BarToggleSwitchItem item = sender as BarToggleSwitchItem; if (item == null) return; if (item.Checked == false) gridSplitContainer1.HideSplitView(); else gridSplitContainer1.ShowSplitView(); } Run the application. Now you see that on application start-up the primary grid displays first data rows as it did before, but the secondary grid is now scrolled down to the bottom. Synchronization Settings Notice that by default you are free to focus different data rows in each of the grid regions. You can change that by setting the GridSplitContainer.SynchronizeFocusedRow property to true. Now focusing the row in one region will cause the other region to scroll up or down to this row and focus it as well. You can also notice that horizontal scrolling affects both panes at once. To change this, set the GridSplitContainer.SynchronizeScrolling property to false. You can now scroll the two panes independently using their individual scroll bars. Finally, any data shaping operations applied in one grid pane are reflected in the other pane. For instance, you can group data against a column and the same grouping will be applied to the other pane. Group row expand and collapse operations are also synchronized. This behavior is controlled by the GridSplitContainer.SynchronizeViews and GridSplitContainer.SynchronizeExpandCollapse properties. Switch the GridSplitContainer.SynchronizeViews property to false. Now if you group data against a column, nothing happens in the other pane.
https://docs.devexpress.com/WindowsForms/114734/controls-and-libraries/data-grid/getting-started/walkthroughs/split-presentation/tutorial-split-view-presentation
CC-MAIN-2020-40
refinedweb
699
50.84
In Patterns of Enterprise Application Architecture [PEAA] Martin Fowler tells us that the Model View Controller (MVC) splits user interface interaction into three distinct roles (Figure 1): - Model - (Figure 2). am going to assume familiarity with both Java 6 and Swing. It is very important to implement MVC carefully with a good set of tests. Benjamin Booth discusses this in his article 'The M/VC Antipattern'. [MVC Antipattern]. The problem As I have mentioned in previous articles (and probably to everyone's boredom on accu-general) I am writing a file viewer application that allows fixed length record files in excess of 4GB to be viewed without loading the entire file into memory. I have had a few failed attempts to write it in C# recently (although recent discoveries have encouraged me to try again), but it was not until I had a go in Java with its JTable and AbstractTableModel classes that I really made some progress. These two classes are themselves a model (AbstractTableModel) and view (Jtable). However, in the example I'll discuss in this article they actually form part of one of the views. The file viewer application needs to be able to handle multiple files in a single instance. The easiest and most convenient way to do this is with a tabbed view (Figure 3). I have completed the back-end business logic which models the file and its associated layout (which describes how records are divided up into fields) as a project with the interface in Listing 1. I will look in a bit more detail at most of the methods in the interface as the article progresses, but for now the important methods are getRecordReader and getLayout. getRecordReader gives the AbstractTableModel random access to the records and fields in the file and getLayout gives access to the layout which allows the view to name and order its fields. The table model implementation I have looks like Listing 2. I've omitted exception handling and a few other bits and pieces that are not relevant to the example. Basically a RecordGrid object holds a reference to a Project object and uses it to populate the table cells and column titles on request. Rows are populated one at a time from column 0 to column x, so every time column 0 is requested a new record is loaded. This reduces the amount of file access that would be required if a record was loaded every time a cell was requested. Every time a new project is created the code in Listing 3 is used to create a tab for the file. Again, exception handling has been omitted. A reference to the RecordReader is created in order to get a name for the tab by calling the getDataDescription method. The new Project object is passed to a new RecordGrid object, which is then used as a JTable model. A new scrollable pane is created using the table and in turn used to create a new tab. Finally the new table is made the currently selected tab. All straight forward and not particularly complicated or problematic. The problem comes when you want to get the Project object out of the current tab so that you can, for example, call close on it or use it to set the main window title bar. Listing 4 shows one way it can be done. It relies on the fact that every object is a component of another object. Sometimes, as shown above, requesting a component's child component returns an array of components and, although this code doesn't show it, the required component needs to be found within the array. This is messy and potentially unreliable. After writing this code I felt there had to be a better way. So I asked some people. Roger Orr came up with a much simpler solution: JTable table = (Jtable)pane.getViewport().getView(); Something still didn't feel right though. The code is querying the GUI components to navigate the object relationships. This breaks encapsulation as changing the GUI layout would break this code. There are also other ways, but none of them seemed to be the right solution either. The solution The general consensus of opinion was that I was mad to have a user interface component (the tabs) managing a data component (the project) and that I should be using the mediator version of the Model View Controller (MVC) described above. I thought I pretty much had how MVC worked nailed down because of the MFC document view model stuff (basically just a model and views) I had done early in my career, but reading up on MVC and Swing in the various books I had just left me confused in terms of implementation. Then I googled and found Java SE Application Design With MVC [JADwMVC] on the Sun website. Suddenly everything was much clearer and I set about knocking up a prototype. The main concern I had was keeping the tabbed view in sync with the model, so that when I selected a tab the currently selected project in the model was changed to reflect it correctly and when a new project was added to or deleted from the model it was also added or removed from the tabbed view. Should I remove every tab from the tabbed view and redraw when the model was updated or try and add and remove tabs one at a time in line with the model? I decided to start developing my MVC prototype and cross that bridge when I came to it. The beauty of the mediator MVC patterns is that each component can be worked on and changed individually to a greater or lesser extent. The concerns are well separated. The model The file viewer model: - Needs to handle multiple projects. - Needs to have a mechanism for adding projects. - Needs to have a mechanism for deleting projects. - Needs to have a mechanism for setting the current project. - Needs to have a mechanism for getting the current project. - Should have a mechanism for getting the number of projects for testing purposes. - Needs to fire events when properties change (e.g. a project is added, deleted or selected). The controller will have to mediate a number of these operations from views and menus to the model. So both will have similar interfaces for organizing projects within the model. The model interface looks like Listing 5. New projects will be created outside of the controller, so the newProject method takes a Project reference. Fully unit testing a model like this is relatively easy, but beyond the scope of this article. The other methods perform the other required operations, with the exception of firing events. Implementing the model is straightforward. I'll go through the data members first and then each method in turn. public class Projects implements ProjectOrganiser { private final List<Project> projects = new ArrayList<Project>(); private int currentProjectIndex = -1; ... } The model is essentially a container for storing and manipulating projects, so it needs a way of storing the projects. An ArrayList is ideal. The model also needs to indicate which project is currently selected. To keep track it stores the ArrayList index of the currently selected project. If the ArrayList is empty or no project is currently selected then currentProjectIndex's value is -1. Initially there are no projects. The addProject method adds the new project to the ArrayList and sets it as the current project: public void addProject(Project prj) { projects.add(prj); setCurrentProjectIndex(projects.size() - 1); } The getProjectCount method simply asks the ArrayList its size and returns it. The setCurrentProjectIndex method checks the index it is passed to make sure it is within the permitted range. It can either be -1 or a valid index within the ArrayList. If the index is not valid it constructs an exception message explaining the problem and throws an IndexOutOfBoundsException. If the index is valid it is used to set the current project. (See Listing 6.) The getCurrentProjectIndex simply returns currentProjectIndex. The getCurrentProject method relies on the fact that the setCurrentProjectIndex method has policed the value of currentProjectIndex successfully. Therefore it only checks to make sure currentProjectIndex is greater than or equal to 0. If it is it returns the corresponding project from the ArrayList, otherwise null (Listing 7). The deleteCurrentProject method is by far the most interesting in the model. It is also the most important method to get a unit test around. It checks to make sure there are projects in the ArrayList. If there are then it calls close on and then deletes the current project from the ArrayList and calculates which the next selected project should be. If, following the deletion, another project moves into the same ArrayList index it becomes the next selected project. Otherwise the project at the previous index in the ArrayList is used. If there are no longer any projects in the ArrayList the current index is set to -1. (Listing 8.) As you can see, manipulating projects within the model via the ProjectOrganiser interface is very straight forward. However, there is currently no way of notifying the controller when a property changes. The Java SE Application Design With MVC article recommends using the Java Beans component called Property Change Support. The PropertyChangeSupport class makes it easy to fire and listen for property change events. It allows chaining of listeners, as well as filtering by property name. To enable the registering of listeners and the firing of events a few changes need to be made to ProjectOrganiser and Projects. First, event names and a change listener registering method need to be added to ProjectOrganiser (see Listing 9). Later versions of Java have support for enums. However, property change support does not, but uses strings instead. Enums could be used in conjunction with the toString method, but this makes their use overly verbose and does not give any clear advantages. Then the Projects class needs a PropertyChangeSupport object and the methods to add listeners and fire events (Listing 10). The PropertyChangeSupport object needs to know the source of the events it is firing so this is passed in. The addPropertyChangeListener method simply forwards to the same method of the PropertyChangeSupport object. Any class that implements the PropertyChangeListener interface can receive events. The firePropertyChange method takes the name of the property that has changed, the property's old value and its new value. All of these, together with the source object, are passed to the event sink as a PropertyChangeEvent. If the old and new values are the same the event is not fired. This can be overcome by setting one or more of the old and new objects to null. We'll look at the implications of this shortly. The Projects class now has the methods to fire events, but is not actually firing anything. The controller needs to be notified every time a project is created, deleted or selected. This means the addProject, deleteProject and setCurrentProjectIndex methods must be modified: public void addProject(Project prj) { Project old = getCurrentProject(); projects.add(prj); firePropertyChange(NEW_PROJECT_EVENT,old,prj); setCurrentProjectIndex(projects.size() - 1); } The addProject method now stores a reference to the current project prior to the new project being added. The old project and the new project are both passed to the event as properties. This means that the event sink can perform any necessary clean up using the previously selected project and update itself with the new project without having to query the controller. Also, the old project and new project references will never refer to the same project, so the event will never be ignored. The setCurrentProjectIndex method fires two events. The CURRENT_PROJECT_INDEX_CHANGE_EVENT event is fired when the current project index changes and the CURRENT_PROJECT_CHANGE_EVENT is fired when the current project changes. These are deceptively similar events. Consider when a project is deleted. If the project is in the middle of the ArrayList the project in front of it moves into its index. The project changes, but the index stays the same. The CURRENT_PROJECT_INDEX_CHANGE_EVENT event passes both the old and new indexes. The CURRENT_PROJECT_CHANGE_EVENT is only passed the new project. The old value is always null. This is because, following a project deletion the old project no longer exists. The deleteCurrentProject method requires some significant changes, shown in Listing 12. Ideally, when a project is deleted the old value and the next project to be selected are be passed as the old and new values. The event needs to be fired before the project is actually removed so that any thing using it can clean up. This can make it difficult to work out which project is which. One way to be sure is to make a copy of the project ArrayList, remove the project to be deleted from it and then work out which the next selected project will be. To do this I wrote a helper class called NextProject. Overall it is more code, but it makes for a much neater solution to the deleteCurrentProject method and means the next project index only needs to be calculated once. Again, the deleted project and the next selected project will never be the same, so the event will not be ignored. That completes the fully unit testable model. The model is by far the most complex and difficult part of the MVC to implement and get right. A good set of unit tests is essential. Once you have it right the view and controller follow quite easily. The controller The controller is the mediator between the model and the views. Therefore it makes sense to develop it next; otherwise the model and view could end up so incompatible that writing a controller would be very difficult. The controller maintains a reference to the model and list of references to registered views. If the model changes it passes the event onto the views via the ProjectOrganiserEventSink interface. public interface ProjectOrganiserEventSink { public abstract void modelPropertyChange( final PropertyChangeEvent evt); } Menu items and registered views all maintain a reference to the controller and use it to pass actions to the model. Therefore the model has a number of methods that just forward to the model. The code below shows the Controller properties and constructor: public class Controller implements PropertyChangeListener { private final ProjectOrganiser model; private List<ProjectOrganiserEventSink> views = new ArrayList<ProjectOrganiserEventSink>(); public Controller(ProjectOrganiser model) { this.model = model; this.model.addPropertyChangeListener(this); }... } The Controller implements the PropertyChangeListener interface. The PropertyChangeListener interface allows the controller to receive events from the model. The Controller takes a reference to the model as a constructor parameter and uses it to register itself with the model. Views register themselves via the addView method: public void addView( ProjectOrganiserEventSink view) { views.add(view); } Events are passed to registered views via the overridden propertyChange method from the PropertyChangeListener interface: @Override public void propertyChange( PropertyChangeEvent evt) { for (ProjectOrganiserEventSink view : views) { view.modelPropertyChange(evt); } } The model forwarding methods do just that, with the exception of the newProject method. The newProject method is different because it has to create a project to pass to the model. The idea behind the Project interface is that it can be used to reference implementations for different file type and layout type combinations. Therefore I have written the NewProjectDlg class to allow the user to select the type of project they want. It then calls createNew on the project to do some project specific creation. A reference to the project can then be queried and passed to the model: public void newProject(JFrame owner) { NewProjectDlg d = new NewProjectDlg(owner,true); d.setVisible(true); Project prj = d.getProject(); if (prj != null) { model.newProject(prj); } } The internal workings of the NewProjectDlg class are beyond the scope of this article. The Controller is almost fully unit testable. The fly in the ointment is of course the NewProjectDlg. You just don't want it popping up in the middle of a test. There are a number of easy ways of getting round it, but they are beyond the scope of this article also. However, Controller is so simple that it hardly requires a unit test. Under normal circumstances I would write one anyway, but it would require some non-trivial mock objects that just do not seem worth it. The view Getting a view to receive events from the controller is very simple. It only requires the implementation of the ProjectOrganiserEventSink interface and then registering with the controller. The complexity comes with what you actually do with the view. I'm going to explain two examples. One that just updates the title of the main window when the current project changes and one that keeps tabs in sync with the model. (That was the method I decided to try and implement first. It worked as you'll see!) Main window view As I hinted at earlier, I come from an MFC background. Writing GUIs in Java with Swing is therefore an absolute dream by comparison. Instead of having to rely hugely on the wizard and get the linking right, with Swing a window can be created from a main function and a few simple objects. The main window is a good place to create and reference the controller. In order to receive events from the controller the view must implement the ProjectOrganiserEventSink interface and register itself with the model (Listing 13). When the current project changes, due to a project being added or deleted or the user selecting a different tab, the description of the project should be updated in the main window's title bar. This is done by handling the project changed event (Listing 14). When the view receives the PropertyChangeEvent it checks to see what type of event it is. If it is a CURRENT_PROJECT_CHANGED_EVENT it gets the project from the event object. If the current project is null, for example if there are no projects in the model, it sets the default title, otherwise it gets a description from the project and concatenates that to the default title. So far we have a main window that creates and handles events from a controller, but nothing that actually causes an event to be fired. To get events we need to be able to create and delete projects. One of the easiest ways to do this is via a menu. Menus are easy to create and anonymous classes give instant access to the controller. (See Listing 15.) Projects can now be added to and deleted from the model via the main window's file menu. The main window handles an event from the controller that allows it to set its title based on the currently selected project. Tabbed view Projects are not much use if they cannot be viewed or selected, so what is needed is a tabbed view capable of displaying projects (Listing 16). Swing has the JTabbedPane class that will do the job perfectly. Subclassing, as shown in Listing 16, gives a better level of encapsulation and control of the view's functionality. The view must also implement the ProjectOrganiserEventSink, maintain a reference to the controller, which must be passed into the constructor, and register itself with the controller. In order to be seen and used the DataPane also needs to be added to the main window: public class MainWindow extends JFrame implements ProjectOrganiserEventSink { ... public MainWindow() { ... this.add(new DataPane(controller)); this.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE ); this.setSize(1000, 600); } ... } As well as responding to events fired by the controller, the view also needs to notify the controller when a user has selected a different tab. This is done by writing a change listener: public void initialise() { controller.addView(this); addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent arg0) { controller.setCurrentProjectIndex( getSelectedIndex()); } }); } Every time the tabbed control's state changes, for example the user selects a different tab, the change listener's stateChanged method is called. As the change listener is implemented as an anonymous class within DataPane it has access to DataPane's properties. Therefore it can query the tabbed view for the current index and pass it to the controller. The modelPropertyChange override handles events from the controller (Listing 17). As before, the event's name is used to determine what sort of event it is. The DataPane handles the new project, project deleted and project index change events, passing where appropriate, the event's new value to another method for handling. The new value is a new project that has been created, the newly selected project following a project deletion, or the new index following a newly created or deleted project. setSelectedIndex is a method inherited via JTabbedPane and does exactly what you would expect. The addProject method is implemented as follows, with exception handling omitted for clarity: private void addProject(Project project) { if (project != null) { RecordReader recordReader = project.getRecordReader(); TableModel model = new RecordGrid(project); JTable table = new JTable(model); this.addTab( recordReader.getDataDescription(), new JscrollPane(table)); } } The code should look familiar. It is almost identical to part of the code in 'The problem' section above. The only difference is that the newly added tab is not selected as the current tab. That is now set following a project index changed event. private void deleteCurrentTab() { if (getTabCount() > 0) { remove(this.getSelectedIndex()); } } The deleteCurrentTab method checks to make sure there is at least one tab to delete, gets the index of the current tab and deletes it. That completes the implementation of the DataPane view. Projects can now be added to and deleted from the application, tabs can be changed and the changes reflected in the main window title and the client area tabbed view (see Figure 4). Conclusion By implementing MVC and demonstrating how easy it is to manipulate, control and display the projects, I believe I have demonstrated the advantages of MVC over the original design I had, where the model was effectively buried in the view and difficult to get hold of when needed. The mediator pattern keeps the model and views nicely decoupled. There is slight coupling of the view to the ProjectOrganiser interface as the event names are part of the interface. If this became an issue it would be simple to move the event names to their own class. I believe this is unnecessary at this stage. I was also concerned about keeping the projects in the model and the associated tab in the view in sync. However, this problem was easily overcome: - By relying on the fact that new tabs are always added at the end of the tabbed view and new projects are always added at the end of the model ArrayList. - Using an event from the model to set the currently selected project in the tabbed view. - Using an event from the model to remove the tabs for deleted projects from the tabbed view. Finally I'll leave you with a comment from a colleague of mine: "I really like the idea of Model View Controller, but it takes so much code to do very little." I think I've shown here that there is quite a lot of code involved in the MVC design compared to the model within the view design, but the separation and the ease with which the data can be manipulated is a clear advantage. Acknowledgments Thanks to Roger Orr, Tom Hawtin, Kevlin Henney, Russel Winder and Jez Higgins for general guidance advice and patience and to Caroline Hargreaves for review. References [PEAA] Patterns of Enterprise Application Architecture by Martin Fowler. ISBN-13: 978-0321127426 [MVC Antipattern] The M/VC Antipattern [JADwMVC] Java SE Application Design With MVC
https://accu.org/index.php/journals/1524
CC-MAIN-2016-44
refinedweb
3,924
62.98
While I was thinking about the next project, lots of things came to my mind. But this time, I thought of doing something new. Going through the scraps, I found 11 servo motors that I purchased for a project 4 years ago. Let's see, Arduino, servo motor, servo motor driver, regulator. Hmm... Why not something that could walk? So, I decided to get rid of all the wheels and make a 3 legged walking robot. In this post, I will show you how to build a three legged robot (a tripod) using Arduino and servo motors.Materials Needed Arduino Nano Or any Arduino board which will be the controlling center of our project Voltage Regulator Module For providing controlled and stable DC source for the working of servo motors as well as internal chips. PCA9685 16 Channel 12 Bit PWM I2C-bus controlled Servo motor Driver Now we need some thing to drive all the 9 servos. I am using Adafruits PCA9685 16 Channel PWM Module. The PCA9685 is a 16 Channel 12 Bit PWM I2C-bus controlled Servo motor driver. The Driver can very easily connected to your Arduino, Raspberry Pi and easily programmed to control single or multiple servo motors and make your own RC plane, car, ship, quadrapod, hexapod or anything you want. To know how to control a servo motor using this driver, watch the video below. Servo Motor These are servo motors. They are widely used in the field of robotics due to its precise control of position, velocity and acceleration. These servo motors can be combined together to provide multiples degrees of freedom. In this project, we are using 3 servo motors in each leg which gives us 3 degrees of freedom. So we need a total of 9 servo motors.The Regulator Arduino and servo motor that I am using needs a voltage between 5v and 6v. Since I am using a lithium polymer battery of 11v, I will have to use regulator. The regulator I am using is LM 2596S 20083 adjustable voltage regulator. It is really easy to setup 1 yourself but If you don't know how to use, watch the below video tutorial in which I explained the usage of this regulator. In the current project, I set the operating voltage to 5.5 V. Servo motors take too much current from the power source. If we are using a single regulator, it may overheat when multiple servo motors are running. So it is always a better idea to use multiple regulators in parallel in these cases.The Body Parts and Designs The material for the leg is acrylic sheet and the body is made of foam board. I designed these body parts in coral draw and got it laser cut from a nearby workshop. I will upload the layout here. Feel free to use it. These are the different parts. For the base of the tripod, you can use any lightweight material. Here I had some foam board so I used it. You have to cut 3 holes to mount the base servo motors. Clean the edges otherwise it be difficult for the servo motor to fit in.Let's Start Logon to for more fun projects and tutorials. It would be better if you can watch the video just below for clearly understanding what is going on. Watch this: Start by fixing all the servo motors in the body parts as shown in the video. Just fix it with some nuts and bolts and tighten all the connection so that it wont fall off or shake while the tripod is moving. Use some strong glue and join 2 servo motors as shown in the video. Combining two servo motors like this will give us 2 degrees of freedom. Connect the elbow part with the other joints. Connecting the elbow part with these 2 servo motor will give each arm of the tripod, a total of 3 degrees of freedom, which is more than enough to make it walk the way we want. Now fix all the legs to the foam base as shown in the video. Fix the two regulators somewhere it wont be disturbed. I fixed these at the bottom of the foam board base. Attach it with screws and use hot glue gun so that all the connections will be secure. Fix the servo driver module in another piece of foam board and secure it with screws. Use double sided tape to fix the battery and foam board piece with the base of our tripod. At last, fix our Arduino on the top foam board near the servo driver board. All the connections in this project are very easy. Just connect the battery to our regulator input and servo driver and Arduino to the regulator output. I will share the schematics and circuit diagrams soon. This is our tripod after all the connections. It is a little bit messy. So lets clean it up. Use some zip-ties to organize all the wires. Make sure that Arduino and servo driver are near to each other. Otherwise, the connections will be little bit hard. Now its time to program our tripod. Connect the Arduino to our PC. Just go to device manager and check the device port. Now open Arduino IDE. Below is a sample sketch just to check all the motors functionality. #include <Wire.h> #include <Adafruit_PWMServoDriver.h> Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver(); #define MIN_PULSE_WIDTH 650 #define MAX_PULSE_WIDTH 2350 #define DEFAULT_PULSE_WIDTH 1500 #define FREQUENCY 50 uint8_t servonum = 0; void setup() { Serial.begin(9600); Serial.println("16 channel Servo test!"); pwm.begin(); pwm.setPWMFreq(FREQUENCY); pwm.setPWM(0, 0, pulseWidth(100)); delay(500); pwm.setPWM(4, 0, pulseWidth(60)); delay(500); pwm.setPWM(12, 0, pulseWidth(60)); delay(500); pwm.setPWM(1, 0, pulseWidth(10)); delay(500); pwm.setPWM(5, 0, pulseWidth(170)); delay(500); pwm.setPWM(13, 0, pulseWidth(170)); delay(500); pwm.setPWM(2, 0, pulseWidth(120)); delay(500); pwm.setPWM(6,0 , pulseWidth(85)); delay(500); pwm.setPWM(14, 0, pulseWidth(110)); delay(500); } int pulseWidth(int angle) { int pulse_wide, analog_value; pulse_wide = map(angle, 0, 180, MIN_PULSE_WIDTH, MAX_PULSE_WIDTH); analog_value = int(float(pulse_wide) / 1000000 * FREQUENCY * 4096); Serial.println(analog_value); return analog_value; } void loop() { pwm.setPWM(0, 0, pulseWidth(150)); pwm.setPWM(13, 0, pulseWidth(180)); delay(500); pwm.setPWM(14, 0, pulseWidth(180)); delay(500); pwm.setPWM(1, 0, pulseWidth(0)); delay(500); pwm.setPWM(5, 0, pulseWidth(180)); delay(500); pwm.setPWM(2, 0, pulseWidth(120)); delay(50); delay(50); pwm.setPWM(14, 0, pulseWidth(110)); delay(1000); pwm.setPWM(0, 0, pulseWidth(50)); delay(1000); pwm.setPWM(4, 0, pulseWidth(120)); delay(1000); pwm.setPWM(1, 0, pulseWidth(20)); delay(500); pwm.setPWM(5, 0, pulseWidth(160)); delay(500); pwm.setPWM(0, 0, pulseWidth(100)); delay(500); pwm.setPWM(4,0, pulseWidth(60)); delay(500); } This is just a simple code just to move the servo motors. Paste the code to Arduino IDE go to boards, select correct COM port and board and upload the sketch. If the code is uploaded successfully, with in 15 seconds, all the servo motor will start to move. Congrats, you have done it! Now I am working on writing the codes for the tripod to walk, run, duck and some other complex movements and integrate various sensors into this tripod to fully automate it. Once I complete that, I will upload it in my blog. Logon to for more fun projects and tutorials. Stay tuned for more.
https://create.arduino.cc/projecthub/greenterminal/build-a-tripod-using-arduino-and-servo-motors-52336c
CC-MAIN-2022-05
refinedweb
1,250
68.06
Hi, We have added 2019DCs to a couple of our sites and noticed several intermittent network share access issues. In other sites that are still on the 2012R2, we have no issues but the two sites with a mix of 2012R2 / 2019DC, we have daily complaints about network share access. We use DFS/Namespace on all DCs and all file servers, not sure what causes this issue. Not sure if anyone notices similar issues. 10 Replies Maybe still running FRS instead of migrating to DSFR? Just a shot in the dark.... check your DFS roots, assuming you are using a a domain DFS you *must* make sure that all the roots are replicated to *all* DCs (they are set up as "Namespace Servers"). to check browse to \\dc1FQDN\ & check the shares listed, you should see the DFS root level shares listed, then drill down & make sure that you've got the appropriate sub folder links. Repeat for all DCs I imagine you'll find that some (or more) are missing on one (or more) DCs Fire up the DFS management console go to each DFS namespace & make sure you have *all* DCs listed as Namespace servers. (my guess is that you are missing the new DCs) bit of background for those that haven't played with Domain DFS Namespaces, the unc path for the shares is in the form \\AD_Domain_FQDN\namespace\sharename so any connection starts at \\AD_Domain_FQDN\ - which will *always* resolve to a domain controller, which one you hit will depend on the normal networking bits. These shares are automatically created & replicated around via the SYSVOL replication but only to the DCs which are named in the "Namespace Servers" (this is also why you shouldn't have files at the Namespace level - they will sit in Sysvol & be replicated to all DCs c: drives..) So if all the DCs are not in as "Namespace Servers" for a domain DFS you will get intermittent problems as you client connects to different DCs.
https://community.spiceworks.com/topic/2309608-added-2019-dc-has-caused-intermittent-network-share-access-issues?page=1
CC-MAIN-2021-21
refinedweb
332
69.35
Question A company’s 6 percent coupon rate, semiannual payment, $1,000 par value bond that matures in 30 years sells at a price of $515.16. The company’s marginal tax rate is 40 percent. What is the firm’s component cost of debt for purposes of calculating the WACC? Answer to relevant QuestionsYou are given the following information about a firm:The firm expects to retain $160,000 in earnings this year to invest in investment projects. If the firm’s capital budget is expected to equal $180,000, what required ...Roberson Fashion’s capital structure consists of 30 percent debt and 70 percent common equity. Roberson is considering raising new capital to finance its expansion plans. The company’s investment banker has compiled the ...The earnings, dividends, and stock price of Talukdar Technologies Inc. are expected to grow at 7 percent per year in the future. Talukdar’s common stock sells for $23 per share, its last dividend was $2.00, and the company ...Cash flows rather than accounting profits are listed in Table 13-1. What is the basis for this emphasis on cash flows as opposed to net income?Explain the decision rules—that is, under what conditions a project is acceptable—for each of the following capital budgeting methods:a. Net present value (NPV)b. Internal rate of return (IRR)c. Modified internal rate of ... Post your question
http://www.solutioninn.com/a-companys-6-percent-coupon-rate-semiannual-payment-1000-par
CC-MAIN-2017-13
refinedweb
234
60.01
The game of Minesweeper requires a player to determine the location of mines hidden randomly throughout a two dimensional grid i.e. minefield. Each of the grid locations is initially covered by a tile. The player may open a tile, flag a tile as a mine location, or set a tile as a question mark. Clues describing the number of adjacent mines to a tile are displayed when the player opens a tile. A player wins by opening all of the non-mine tiles (see Figure 1). A player loses when they open a tile containing a mine (see Figure 2) If you are unfamiliar with the game here are some references that will get you started: This assignment was created by Jeff Lehman in the Mathematics and Computer Science Department at Huntington College and posted on the Nifty Assignments website. It has been modified for use for CS 312. Mines Array A two-dimensional array can be used to represent the mines and clue values for a minesweeper game. Integer values are used to represent mines and clue values. Figure 3 shows a sample mines array for Figure 1. The array has nine mines (shown in red). Clue values are included for all mines. Figure 3: Mines Array for Figure 1 Tiles Array A second two-dimensional array can be used to represent the status of the tiles for a minesweeper game. Integer values are used to represent a tile that is opened, closed, flagged as a mine, or set as question mark. Figure 4 shows a sample tiles array for Figure 3. All tiles in the first row (row 0) have been set to open (value 0). All tiles in the second row (row 1) have been set as a question mark (value 2). All tiles in the third row (row 2) have been flagged as mines (value 3). All tiles in the remaining rows (rows 3 to 8) have been set to closed (value 1). Figure 4: Tiles Array for Figure 1 Board The mines and tiles arrays are used to determine what the player will see as they play the minesweeper game. The characters 'X', ' ', '?', 'F', and '*' can be used to create a rough text representation of minesweeper game board. When the game is "won" all mines should be seen as 'F'. When the game is lost the mine that loses the game is shown as '!', the remaining mines are shown as '*', and any mines that were incorrectly flagged will be displayed as '-'. Figure 5 shows the values the player would see for the mines and tiles arrays from Figure 3 and Figure 4 if the game was lost. At this point the status of the game is "lose". The mine that lost the game was opened at position (0, 8). Figure 5: Board values for Figure 1 The following table summarizes the board values for the tiles array, mines array, and current game status. For this assignment you will implement a Minesweeper class to support a complete working version of a Minesweeper game. While you are not creating a GUI interface in this assignment, you will be creating attributes and methods that will support a GUI interface. You may add additional attributes or methods if necessary. All attributes must be private. Methods maybe public or private. You must use recursion for the method markTile() to open all blanks and clues when a tile is opened that covers a blank. A sample test program TestMineSweeper.java is provided. Modify the test program to demonstrate each of the methods in your MineSweeper class. For this assignment you will be working with a partner. Both of you must read the paper on Pair Programming before you get started. Your programs must have a header of the following form: /* File: TestMineSweeper.java Description: Student Name: Student UT EID: Partner's Name: Partner's UT EID: Course Name: CS 312 Unique Numbers: TestMineSweeper.java file. The proctors should receive your work by 11 PM on Friday, 03 May 2013. There will be substantial penalties if you do not adhere to the guidelines. GUI Test After testing each of your methods using text output, you must test your Minesweeper class using the GUI test program. The .class and image files for the GUI test program are provided. Step 1 - Copy each of the following supporting GUI class and image files into the same directory as the TestMineSweeper.class file Step 2 - Run the GUI test program using your minesweeper.class java minesweeperGUI Minesweeper Demo A minesweeper demo program is provided as an executable .jar file. (minesweeperDemo.jar) Step 1 - Copy the following jar file minesweeperDemo.jar Step 2 - Run the GUI demo Type the following from the command line java -jar minesweeperDemo.jar Outline of class minesweeper class minesweeper { // Attributes public int[][] mines; //mines and clue values public int[][] tiles; //tiles covering mines and clues private String status; //game status - play, win, lose //Constructors public minesweeper() // default constructor 9 by 9 board public minesweeper(int newRows, int newCols) // non-default constructor // Public Methods public String getStatus() //current game status - play, win, lose public int getRows() //number of game board rows public int getCols() //number of game board columns public int getMines(int r, int c) //mine array value at position r,c public int getTiles(int r, int c) //mine array value at position r,c public char getBoard(int r, int c) //board value for position r,c public void markTile(int r, int c, int t) //change tile status public String toStringMines() //mines array as String public String toStringTiles() //tiles array as String public String toStringBoard() //game board as String private void initGame(int newRows, int newCols) //set-up game private void resetTiles() //set all tiles closed private void placeMines() //place random mines private void calculateClues() //calculate clue values private boolean validIndex(int r, int c) //verify index private boolean gameWon() //determine if game is won } Outline of class TestMineSweeper public class TestMineSweeper { public static void main (String[] args) { // create new minesweeper instance 2 rows by 5 columns minesweeper game = new minesweeper(2, 5); // display mines System.out.println ( game.toStringMines() ); // display tiles System.out.println ( game.toStringTiles() ); // display board System.out.println ( game.toStringBoard() ); // mark tile at (0, 0) as Open game.markTile (0, 0, 0); // mark tile at (0, 1) as Question Mark game.markTile (0, 1, 2); // mark tile at (0, 0) as Mine game.markTile (0, 2, 3); // display tiles System.out.println ( game.toStringTiles() ); // display board System.out.println ( game.toStringBoard() ); } } Sample Output #1 for TestMineSweeper.java 92100 29100 11111 11111 XXXXX XXXXX 02311 11111 *?FXX XXXXX Sample Output #2 for TestMineSweeper.java 29100 92100 11111 11111 XXXXX XXXXX 02311 11111 2?FXX XXXXX
http://www.cs.utexas.edu/~mitra/csSpring2013/cs312/assgn/assgn13/assgn13.html
CC-MAIN-2015-27
refinedweb
1,120
71.14
Closed Bug 280603 Opened 17 years ago Closed 17 years ago "New Updates Avail" popup in bottom right-hand corner pops up endlessly (random occurrence) Categories (Toolkit :: Application Update, defect) Tracking () People (Reporter: ben, Assigned: mconnor) References Details (Keywords: fixed-aviary1.0.1) Attachments (3 files, 2 obsolete files) User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8b) Gecko/20050122 Firefox/1.0+ Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8b) Gecko/20050122 Firefox/1.0+ Happened once so far. The "New Updates Available" (green jigsaw) box is popping up in an infinite loop. It is chewing a fair amount of CPU in the process. Clicking on the box/window just pops them up faster and increases CPU load. Four windows open, with 5-20 tabs each. Nothing I can do will get rid of this window. Reproducible: Didn't try Process Explorer notes firefox.exe is in state Wait:WrUserRequest, and context-switching 300-1000 times a second. MSVCRT.DLL is also performing a lot of cswitches, cycling between Wait:UserRequest and Ready. This is the only easy way to describe what is happening. Suspected of DoS'ing UMO. Issue to be determined. Severity: normal → critical Version: unspecified → Trunk update.mozilla.org is currently down, and based on network traffic I highly suspect it's because of this bug. We've effectively been under a DDoS attack since exactly midnight GMT on Feb 1. The following seems to be at fault: Note the use of getUTCDay (which is day of the week) instead of getUTCDate (which is day of the month) This means update checks aren't happening at all after the first week of the month is over, and can potentially behave REALLY weird during that first week of the month if the day of the month and the day of the week line up just right. Status: UNCONFIRMED → NEW Ever confirmed: true Flags: blocking-aviary1.1? Flags: blocking-aviary1.0.1? Version: Trunk → unspecified much thanks to mconnor for finding the chunk of code where this lived. Version: unspecified → Trunk Assignee: bugs → mconnor Status: NEW → ASSIGNED There may be more to this bug than just the date calculation... Why did Firefox think there was an update available when there wasn't? And why does it think there's one available when the server is unreachable (bug 280607)? Comment on attachment 173049 [details] [diff] [review] use getUTCDate correctly who knows, but r=ben@mozilla.org on the patch. I think asa is managing branch approvals. Attachment #173049 - Flags: review+ Attachment #173049 - Flags: approval-aviary1.0.1? *** Bug 280607 has been marked as a duplicate of this bug. *** (In reply to comment #6) > Why did Firefox think there was an update available when there wasn't? Or was there? The reporter mentioned it was the green jigsaw icon that was popping up... that's the extension updates, not the application update, right? Extensions and themes can have their own update URLs. OS: Windows 2000 → All Hardware: PC → All This is a little bit of a longshot, but I'll throw it out anyway: Could this be more fallout (in some way) related to the switch to namespaced expat? Flags: blocking-aviary1.1? Flags: blocking-aviary1.1+ Flags: blocking-aviary1.0.1? Flags: blocking-aviary1.0.1+ Comment on attachment 173049 [details] [diff] [review] use getUTCDate correctly a=asa. Attachment #173049 - Flags: approval-aviary1.0.1? → approval-aviary1.0.1+ 278274 is a dupe of this *** Bug 278274 has been marked as a duplicate of this bug. *** landed on 1.0.1 branch Status: ASSIGNED → RESOLVED Closed: 17 years ago Resolution: --- → FIXED The trunk still has this problem (Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050206 Firefox/1.0+) It would be nice if this patch would be checked in to the trunk as well. Requesting reopening. lxr shows that this is fixed on trunk. I searched bonsai for the checkin, but it's not there in the Seamonkey trunk. And even if the fix is checked in, it's not working: the bug still appears in yesterday's build. Using the CVS Log link at the top of LXR... *sigh* __And even if the fix is checked in, it's not working: the bug still appears in yesterday's build.__ Or does it work for you? This is not fixed in Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050211 Firefox/1.0+ for me and other testers. Reopening Status: RESOLVED → REOPENED Resolution: FIXED → --- ok, so we're not continously checking anymore, but we're still repeatedly notifying. Will investigate further. *** Bug 282411 has been marked as a duplicate of this bug. *** test, set machine's clock to right before the problem time period (midnight GMT of the 1st day of any month, so that would be 4pm PST). eg, 01-Feb-2005 at 15:55. the launch Firefox and see what happens. please correct me if this test case isn't the right way to verify this bug. PST would be the day before, 4pm on Jan 31. Whiteboard: need patch I have not been able to reproduce this. If anyone has detailed steps to increase my odds of seeing this, please post them here. Setting the time isn't working for me and the few extensions that needed updates did not seem to be checking for them automatically. Sample extension to test with. Steps to reproduce: 0. Make a new profile, just in case 1. Install this extension 2. Restart Firefox (to finish the install) 3. Go to about:config and set update.interval to 500 4. Wait half a second for the updates available notification (This bug should manifest - the notification will show up again right after going away) 5. Reset update.interval Note that day of month, etc. do not seem to matter - tested 2005-02-17 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050217 Firefox/1.0+ (This extension has a custom update.rdf with a chrome:// URI, so no servers to worry about) (In reply to comment #27) > Steps to reproduce: > 0. Make a new profile, just in case > 1. Install this extension > 2. Restart Firefox (to finish the install) > 3. Go to about:config and set update.interval to 500 Thinking intuitively, I agree that setting update.interval to such a small value (from 3600000 to 500) will cause update notifications to fire very rapidly. But why is changing the user pref to what I would assume is an obviously insane value the proper way to reproduce what's supposed to be a legitimate bug? Is it the only way to reproduce it? If so, I'd hesitate to call the problem legitimate. In conflict with this line of thinking, though, is that the m.o sysadmin group reported seeing an extraordinary increase in the amount of traffic to UMO during the first days of the month (beginning at approximately midnight UTC 2/1). > Note that day of month, etc. do not seem to matter - tested 2005-02-17 > Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050217 > Firefox/1.0+ Here are some questions for you: * When you reset your update.interval to the default value, do you see this problem? * When you set your system's date to the first day of the month do you see it? * Does this problem go away when you change your system's date to a later day in the month? We need more data and feedback about system configurations that hold this bug and what effect it causes, both on the client side and on the server side. And, really, we need the data soon! We're near the end of the line for Firefox 1.0.1 fixes and this one's big on our radar. The original reporter, beryan, filed this bug at 18:55 1/31 (which was past 2/1 UTC). To beryan: * What was your update.interval set to at that time? * What was app.update.interval set to? * What are they set to now? * Did you have any extensions installed? * If so, did any of those legitimately have new versions available then? * What was/is your app.version set to in about:config? We haven't been able to reproduce the endless popup bug locally. What setting triggers the popup slider to appear for users? Also, even with mconnor's patch we see a number of accesses to UMO and we aren't certain that his patch, while reducing the number of accesses to UMO, cuts those accesses down to an accessible load level for us. There are aviary1.0.1 builds available right now. These can be found in: We'd appreciate it if beryan, Anton, Nickolay, and Mook tested those builds and let us know if they show the bug for them or not (without changing the update intervals from their default). Even if you've tested against trunk builds, it helps us to know the problem exists on the aviary1.0.1 branch for you still. Thanks. Sorry, better steps to reproduce (to force an update check): 1. Set extensions.update.enabled to false (default true) 2. Set extensions.update.enabled to true Ethereal reports one hit to the server (per extension/theme) only. I.e., the problem (the notifier showing up immediately after going away) does not depend on update requests to the server. So something is wrong independently of checking too often. Interestingly, I can only reproduce this on the trunk - the 1.0.1 branch does not have the problem with the notifier. So if everyone else agrees on this point, at least it won't need to hold 1.0.1 back. Occurs on: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050217 Firefox/1.0+ Does not occur on: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20050217 Firefox/1.0 (In reply to comment #29) > Ethereal reports one hit to the server (per extension/theme) only. I.e., the > problem (the notifier showing up immediately after going away) does not depend > on update requests to the server. So something is wrong independently of > checking too often. Thanks for providing this data point, Mook. Could you try reproducing this bug using the build at: This build is from before mconnor's patch was committed. Specifically I'm interested in hearing if your ethereal trace shows more than just the one access to the UMO service. No matter what I do I can't get it to access the update site more than once (per reset of *.update.enabled). I had changed it locally to check instead; easier to filter. I do see one access each time I set/reset *.update.enabled prefs. That's with the old build; and yes I did try resetting the clock to Feb 1 23:xx PST. It seems to be blocked by *.update.interval (independent of update.interval, which seems to control how often the decision to check or not check is made). Also, the bug (as described in the summary, and as I've been seeing it) does not occur in 20050203-1.0.1branch either. (For reading the code - wouldn't the old code just force the app to check at the first week of the month, but no allow more checks than normal, anyway? I.e., the second time it checks would always be within the first seven days of the month. But then again, I know nothing :p) Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20050203 Firefox/1.0 Okay, this is a little scary. The fix in the bug, while correct, will mean that we'll start seeing MORE traffic to UMO, however the initial spike at the start of the month should go away . If you've been using the app, your last updated date will be set to something in the first week of the previous month. So with update.interval (the interval at which Firefox decides whether to check for updates) set to one hour, within an hour of the new month starting, you will hit the updates URL/URLs because Firefox thinks its been three weeks. In reality, it could have been only an hour ago, if you started using Firefox in the last week of the month. So as we tick across timezones into the new month, we compress 24 hours of potential traffic into an hour, since while in theory we'd be staggered by the 24 hour interval, it comes up for everyone at the same time (the only thing saving us here is that not everyone is online at the same time). Then, fortunately, things start to decline until after the first week, where most people have an established last updated date that's late enough in the week that they won't update again that month, barring a late Saturday session, for example. There's also the extensions factor, since due to this bug, we'll probably only update once a month, because of the one week interval for extension update checking. However, this is N requests per client, where N is the number of extensions/themes installed. So in addition to the theoretical time bomb of millions of users hitting UMO for app update requests, we also have N requests on top of that for users with extensions. Taking an estimate of 3 million users using an average of 5 extensions/themes per client, that's another 15 million requests that'll hit the server in that week, and probably most/all in the first day. That first spike will get saved as their last update time, so the next week we'll get hammered by an echo spike. But none of this explains how the original reporter beryan got his slider flood. When I experienced this (1.0 final, now using trunk builds), I had lots of tabs open at the time, so just ignored it for a while. But whilst that notification was going off, I couldn't change panels within options. See bug 278016 for UMO being able to receive multiple items in one call See bug 278014 for Firefox sending a single request instead of multple addon checks. Please note that this isn't the same as application update checking. Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050212 Firefox/1.0+ (MOOX M2) Windows XP I am able to reproduce this everytime without additional settings on one profile (but profile is partialy damaged). Here it is: when I start profile, I get green arrow for updates. If I surf, but not update, after certain time I get sliding message, as described in this bug. After a time, this profile became more problematic - now when I start it with firefox.exe -p, Firefox is locked, so I must close it and start Firefox normaly. It is probably related to the bug. I have done more tests on this profile, and I see that this is in connection with some of the extensions. First, I couldn't start unlock Firefox in safe mode. Then, when I disabled all extensions, Firefox always starts locked up. I think I had 4 extension, but all I can remember is this: Undo Close tab Text link Google image (the name could be a bit different - it allows to view images directly by clicking on thumbs) One more datum: when I click on arrow for updates, it claims that there are updates for Undo Close Tab, but it is impossible to update. Hope some of these explanation can help to find the possible cause of the error on reporter's computer. (In reply to comment #36) > After a time, this profile became more problematic - now when I start it with > firefox.exe -p, Firefox is locked, so I must close it and start Firefox normaly. > It is probably related to the bug. That's true: Once the popup starts sliding in over and over again, you can't close firefox normally. The window will disappear, but the process will remain running. There's no way to shut down firefox except for killing it. *** Bug 282773 has been marked as a duplicate of this bug. *** My results are the same as Mook's. The bug as I see it happens with trunk builds both before and after mconnor's checkin: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050128 Firefox/1.0+ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050218 Firefox/1.0+ But not with 1.0 branch builds: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041107 Firefox/1.0 Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20050218 Firefox/1.0 (from) (Although I did have some issues with that popup, those may be specific to my system) I was trying to reproduce by toggling extensions.update.autoUpdateEnabled (and later just extensions.update.enabled) on a clean profile with the testcase extension installed. This bug is reproducible, no matter what date is set. I haven't yet tried to reproduce this by setting date, so I can't verify that the original bug was fixed. (I'm working off the summer here, not the download spike to UMO) Fallout from bug 267089 nsIAlertsService was changed to use nsIObserver, which means that the nsUpdateObserver was getting alertfinished / alertclickcallback; but it assumed that it was only observing the update stuff, and proceeded to show the alert again. Attachment #174781 - Flags: review?(mconnor) (In reply to comment #32) To clarify from mconnor here, the app looks to see if it needs to update again or not once an hour. During the first week of the month, it was using the day of the week instead of the day of the month as "now". The days of the week in this function are numbered from 0 to 6 with 0 being Sunday. February 1st fell on a Tuesday. The value for Tuesday is 2. So when it would do the date check, it would grab the last time it did a date check "Feb 1 at midnight" and compare it to it's fake version of "now" using the day of the week, so it thinks "now" is "Feb 2 at 1 AM", and says "oh, more than a day has passed since the last update" and it does another one. So as long as that person was online, and as long as the day of the week value was more than the day of the month value, they were hitting us once an hour. As for the capacity of the UMO server, don't worry about March. We have more than enough capacity in place now to handle a spike four times the size of the one that hit us in February. I'm also suspecting that this bug, as reported, is actually a separate issue, and the timing of it being filed and the parity of symptoms between the client behavior and server behavior caused us to errantly hijack this bug for the day-of-the-week issue when it probably wasn't related. Comment on attachment 174781 [details] [diff] [review] Possible patch woo, I suck! thanks for cleaning up after my most excellent reviewage. Attachment #174781 - Flags: review?(mconnor) → review+ Is this patch something that would fix a problem that occurs on the aviary branch, or just on the trunk? Is it something we want to consider for Firefox 1.0.1? Whiteboard: need patch OK, I see now. Bug 267089 landed on the trunk, but didn't update the one implementation of nsIAlertListener in JS -- but the interface change was such that the JS code still worked, but called the observe method instead of the methods that were implementing nsIAlertListener, and the observe method was not set up to handle this (since it had no default case in the switch, which it probably should have -- with a dump and return -- like many observers have assertions in C++ implementations). So this patch is not relevant to the aviary 1.0(.1) branches. Comment on attachment 174781 [details] [diff] [review] Possible patch As I said in my previous comment, it would probably be good if the switch had a default case that does whatever the JS equivalent of an assertion is (probably dump and return or throw). Being a little more defensive in methods like this is a good thing (although in C++ we have the ability to do it without any runtime cost in non-DEBUG builds). This is why assertions are good and we try to write a lot of them to document and enforce expectations. (That said, if you write such a default case, you need to ensure that there aren't any other topics that *are* expected.) Comment on attachment 174781 [details] [diff] [review] Possible patch Asking for SR (If this gets SR+, please check in for me; I don't have CVS access) dbaron: This was never landed on the branch, so it's not applicable. The blocking+ is for the download spike problem (which is independent). Comment on attachment 174781 [details] [diff] [review] Possible patch SR isn't needed for toolkit. I'll land this with dbaron's suggestion. *** Bug 283179 has been marked as a duplicate of this bug. *** Attachment #173049 - Attachment is obsolete: true Attachment #174781 - Attachment is obsolete: true previous patch includes a fix for bug 282752, somewhat related and replacing the initial patch (attachment 173049 [details] [diff] [review]) with a much faster call. Landed only on trunk, the initial patch will do for the 1.0.1 branch. Status: REOPENED → RESOLVED Closed: 17 years ago → 17 years ago Resolution: --- → FIXED Product: Firefox → Toolkit
https://bugzilla.mozilla.org/show_bug.cgi?id=280603
CC-MAIN-2021-49
refinedweb
3,711
75.1
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. select customer and get specific fields value I'm trying to get 'title' automatically when customer will select. can anyone help me? class res_partner(osv.osv): _name = 'project.task' _inherit = 'project.task' _columns = { 'partner_id': fields.many2one('res.partner', string='Employees', groups='base.group_user'), 'title':fields.char('Tte'), } def onchange_partner_id(self, cr, uid, ids, partner_id, context=None): if partner_id: partner_id = self.pool.get('res.partner').browse(cr, uid, title, context=context) return {'value': {'title': partner_id.title,}} return {'value':{}} xml:: <field name="partner_id" on_change="onchange_partner_id(title,partner_id)"/> <field name="title" on_change="onchange_partner_id(title,partner_id)"/> nahain, First, You don't need to pass 'title' as an argument in on_change of partner_id at xml field defination... then, under onchange_partner_id, u are having partner_id as an argument, but while browsing you are using 'title', which is not defined anywhere in function defination... so please change title to partner_id in browse, as: partner_id = self.pool.get('res.partner').browse(cr, uid, partner_id, context=context) and in xml as: <field name="partner_id" on_change="onchange_partner_id(partner_id)"/> and no need of any onchange function in title field... Hope it help! Hi, Pawan. Thank you for your reply, I tried with change as you suggest me but still not working, no error and not getting data in title field. Hi, Pawan. I change "partner_id" to "customer_project" (just changed the id) and customer_project= self.pool.get('res.partner').browse(cr, uid, customer_project, context=context), and in xml as: now its Working!! About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/select-customer-and-get-specific-fields-value-99021
CC-MAIN-2017-22
refinedweb
295
50.84
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Jeff Knupp wrote an ebook about writing pythonic code. The subtitle of his talk is towards comprehensible and maintainable code. “Idiomatic python” doesn’t mean “idiotic snake”. It means “pythonic code”. Code written in the way the Python community has agreed it should be written. Who decided this? Well, all the python developers through the code they write, share and criticize. The patterns you see there. Who really decides is sometimes the BDFL (Guido) or a PEP. Why would you? Three reasons: Readability. This helps people read your code. You keep the “cognitive burden” low. If I have to think during reading your code, reading your code is harder. I don’t want to remember things if it isn’t necessary. “Cognitive burden” is the best measure of readability. Obligatory Knuth quote, paraphrased: write code to explain to a human what we want the computer to do, don’t write just for the computer. Maintainability. Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. Correctness. If you’re the only one that can read your code, correctness is irrelevant! Idiomatic code doesn’t always do the right thing, It isn’t faultless. But it makes is possible for someone else to read your code and actually spot the mistake. A bonus reason: “people will stop laughing at your code”. Do it even when you write small scripts. “I’m not a programmer, I just write scripts” sounds the same as “I’m not a thief, I just steal small things”. Do it even when you’re coming from Java and want to use bad, java-inspired, names. If you write python as java, other python programmers won’t be able to maintain your code. Part of writing idiomatic code means staying up to date with changes in the code itself. You can do import statistics in python 3.4, giving you statistics.mean(some_list)! Likewise with list comprehension when it appeared. It makes your code more readable, but you have to know you can use it. Same with for index, value in enumerate(values). Many people don’t know enumerate() and increment some index value… “I’m paid to write code, not to read it” is faulty. You are the one doing the most reading of the code you yourself wrote. How many times did you look at code you wrote last month without knowing why you did something? The point in all this: python is getting more popular every day. Great. But everyone learning to write pythonic code is essential to the language’s success and their and your own success. Think about the quality of the patches for your open source projects! Think about the errors that can be):
https://reinout.vanrees.org/weblog/2014/05/09/idiomatic-python.html
CC-MAIN-2022-21
refinedweb
480
76.72
+ 1 comment note that if we swap any two of 2nd,3rd,4th stacks then the minimal distance to the solution remains unchanged so WLOG after applying a move we can sort the 2nd,3rd,4th stacks by the size of their top disk. by doing so, we have chosen a unique representative for each class of equivalent nodes. this reduces number of nodes to about 1/6 of the naive BFS from collections import deque def legal_moves(x): for i in range(len(x)): if x[i]: for j in range(len(x)): if not x[j] or x[i][-1] < x[j][-1]: yield (i, j) def is_goal(x): return all([len(x[i]) == 0 for i in range(1, len(x))]) def bfs(x): def tuplify(z): return tuple(tuple(t) for t in z) def do_move(g, m): y = [list(t) for t in g] y[m[1]].append(y[m[0]].pop()) # WLOG sort 2nd-4th stacks by order of largest disk y[1:4] = sorted(y[1:4], key=lambda t: t[-1] if t else 0) return tuplify(y) visited = set() start = (tuplify(x), 0) q = deque([start]) visited.add(start) while q: node, depth = q.popleft() if is_goal(node): return depth for move in legal_moves(node): child = do_move(node, move) if child not in visited: visited.add(child) q.append((child, depth+1)) # load the representation from stdin N = int(raw_input()) A = [[] for i in range(4)] R = [int(t) for t in raw_input().split()] for i in range(N): A[R[i]-1] = [(i+1)] + A[R[i]-1] print bfs(A) Editorial in Hackerrank should contain a better explaination. There is not even a single comment line in the editorial code. Just a few comment lines, if added to the code can bring a huge impact for understanding solution. + 2 comments algorithm used in editorial section is not very clear and code is really unreadable. Can somebody please explain the algorithm or any different approach they adopted? + 1 comment Hardest medium problem in hackerrank! lol Optimization: BFS from top to bottom and from bottom to top, one level each. This will cut the tree in a half. Concerning the editorial: there actually are at most 6 new states that can be generated by any parent state - disk configuration. The largest of the four top disks (if applicable) cannot move, and the others have 1,2 (also if applicable) and 3 possible moves , disregarding pruning. Sort 33 Discussions, By: Please Login in order to post a comment
https://www.hackerrank.com/challenges/gena/forum
CC-MAIN-2022-27
refinedweb
426
59.94
Although I have passed Codecademy loop lesson, but I still don’t understand how to use loops correctly. In other words, I really have a big problem in using loops . For example: def over_nine_thousand(lst): all_sum = 0 for i in lst: all_sum += i return all_sum print(over_nine_thousand([8000, 900, 120, 5000])) # prints 8000 instead of 14020 In the code above, I want to sum all list numbers with function called over_nine_thousand that takes in lst as a parameter but print(over_nine_thousand([8000, 900, 120, 5000])) only prints the first element 8000. Why is this happening ?? please help me understand how loops actually work and how to use range() function because whenever I write loops, they don’t work correctly.
https://discuss.codecademy.com/t/loops-questions/550483
CC-MAIN-2022-33
refinedweb
119
57.2
This article will help you get your project to use Apache ANT and Apache Ivy as your build/dependency manager tool. Just to clarify this is not an Apache ANT or Ivy tutorial, here you will only find the information needed to setup this tools to work with the latest version of RichFaces. The first step is to have your enviornment ready. you will need an installation of ANT properly configured, as well as an installation of Ivy. Follow the instruction on each project site to configure these tools. Creating the structure of the project After having ANT and Ivy installed, you will need to create a new project. For this tutorial we will use the directory structure generated for a JSF project using Eclipse+Jboss Tools. The structure used is as follows. MyProject/ src/ WebContent/ META-INF/ pages/ resources/ templates/ WEB-INF/ lib/ faces-config.xml web.xml index.html Now, in order to use Ivy to handle all dependencies for us, we will need to create two files which tell Ivy how to retrieve the dependencies. ivy.xml File In our case we will create the ivy.xml file inside the project's root directory (that is under MyProject/). the ivy.xml file is the one that specifies the dependencies you need in order to build or execute your program (that includes testing of course). Create your ivy.xml file and write the content of the snippet below into it: <ivy-module <info organisation="org.apache" module="IvyTestProject" /> <dependencies> <dependency org="commons-lang" name="commons-lang" rev="2.0" conf="default"/> <dependency org="commons-cli" name="commons-cli" rev="1.0" conf="default" /> <dependency org="javax.servlet" name="servlet-api" rev="2.4" conf="default"/> <dependency org="javax.servlet" name="jstl" rev="1.2" conf="default"/> <dependency org="org.apache.myfaces.core" name="myfaces-api" rev="2.0.4" conf="default" /> <dependency org="org.richfaces.ui" name="richfaces-components-ui" rev="4.1.0.Final" conf="default" /> <dependency org="org.richfaces.core" name="richfaces-core-impl" rev="4.1.0.Final" conf="default" /> </dependencies> </ivy-module> Note that last two dependency definitions are richfaces-core and richfaces-ui dependencies, other dependencies are needed for a JSF application. In this case we are using MyFaces as the JSF implementation. Now it is time to tell ivy where to find those dependencies we just defined. For this, we must create a new file that can be placed wherever you want in your file system (or outside if you prefer so). The only rule here is that we must be able to reach that file from our build script. For this tutorial we are going to put this file inside the project's root directory (that is MyProject/ ) as that way we wont need to define the location in build.xml. ivysettings.xml File Create a new file and name it ivysettings.xml. Here we will specify the resolvers that will retreive our dependencies. In case you need more information on how to do so, it's recomended to take a look at the oficial Ivy documentation. For our project we will need to define a chain resolver which will contain a series of maven based repositories from where it will download the files. <ivysettings> <settings defaultResolver="chain-example" /> <resolvers> <chain name="chain-example"> <ibiblio name="ibiblio" m2compatible="true" /> <ibiblio name="Jboss" m2compatible="true" root="" pattern="[organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]" /> <ibiblio name="JSF" m2compatible="true" root="" /> <ibiblio name="java.net" m2compatible="true" root="" /> </chain> </resolvers> <modules> <module organisation="javax.servlet" name="jstl" resolver="ibiblio" /> </modules> </ivysettings> In the second line we have specified the resolver that will be the default one (in this case chain-example); chain-example is a resolver which specifies a series of resolvers to look for your dependencies. In our case we have defined all our resolvers to be ibiblio based with maven 2 compatibility mode (m2compatible parameter). The first ibiblio line is the default ibiblio resolver for ivy which points to the official maven 2 repository. The second one is the one that we are more interested in. It is the Jboss public repository which contains the RichFaces arctifacts as well as the BOM. Eventhough Ivy was not developed with Maven in mind, it integrates very well with maven repositories. Having this will tell Ivy to look for the RichFaces dependency in the all of those resolvers. After ivy finds the RichFaces arctifacfs in the Jboss repository, it will also look for the BOM which defines dependencies for RichFaces itself needed at runtime, and it will retrieve them as well. Integrating Ivy with ANT Everything is ready to integrate our ivy configuration with our build script. The first thing to do is to set ANT to use ivy as a dependency resolver by defining the ivy namespace in your project's build script, then you can create a task that will execute ivy's retrieve command. Take a look at the following snippet and use it as a base for you build.xml file. <project xmlns: ... <property name="ivy.lib.dir" value="WebContent/WEB-INF/lib/"/> <target name="retrieve"> <ivy:retrieve /> </target> <target name="run" depends="retrieve"> ... <!-- do your stuff here --> ... </target> ... </project> Notice three things. First, we defined the ivy namespace in the <project> tag. Second, we defined an Ivy property which will tell Ivy where to put the retreived dependencies (in this case inside /WebContent/WEB-INF/lib/ ). And third, we have created a retrieve task that executes the Ivy retrieve command. Now you can define your default task to be dependant of the retrieve task so everytime you build your application, ivy looks for your dependencies, and if they are not present, It will download them for you.
https://community.jboss.org/wiki/ConfigureYourRichFaces4ProjectToUseANTAndIVYForDependencyManagement
CC-MAIN-2015-48
refinedweb
958
56.96
Filter Effects for Windows and Windows Phone 8.1 Filter. The target was to create universal app for Windows and Windows Phone 8.1 by sharing as much code as possible between the two. This was achieved by sharing all the Lumia Imaging SDK specific code, and most of the rest of the application code. The user interface (UI) layouts are defined in separate XAML files. UI specific code is contained in code-behind (in C# files). These files are also shared, and only small parts of the code in them vary depending on the platform. Compatibility - Compatible with Windows 8.1 and Windows Phone 8.1. - Tested with Nokia Lumia 630, Nokia Lumia 2520, and Windows 8.1. - Developed with Visual Studio 2013 Express. - Compiling the project requires the Lumia Imaging SDK. Design The biggest differences in the UI design between Filter Effects for Windows and Windows Phone can be noticed on the preview page, where the larger screen real-estate is used on Windows. In Windows Phone, the selection of the filter is performed by switching between the pivot items, each representing a different filter. In Windows, the selector is implemented using a single ListViewcontrol that is populated with preview images (see FilterPreviewViewModel class). In addition, there is enough room to display the filter dependent controls used for adjusting the filter properties below the large preview image. In Windows Phone, the controls are shown on top of the preview image as an overlay and hidden once the manipulation of the controls ends. Figure 1. Controls in Windows Figure 2. Controls in Windows Phone Architecture overview The architecture of the Windows version is essentially the same as in the Windows Phone version. Implementation-wise, the Lumia Imaging SDK-specific code has remained the same (although there are some improvements that will be applied to Windows Phone version). The implementation of the UI has been rewritten due to different APIs between Windows and Windows Phone. Retrieving supported camera resolutions The } Capturing photos The simplest way to capture a photo and store it to a memory stream is the following. Note: If you are using an existing stream that may have already been used, you will need to reset it. In the case of this example, the stream used is the MemoryStream type owned by a singleton class, DataContext: Scaling image stream to preview resolution, enhancing the user experience. Scaling the image data from one memory stream to another requires many lines of code, but is still straightforward and does not require much time, even when scaling larger pictures: Create a bitmap containing the full resolution image. In the following snippet, the image data is in the originalStream, which is of type MemoryStream. Construct a JPEG encoder with the newly created InMemoryRandomAccessStream as target. Copy the full resolution image data into a byte array. Set the scaling properties (scaleWidth and scaleHeight define the desired size). Set the image data and the image format settings to the encoder. Let the encoder do its work and copy to the desired output MemoryStream (in this case scaledStream). Processing the image data // ... } // ... Saving the processed image The image saving implementation starts in the PreviewPage.SaveButton_Click method. Note that here we use the full resolution image. A file picker control is used to get the user input of the desired location. We use a predefined file name, but the user can change this when in the file picker UI. The use of the file picker is one of the few parts in the code where it differs between the platforms. On Windows Phone, using the file picker is slightly more complex than; } Here's what happens in and the following occurs. the FileManager and PreviewPage classes. Adding a new filter You can modify the existing filter or you can easily add a new one. For a new filter, just implement the abstract base class. The only method you need to implement is SetFilters. Tip: For an easy start, copy the source of any of the existing filters. To add the new filter to the collection, just add a new line to CreateComponents method of PreviewPage class. Creating a custom control to modify the filter properties Different filters have different properties. Therefore,. A concrete filter class, derived from AbstractFilter, has to know how to populate the controls. Creating controls in code-behind the CreateControl method that is important. Control = grid sets the Control property defined in the abstract base class and, since it is not null, the UI (defined in PreviewPage.xaml) will now display it. Using a custom UserControl Perhaps a more sophisticated way to create controls for the filters is by creating a custom UserControl. This approach is used with SurroundedFilter. The only downside is that you will have the implementation spread in more than one place. The user control in this case is implemented in the FilterEffects.Filters.FilterControls namespace and the class is named HdrControl. The implementation here. Modifying filter properties on the fly When you want to modify the filter properties so that the changes can be previewed instantaneously while maintaining smooth user experience, you are faced with two problems: - If the filter property value changes when the rendering process is ongoing, InvalidOperationException is thrown. - Rendering to a bitmap that is already being used for rendering may lead to unexpected results. One could think that catching the exception thrown in the first problem would suffice, but then the user might start wondering why the changes he wanted to make did not have an effect on the image. In addition to the poor user experience (UX), you would still have to deal with the second problem. To solve both problems, a simple state machine can be implemented. In AbstractFilter class we have defined three states and declared a property, State, to keep track of the current state. The transitions are as follows: - Wait to Apply until a request for processing is received. - Apply to Schedule when a new request is received while processing the previous request. - Schedule to Apply when the previous processing is complete and a pending request is taken to processing. - Apply to Wait when the previous processing is complete and no request is pending. The state is managed by two methods in AbstractFilter class: Apply, which is public, and Render, which is protected. As you can see,: Some of the error handling is omitted from the snippet above. Pay also attention to the part in the beginning of the method; namely this part.ize the property change regardless of the filter or the type of the property. Here, for example, is the code for when the user adjusts the Brightness property of the lomo filter in SixthGearFilter. Downloads This example application is hosted in GitHub, in a single project for Windows Phone and Windows, where you can check the latest activities, report issues, browse source, ask questions or even contribute yourself to the project.
https://msdn.microsoft.com/en-us/library/dn859584.aspx
CC-MAIN-2018-30
refinedweb
1,154
55.24
Recently I received an electronic copy of a new publication from Packt Publishing, one of the most active publishing companies in the area of (Oracle related) SOA technology. I was asked to review this book – and having enjoyed various earlier Packt titles (such as the recent OSB Cookbook and SOA Suite 11g Developer book ), I gladly accepted this invitation. The anthology format This book is special in that it was never intended to be a single book: it is composed from chapters that were published before, in 8 different earlier publications by Packt. That in itself is an interesting premise: a ‘compendium’ or ‘a book formed by drawing existing content from several related Packt titles. In other words, it is a mash-up of published Packt content – Professional Expertise Distilled in the true sense. Such a compendium of Packt’s content allows you to learn from each of the chapters’ unique styles and Packt does its best to compile the chapters without breaking the narrative flow for the reader.’ This idea might be valuable. Offer a medley of ‘samplings’ from various books, perhaps to bring together all tips in a certain area (say security or governance) or all content for specific roles or to help readers make a choice between several books he or she may be interested in. However, this book does not really achieve nor even seem to have the intention for that. The chapters are not showcases for the books they are taken from (there is no indication in the chapter to explain which book they are taken from and what more that book has to offer). The target audience for the book is not very clear either: many chapters are probably food for architects, but then some are more targeted at developers while others are so high level and introductory that perhaps IT managers or business folk new to the areas of SOA and integration are best served by them. The preface warns the reader that “the chapters in this compendium were originally written and intended as a part of various separate Packt titles, so you might find that the information included in this instance is more akin to that of a stand-alone chapter, rather than creating step-by-step, continuous flowing prose.” That is a fair warning. And since all authors are knowledgeable and write well, the chapters were quite similar in style. But of course, a book composed of chapters takes from various books cannot be read as a back-to-back-story but instead will be more like an essay-bundle or a reference guide. I feel that bundle of essays is the best way to describe the most logical way to process this book. Some chapters might have had their use as reference material too – the more technically detailed ones – but these have suffered the most from the progress of time and the evolution of technology since first they were written. Some of these books were released as long ago as 2007, and that is one of the problems I have with this book. While it contains a lot of valuable content, it also contains many sections that are simply outdated and not relevant anymore. Chapter 14 for example relies on tutorials and software that the reader should download from Oracle Technology Network (OTN); however, that content has been unavailable for several years. When this compendium was put together (December 2011), a simple check by the editor would have made that clear. Many other references in the later chapters are to SOA Suite 10g and JDeveloper 10g, releases now quite long in the past, as is OpenESB – an open source project that does no longer exist (and did not in December 2011, the time of compiling this book). The chapters on JBI (Java Business Integration) have lost most of their relevance – given the events around Sun Microsystems. Content Having said all that, you may wonder what is in the book? Let dive in a little. Chapters 1 and 2 have a lot of overlap. They both introduce many terms, acronyms, challenges and patterns from the world of integration at application, enterprise and inter-enterprise level. In an at times almost scientific manner – with lots of literature references – a fairly factual overview and explanation is given of many aspects of EAI and SOA. These chapters were probably used in their respective original books as the foundation to build the book on. As such, there is a bit of repetition between the two. They also do not get very practical but rather discuss concepts in conceptual (!), theoretical way. Chapter 1 talks about the Trivadis Integration Architecture Blueprint that reappears in chapter 3. Nowhere is explained what exactly that is and why it is mentioned (probably one of the consequences of taking these chapters from their original context). Chapter 3 discusses Base Technologies – a list of standards (primarily though not exclusively) from the world of Java/J(2)EE) and more detailed architectural concepts for example for transactions. It is a fairly short chapter, no more than a primer for readers new to integration architecture with very readable, one page introductions to for example OSGi, JCA and JBI. Chapter 4 is a deep dive compared to the previous chapters. It discusses XML as the vehicle for most integration implementations. As such it discusses tools for creating XML Schemas and tips for designing XML Schemas. Note: at this stage, Web Services have hardly been introduced; it remains a little unclear how the XML should be exchanged and where the XML Schema Designs exactly come in. The tips on namespace definitions and XSD design are very useful – and probably the most concrete and relevant ones in the entire compendium. I like this chapter in that sense. I am surprised that the term ‘canonical schema’ is not used (but that is probably related to the provenance of this chapter) and while XSLT is discussed, XQuery is strangely absent. Standards like JAXB and JAX-WS are mentioned but not introduced, which makes the discussion of the programmatic creation and processing of XML documents somewhat incomplete. Funny enough, this chapter does not mention JDeveloper or Oracle tools at all – while other chapters seem only to recognize Oracle’s technology. It does however contain this line on StAX: “The StAX project was started by BEA with support from Sun Microsystems, and the JSR 173 specification.” [sic] Chapter 5 is introduced with the objective to “Leverage the orchestration capability of Oracle BPEL Process Manager to enable standards.based business process integration that complements traditional EAI middleware.” It describes in great detail how BPEL can be used to execute a business process and orchestrate the interaction with two existing ERP systems (Siebel CRM and SAP) in this example through the interaction points exposed by Tibco and webMethods respectively. The exact technology and tools used in this chapter are quite outdated by now – and all the detailed instructions on how to to things can safely be ignored. However, the overall approach and the design of this integration solution and even by and large the role of BPEL in it is still applicable (even though of course BPEL is just one of many ways to approach it). Chapter 6 seems to be doing something very similar as chapter 5; this chapters does a step by step approach for “integrating PeopleSoft 8.9 CRM with Oracle Applications 11i using BPEL.” I do not feel like chapter 6 adds more than a repeat of chapter 5 and again the tools used in this chapter are no longer in use. Chapters 7 and 8 are well written, probably the best and most clear introduction to JBI I have ever seen and read. Unfortunately, the relevance of JBI especially in association with NetBeans and GlassFish & OpenESB is a little uncertain at this point, to put it mildly. Unfortunately, OpenESB does not longer exist, the JBI support in GlassFish is dead and latest releases of NetBeans do not support JBI either. I fear JBI is on its way out and these chapters are probably not worth the reader’s time anymore. Chapter 9 is titled “SOA and Web Services Approach for Integration” is a bit back to chapters 1 and 2: introduction of IT challenges, reiteration of integration patterns, before it comes to stating the case for Web Services and listing a series of “application patterns”. The chapter finishes off with a discussion on the design of interoperable web services – including concrete WSDL snippets and XSD fragments as well as the concrete implementation of WebServices and Clients using both Java/JEE and C#/.NET. This part of the chapter fits in well with chapter 4, and is among the most concrete (and still relevant) sections of the book. Chapter 10 has again some overlap – with chapter 9. It is titled “Service- and Process-Oriented Approach to Integration Using Web Services”. It introduces the challenges in integration and explains why EAI is not enough and then introduces the Enterprise Service Bus. ESB as an architecture pattern and an infrastructure is discussed at length. The functions represented in an ESB are made clear and the various flows of messages through and process executions in the ESB are illustrated. Until deep into this chapter, no specific ESB product is introduced and the discussion is purely conceptual – and good too. At some point, the author reveals his colors: JBI is the container that he has in mind and the Service Engine architecture of JBI is the way he envisions the ESB concepts to be implemented. Still, most even of this part of the chapter is relevant in the presence of other ESB products as well. The chapter is a long one – 90 pages – and I think it is pretty good. It should probably have come a little earlier on in the book and some of its contents make parts of at least chapters 1 and 2 redundant. For anyone contemplating service based integration and the introduction of an ESB, it is excellent reading and that applies as well to current users of ESB technology who want to validate their current approach and look for improvements. Chapter 11 is a fairly detailed example of using Oracle Service Bus for achieving decoupling. The chapter starts with a fine expose on what types of coupling exist and what strategies can be applied to reduce the extent of coupling. One good example is a search operation in a web service that not only accepts search criteria but also meta-data that specify the ‘batch of search results’ to return – to retrieve search results in multiple sets of say 10 or 20 records. The chapter presents the WSDL and XSD for such a service that can offer functionality that does not rely on state being retained by the service yet helps to simulate a multi-request conversation. The second part of the chapter introduces OSB. Unfortunately and a little surprising, it does not explain the implementation of the ‘search with state’ service discussed in the chapter, but a very basic service instead (without demonstrating a test invocation of the service, which would have made it more tangible). Of course it is only the reader’s first introduction to OSB – so very complex examples are not in order. As a newbie reader by the way, I would be thoroughy confused at this point: we have seen BPEL being used for these integration patterns, read a dicussion of a JBI based service container, seen basic Java and C# baed Web Service implementations and now this OSB example. The rationale for when to use which tool and the disparity between the tools is a tough act to follow. Chapter 12 is about the next level of integration – from low level pure XML based EAI (chapter 4) and synchronous service interactions without or with ESB via the process (or at least orchestration) based complex ERP integration with BPEL (in chapters 5 and 6) it introduces BPMN for true Business Process design and implementarion. Its title is Integrating BPEL with BPMN using BPM Suite . Note that this chapter is very much about the Oracle BPM Suite 11g, not so much about BPMN in general. It is one of the most recent chapters (I believe from the SOA Suite 11g Developer book by Matt and Anthony). It builds on the elaborate introduction of the SOA Suite 11g design time and run time environment and the development of SCA composite application they provide earlier on in that book. For the readers of only the compendium, it may be a big leap. Additionally, the example in chapter 12 of designing and implementing a BPMN process builds on services developed earlier on in the book this chapter is taken from. Readers of just the compendium will be wondering where certain services – for example the EmployeeTravelStatus service – appear from. Of course in order to deploy and test the Composite Application on the BPM/SOA Suite 11g run time, the reader needs instructions on where to get the software and how to get it installed. However, this chapter does not provide such instructions nor does it refer the reader in any way. Although I am impressed – as someone well versed in SOA Suite and BPM Suite – how much material the authors have been able to squeeze into this chapter, it must be a challenging roller coaster ride for a reader not already familiar at least with the latest Oracle tooling for SOA. Note the chapter states “Oracle SOA Suite 11g PS2 introduces an interesting new feature—BPMN 2.0 execution engine” which is not entirely the best way of putting it: on top of SOA Suite 11g, one has to acquire an impressively priced additional license for BPM Suite 11g in order to get access to the BPMN based functionality. Chapter 13 – SOA Integration—Functional View, Implementation, and Architecture – starts with a little reiteration: chapter 1 and 2, a little bit of chapter 3, and many other chapters besides are repeated to some extent in chapter 13. What this chapter adds is the link with legacy applications running on mainframes and how to integrate with those. It does so using technology from Oracle – and a fairly old generation of the Oracle stack – all predating the BEA acquisition and subsequent migration of Oracle’s middleware to WebLogic Server. Most of the solutions described though will still be valid, using today’s set of products. The chapter also describes mainframe interaction ‘the IBM way’ (briefly) because, as it writes, “No chapter on Legacy SOA Integration can be written without considering the IBM mainframe hardware and software that have an impact on Legacy SOA integration.” Chapter 14 seems to be the direct successor to chapter 13: “We are now going to show an example in detail for—Web Enablement. We will use JSP, JDBC, the Oracle Legacy Adapter, Oracle Application Server, Java EE Connector API, and XA transaction processing to show a two-phase commit across an Oracle database and VSAM on the mainframe.” This chapter demonstrates how – again, using somewhat outdated technology- a Java web application can be created with data taken from the mainframe. The reader is refered to OTN to find white papers and tutorials that help set the scene for this chapter. Unfortunately, those resources are no longer available at this moment. Another set of tools that this chapter relies on quite heavily is the Relativity tool set (RMW Application Analyzer, RMW SOA Analyzer, RMW Architect, RMW Business Rule Manager) ; this set helps analysis of mainframe COBOL applications for the purpose of exposing them in a SOA environment (or at least that is my understanding). The URLs provide in this chapter are unfortunately no longer valid. Details on Relativity can be found though at:. The chapter provides a very detailed overview of inspecting a legacy COBOL application and describes how to open it up with Oracle Legacy Adapter to allow its integration through a JCA connection opened from a Java Web application (without using ESB, OSB or BPEL). For readers with a COBOL background or interest, I suppose this is all valuable. To the rest of us, it is too much detail in my opinion. After chapter 14 there is another section, one of the most interesting of the book in fact. However, for some reason that does not become clear, it is not a chapter but an Appendix instead. Appendix A – Establishing SOA Governance at Your Organization. The topic is important enough to warrant a real chapter – or perhaps Appendix means place of honor. The discussion to my taste should have started with a discussion of the objectives – what do we want to achive with governance. Other than that, I consider it a good discussion of the roles involved and the areas and topics involved. It primarily explains what should be taken care of, not necessarily how that could be done. But still, Governance is frequently overlooked so I am glad with this appendix. To the end of the appendix, some tools and technologies that may aid in implementing Governance are briefly discussed, on a conceptual level, not naming any specific tools (except when Microsoft BizTalk and Windows Communication Foundation are briefly – and quite redundantly as far as I am concerned- are mentioned). Best of Packt The suffix to the title – Best of Packt – is simply not true. The good folks at Packt do themselves an injustice with this tagline. I have read recent publications from Packt in the area of SOA that are excellent and contain material that is better than what is in this publication. I can only surmise that since these titles are still current, Packt decided it to be commercially unsound to use (much) content from these titles in this ‘Best of Packt’ compendium. Instead, less current content was used. It seems that this publication was not so much seen as an opportunity to update and refine existing content, and correct mistakes or outdated sections, but rather to make some additional money from work otherwise forgotten. The chapter title for chapter 14 is one example of that lack of attention to details that I find quite annoying: Summary The short of it: given the size of the book (700 pages) and the many parts that are useful it is still reasonable value for money. However, if Packt really wanted to publish ‘the best of Packt on SOA and Integration’ they could have provided us with much better, more up to date content and ensured that the pieces have a better mutual fit with less overlap. The book’s homepage: E-book sells at 13.69 euro ($9.99 for the Kindle-version), the printed version at 35.09 euro (that is $49.99 in the Amazon-store –)
https://technology.amis.nl/oracle/book-review-do-more-with-soa-integration-best-of-packt-december-2011-various-authors/
CC-MAIN-2021-43
refinedweb
3,125
56.08
Raising GnuPG key size limits and making ideal .conf files. Here is a link to a bash script that increases the GnuPG key size limit beyond 4096 bits. The page also provides an ideal GnuPG .conf file. Please provide input and recommended changes. Ultimate-GPG-Settings @me: "It points out that the recently uncovered Android pnrg "error" was the work of an Intel employee (Yuri Kropachev) (2) back in 2006." A link on the Android PNRG fiasco would be useful:]]> For those of us who are looking to eradicate all traces of RDRAND from our systems. It turns out Intel's RDRAND was added to GCC's libstd++ on 9th September, 2012. Your system may be affected (or is it infected?). For removal, a recompile of the library might be in order. 2012-09-09 Ulrich Drepper <drepper@gmail.com> Dominique d'Humieres <dominiq@lps.ens.fr> Jack Howarth <howarth@bromo.med.uc.edu> PR bootstrap/54419 * acinclude.m4: Define GLIBCXX_CHECK_X86_RDRAND. * configure.ac: Use GLIBCXX_CHECK_X86_RDRAND to test for rdrand support in assembler. * src/c++11/random.cc (__x86_rdrand): Depend on _GLIBCXX_X86_RDRAND. (random_device::_M_init): Likewise. (random_device::_M_getval): Likewise. * configure: Regenerated. * config.h.in: Regenerated. Also, check out the following post from the liberationtech (1) list. It points out that the recently uncovered Android pnrg "error" was the work of an Intel employee (Yuri Kropachev) (2) back in 2006. 1. liberationtech 011604 2. harmony 872]]> Improved patch to replace openssl prng with bytes directly from /dev/random #include <fcntl.h> static int ssleay_rand_bytes(unsigned char *buf, int num, int pseudo) { int r, fd; int n = 0; if (num <= 0) return 1; if ((fd = open("/dev/random", O_RDONLY)) >= 0) { do { r = read(fd,(unsigned char *)buf+n, num-n); if (r > 0) n += r; } while (n < num); close(fd); return(1); } return(0); }]]> As a follow on comment regarding removing rdrand from both Linux and openssl, and using a DVB-T hardware dongle as a source of entropy, here is a quick patch to bypass openssl's crazy RNG with a direct call to /dev/random. Adjust as needed, improvements welcome. This is for openssl-1.0.1 on a debian based system. 1) crypto/rand/md_rand.c - rename or delete the existing ssleay_rand_bytes() function and add: #include <fcntl.h> static int ssleay_rand_bytes(unsigned char *buf, int num, int pseudo) { int r, fd; if (num <= 0) return 1; if ((fd = open("/dev/random", O_RDONLY)) >= 0) { r = read(fd,(unsigned char *)buf, num); close(fd); if (r == num) return(1); } return(0); } 2) build openssl to /usr/local ./config shared --prefix=/usr/local --openssldir=/usr/local/ssl -DOPENSSL_NO_RDRAND 3) install patched openssl (sudo make install) 4) modify /etc/ld.so.conf (if needed) - I added /usr/local/lib64 as first line then ran ldconfig 5) verify modified crypto library is used ldd /usr/sbin/openssl libcrypto.so.1.0.0 => /usr/local/lib64/libcrypto.so.1.0.0 Hope this helps,]]> Somebody mentioned Bitmessage above. I would like to point out that it isn't actually anonymous against ISP:s, WiFi hotspot owners or NSA since public key requests and replies can be tracked (it doesn't *route* traffic at all). I2P with Bote mail is however interesting. Bote mail is DHT based serverless mail, with optional random time mail relay delays.]]> If you have a recent Intel chip with RDRAND (it will be listed in /proc/cpuinfo), you can disable its use by the Linux kernel and openssl library with the following two steps: 1) Disable rdrand in the linux kernel Add "nordrand" to /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="nordrand" 2) Disable rdrand in openssl Add the following to /etc/environment OPENSSL_ia32cap="~0x4000000000000000" After a reboot, you can verify removal of rdrand by checking /proc/cpuinfo and "openssl engine -t". A cheap solution to the entropy problem is a $14 usb DVB-T dongle and the following software: rng-tools Someone should take a look at the openssl rng -- which under Linux is seeded by /dev/urandom and then does its own crazy thing. A quick patch to bypass would be welcome.]]> @Clive Robinson I agree, the first task of any real TRNG is to maintain the integrity of the actual entropy stream and enable suitable measurements (FFT's DFT's etc) to be made on the real entropy pool. Without real entropy all you end up with is a circular proof of the "whiteness" of the SHA algorithm output, unfortunately this "whiteness" is maintained without any real input entropy (even a linear counter feeding an SHA algorithm will look good after the algorithm. Effectively this makes all entropy tests that occur after the first SHA stage nothing but tests on the known and well proven whiting effect of seeded block / stream cyphers and hashes. It might pass all the test but its still not random. For TRNG's I personally try to achieve a worst case source randomness of about 12bits. That is one part in 4096, not great by crypto standards but never the less not actually an easy task. For a system with 100mV of permissible supple ripple (other signal) I need to have about 60dB Power supply rejection ratio. This is possible with good amplifier design but it is not trivial. BTW In most CMOS Inverter based jittery ring oscillator TRNG designs the PSRR of each element is typically less than 20dB (about 1 part in 4) beyond that all you have is complexity that obscures the weakness of your source. @ RobertT, I've not attributed much to Intel's RNG design team after I realised that accepted custom and practice was not on their objectives list, thus it appears that common sense was likewise not on their list. As you know entropy sources tend to be "delicate flowers" and need much "care and attention" if they are to "give of their best". Accepted custom and practice back last century was to provide direct access to the entropy source buffer output so you could "test" it whilst in operation to see if it was failing or comming under undue influance etc as well as being able to use your own de-bias and filtering etc. For whatever reason Intel's team decided to only let you get at the output from a hash function thus only limited tests at best could be carried out. As I've said a few times befor a hash function does not magicaly create entropy thus it's "magic pixie dust" thinking which at best only obsficates poor design. Others however have pointed out that it could easily hide a fully determanistic generator. And they have a point even a simple counter xored with a constant will look random on the other side of a hash function. If they don't have a good analog team then I dread to think what is actualy comming off their "thermal noise" source and how they get it up to logic levels without it suffering "undue influance". As you say "complex but it's not white noise"[1], and as Hanlon's razor has it, "Never attribute to malice that which is adequately explained by stupidity". @ AC2 As I said I'd only heard that Linus had throw the toys out of the pram, and decided to "only use" the Intel RNG. The piece you quote aludes to other entropy sources which is thus not "only use" but "also use", which is a whole different can of worms. However his statment makes me pause for thought on his knowledge, True RNG's are not CS-PRNGs thus cryptography has little to do with their design and use. Further from an entropy point of view a fully determanistic generator effectivly has zero entropy, thus cann't be used to improve the entropy in the pool, just stir what's there, which can actually be a problem that reduces the entropy in the final output. To see why you have to consider how the safe guards on the entropy pool work. Obviously each time you read from the pool you leak a little bit of information about it's state, and part of this leak is some of the entropy in the pool. Thus each time you read from the pool you decrease the entropy there, no matter how much you stir the pool the entropy does not go up. What makes the entropy go up is the real entropy hidden in the faux entropy of the entropy sources entering the pool. Now obviously you need to somehow "rate limit" the number of reads so that the loss of entropy is less than or equal to the gain of real entropy from the sources. But how do you do this, well theres three basic ways, 1, Cap the number of reads per unit of time. 2, Cap the number of reads to a fraction of the input bandwidth. 3, Estimate the entropy in the pool by some measure of the pool or sources. The first is very easy and the second only marginaly harder, however neither is actually connected to the actual change in entropy in the pool, thus have to be set conservativly to avoid draining the pool of entropy. The third method is difficult at best because it requires seperation and measurment of real entropy from the faux entropy that just stirs the pool. One reason it is difficult is there is no reliable measurment that can tell between real and faux entropy, there are just measurments that can show one aspect of faux entropy being present to a certain probability. So you end up with a whole battery of tests that almost invariably cannot give real time results. So most safe guards will be set to the first method with a few using the second method. Often they will be set far to optomisticaly due to not being able to measure the real entropy even remotly reliably. So having a determanistic input which has no real entropy but produces faux entropy that passes tests will have the side effect of setting the safe guards way to optomisticaly. Does this matter? Ordinarily it would be considered an almost philosophical question, however when it comes to security it's very real as lives have been lost to poor security. Thus we take a leaf out of the crypto play book to get some assurance, we don't use the raw output from the pool, we instead put it through one or more crypto primatives such as a block cipher or hash and use their output. This gives us, as an assurance, the percieved strength of the crypto primative we use. Thus what we realy end up with is a tweakable CS-PRNG where the tweaking comes from a mixture of real and faux entropy. In most but not all cases this is acceptable. [1] For those that don't know what "white noise" is welcome to the world of engineers where there are more types of noise than you can shake a stick at. Noise is generaly considered an undesireable side effect of basic physics which interfears with a wanted signal. In general engineers classify noise by it's statistical properties or by the physical effect that causes it. For use in TRNGs the most desired property in white noise is "independance" and it is this that distinquishes real from faux entropy. Faux entropy always lacks independance in some way, the question though is can it be detected or eliminated. For more info have a look at RdRand is used exclusively for memory address randomization in the Linux kernel, so technically an adversary could defeat ASLR and know exactly what the memory layout is. /* * Get a random word for internal kernel use only. Similar to urandom but * with the goal of minimal entropy pool depletion. As a result, the random * value is not cryptographically secure but for several uses the cost of * depleting entropy is too high */ static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash); unsigned int get_random_int(void) { __u32 *hash; unsigned int ret; if (arch_get_random_int(&ret)) return ret; [...] ]]> OpenBSD they have used /dev/random for basically everything, which is good. The PRNG pool management has a set of nested data recursions that mix newly collected randomness from interrupts and other sources with the timing of extractions. It's probably the best PRNG out there. In light of this clusterstorm of NSA spying and shoddy Linux security practices over the years I'm probably going to switch everything I can over to OpenBSD, and the linux development machines I run offline will have to start doing deterministic builds, because now we can't even trust the compiler since it's feasible all signatures and keys to repositories and source downloads have been altered. I love turning 5hr builds into 10+hr builds thanks NSA. @Clive Linus was happy to include Intel's rdrand and responded to a recent petition to get it removed with the following ". " @Clive Robinson I'd be careful to not attribute Intel's RNG design incompetence to anything other than incompetence. Intel has never had a competent Analog, RF or mixed signal design group and without this DNA in the groups background you'll find that even first order effects like PSRR (Power Supply Rejection Ratio) on the RNG get lost in the PseudoNoise stage and are never directly measured, tested or even testable on a fully working chip. Part of the problem is that most real life random noise exhibits a bathtub type curve, at low frequencies (where most of the noise is) it is dominated by 1/F noise (typical 1/f today corner is about 100Khz). Above this frequency the noise will typically be between 1nV/sqrtHz and 10nV/sqrtHz, At very high frequencies we find the noise level hooking up again BUT this is a VERY difficult area to work in because sample accuracy/inaccuracy folds back into amplitude noise, in other words phase noise (jitter) becomes amplitude noise. Unfortunately in the most useful region the low magnitude of the noise means that second order effects like PSRR and substrate noise tend to dominate the overall results. This causes many people to resort to directly using "noisy" self oscillating LFSR's these cells primarily fold clock jitter into pseudonoise. its really complex BUT its NOT whitenoise. @ Randell Jesup, Yup many RNGs were quite bad (and many still are). I've been designing TRNGs and CS-PRNGs off and on for over a third of a century mainly on the hardware side using individual components for the noise sources and limited functionality IC's (analog OP-Amps and 74xx TTL chips) to provide a sufficiently usable source for a micro controler to do the software bits. I've learnt a lot in that time, mainly just how badly other people do it and how they don't take the time to lift real entropy out of the faux entropy... But it's somewhat surprising to reflect that most programers don't know even today the advantages and disadvantages of various RNGs and which one is most appropriate for their application and why, even some quite famous programers... @ Dirk Praet, I was vaguly aware that Linus had thrown the toys out of the pram over the RNG in the linux kernel. I only had to hear the words "Intel onboard..." to get a shudder down the spine and a queasy feeling. To put it simply I've said repeatedly for something like fifteen years that the Intel "on chip" RNG went about things all wrong ( search this blog for my name and "magic pixie dust" to see some of them) and thus no confidence should be held about the quality of output.]]> @ Clive But if we are talking kernel level malware the best thing to leak would be the input to the one way functions which is sometimes an "entropy pool" or apparently in the current Linux setup the output of the Intel Chip (supposed) TRNG I suppose you are aware of an ongoing discussion at Reddit about this: @ everybody else: has anyone bothered to sign Bruce's new public key yet ?]]> @ Clive Robinson: When I was building a netrek client for the Amiga in the days before this new-fangled "http" thing, people had decided they were annoyed enough with other people compiling clients that auto-targeted, etc, that they started releasing clients with RSA-derived certs (with permission of RSA). I went to get my Amiga client signed, and before publishing the public key I logged into a netrek server to make sure it kicked me out and didn't crash the client. To my surprise, it let me in and said "jesup (HPUX xxxx)" (or something like that). Apparently the random number generator used in the keygen on the Sun 3/50 I ran it on wasn't so hot.... GIGO (or in this case, non-GI, non-GO)]]> @ ATN, Right by random data you are realy talking about the TRNG output. The answer is yes if you can work the one way functions backwards or if the TRNG is badly designed run them forwards. But if we are talking kernel level malware the best thing to leak would be the input to the one way functions which is sometimes an "entropy pool" or apparently in the current Linux setup the output of the Intel Chip (supposed) TRNG. In either case the entropy pool should be stired at different rates from milisecs to weeks. If I was going to take output from /dev/random I would absolutly not use it "raw" I would take a number of readings and using the time / process id / user keypress timing to shuffle parts of the readings around as well as flipping bits and use the result as the key and seed input to AES-256-CTR. If done right this will help break any determanistic link between RND data and actual usage as KeyMat etc.]]> @Clive Robinson > Your question is lacking in what you mean by random data. I mean the random data that software has used up to now. In Linux, I mean what has been read from /dev/random. There does not seem to be a lot of software which uses random numbers - most of the program want to be consistent and produce the same output from the same input. So it does not look like an impossible task for a "virus" to leak all the data that has been produced by the random generator and send that to another computer. Now, if you have all the random data used by "gpg --gen-key" and the public key, can you deduce (later on, on a big computer) the private key? Same, if you are using an automatic password generator and know every numbers read from the random number generator, can you deduce the password which has been generated? @kevinm -- cool, thanks.]]> @Jacob I still use Skein/Threefish implementation because they work well on Android. As for PGP keys, generate a gigantic password because of the ease they can break bitcoin keys that were generated using brain wallet. People with 20char passwords are finding their coins stolen from the block chain. Also read tobtu/hashcat forums where they have been slicing through lastpass and 1password.]]> @ Foxtrot "I don't understand Public Key encryption well enough to ..." You might want to dig up a copy of "PGP - Pretty Good Privacy" by Simson Garfinkel. Very good read. I don't know if it is still in print. My copy, 1st edition, is dated 1995.]]> @ ATN, Your question is lacking in what you mean by random data. However the answer to "is there a way to recover the private key from the public key" is yes in theory but currently impracticle in practice. What we do know is there is no proof that it can not be done extreamly quickly, just that nobody has published either a proof or a method, so the game is open either way currently (as far as we know publicaly). What we also know with keys using PQ primes as the base, is if you know one you know the other, and that there are very fast tests that show if a public key shares a prime with another public key. So if your pubkey is tested against all the other pub keys on the internet it can quickly be found if your key has a prime in common. This would not realy matter if we had good random number generators but invariably we don't thus with embedded systems that generate pubkeys on first start up the amount of entropy is often less than desired by a very large margin. Tests have shown that there are a lot of pubkeys out there that do share common primes so much so that it defies probability of it happening unless the random generator is crocked. Further investigation has indicated that the source of these improbable pubkeys tends to be common to the same software used... Whilst pubkeys with common primes is bad enough on it's own it raises another issue which is "limited search space". For the same prime to be selected by two different bits of kit the probability of it happening is related to the range of random numbers produced. The smaller the range the greater the probability of it occuring, thus the greater the odds of tailoring a simple search to match other pubkeys. It's an odds on bet that the NSA know of every weak random number generator there is in commercial equipment and have charecterised their limitations and in some cases built systems to exploit this. Worse is the fact that the systems most likely to have crocked random number generators are those that are embedded devices such as network devices such as firewalls, switches and routers, which is exactly the sort of gear the NSA are beleived to target... But there is another issue which is kleptography [1] which is especialy bad in generating public keys due to the very high level of redundancy possible. Basicaly you can write a program to generate public keys that using a hidden public key in the software of about a third of the length of the public key being generated, can hide the starting point of the search for the first prime used in the users generated public key. The problem is even if the software prints out the two primes and the generated public and private keys, there is no way the user can analyse them to find the use of the hidden public key. However as the software writer you (may) have access to the private key coresponding to the hidden public key and you can use this to decipher the search start point, you then feed this into the rest of your prime search algorithm and out pops the first of the primes used to generate the public key pair in a matter of a few seconds or minutes, you then use this to find the second prime from the users public key and thus have their private key... [1] The idea originated from Adam Young and his academic supervisor Moti Yung,]]> Not being a specialist, simple question: is there a way to recover the private key from the public key if you have the last month worth of random data generated by the PC? Same question, is there a way to recover the intermediate keys (after uncrackable authentication has been done) to read the content of messages knowing the last few days of random data?]]> @Bruce I sense a need for an update to Practical Cryptography/Cryptography Engineering as soon as the dust settles.]]> @Bruce 1. Do any of the Snowden documents change your views on two factor authentication? 2. Related to that, what is your view of the value of soft pki certificates v. smart cards with embedded private keys? @LossOfTrust And even if you could somehow verify the security of an open-source project with 100% accuracy by looking at its code, you still don't know whether you can trust the compiler, or the compiler's compiler, or the OS, or the CPU, or... Obviously, perfect security does not exist, so you'll just have to make a decision which product you deem most likely to be trustworthy. That is probably more effective than complete paranoia.]]> Why have you not signed your new key with your old key? This is basic stuff. Recently Cory Doctrow claimed that 1025 bit asymmetric encryption was twice as hard to crack as 1024 bit asymmetric encryption. I'm starting to get sick of this kind of thing. @Foxtrot For a really basic concept of how public key encryption is used, you can start with this video here.]]> Bruce, when you participated in the SHA-3 competition, was there any hint of any gov agency's attempt, influence or desire to weaken the Skein or the winning Keccak algorithm?]]> @newbie question Good question, the difference is due to the key being signed. When you import the key from the MIT keyserver you see some text that begins with "gpg: key EDACEA67: "schneier " 10 new signatures" @Dirk Praet Not trolling; he simply never replied to my earlier requests, and his response in that thread was missed by me. Thank you for pointing it out. I can't trust any commercial encryption product, as the NSA may have weakened the code. And I can't trust TrueCrypt, which might be an NSA project. Open Source encryption software might similarly have been weakened. And the NSA could have decryption capabilities beyond what experts examining Open Source software know. So basically, there is no reliable source of security software.]]> @R Read. Whilst Bruce didn't state which version of Windows he uses, he does state that he currently uses it. As such, it is questionable as to how secure his system really is. Now wondering if Windows can even be configured so that if the right Bluetooth device is not connectable, at bootup, the system automatically goes into "failure mode", the way that one can configure Android devices, and Linux boxes to "fail", if the appropriate Bluetooth device does not connect to it.]]> @Private I had thought of that as well and was a bit surprised that he did not. Then again the $5 wrench solution negates that too. @SteveJ If you doubt the validity of his new RSA public key, you can ask him to sign it with his previous RSA private key. Actually he should have done that by his own judgement]]> Can someone explain the difference between ascii strings between the ---BEGIN ... --- and --- END .... --- blocks in the armored public key linked to in the current post by Bruce and the one under EDACEA67 at. The first few lines are the same, after that are differences due to version? Or are they different keys and the similar initial stuff is possibly email info?]]> I suppose this other 'Curious' guy/girl above might be fishing for some kind of admission from Bruce in having handled documents as such. Or I am slightly paranoid here in thinking that such a concern would matter at all. Having said this, I do not have an overview of what everyone writes here so I feel abit dumb expressing my concern here (because it might have been irrelevant in the first place).]]> @abcdefg - I wasn;t sure about point 2. I couldn't remember if the CA could only certify a site cert or they could completely fake the original site cert for a new site. @ Impossibly Stupid You should assume that Bruce himself is fully cooperating with the NSA. He has been unwilling or unable to say that he has *not* received a NSL You're probably just trolling to live up to your handle, but Bruce did answer this question a couple of threads ago: . @ Curious I wonder if you could comment on some of the specifics you used for creating your new key But he did in reply to a similar question from @Sami Lehtinen to the top of this thread. If you guys can't be bothered to read up on the whole thread and those related, please go somewhere else. You're wasting Bruce's time and that of many other people who actually care about what is going on. @NobodySpecial But Mr Bruce (allegedly) Schneier - the only way I know this is really you is the SSL cert for the site. Which is issues by Symantec who have been fully cooperating with the NSA . 1) What you need is certificate pinning (or public key pinning). See here: Without pinning, HTTPS fails to MITM attacks which replace the cert with another one also accepted by the browser. 2) As far as I know, a CA issuing a cert only knows its public key, not the private key. The latter is known to the website only. Additional hint to point 1: If you use certificate pinning, you can disable the default browser certificate validation via CRL and especially OCSP, to prevent tracking of visited websites by CAs. Also, there are many other browser configuration tweaks which can be used to harden security and privacy. And are they not the responsibility of the website owner, but of the user.]]> Given that your access is to a public blog with public posts, I'm not sure why you'd worry too much about the cipherspec.]]> @Lol If you are using * Firefox: open about:config, search for RC4 and disable the booleans * Opera: go to settings-->security-->protocols and uncheck RC4 * ?? and then you get AES256/DHE,1024,RSA,SHA]]> My connection to this site using RC4-128. Well... @Perseids - I think it's rather Mr Schneier who is a "person of interest". It seems likely that people with interesting stories about surveillance would contact him - presumably using encryption. Bruce, now that you had a good look at certain documents and are now replacing your keys shortly afterwards I wonder if you could comment on some of the specifics you used for creating your new key. GnuPG on Windows, what version and binary/source-compiled? Did you do something different in the ways you chose algorithms, key-lengths, random number, OS used and how you will store your key? Just curious :)]]> so, how should an up to date gpg.conf look like? I pruned mine from using shorter symmetric keys default-recipient-self default-key XYZ987 hidden-encrypt-to 123ABC #try-all-secrets s2k-digest-algo SHA512 s2k-cipher-algo TWOFISH enable-dsa2 personal-cipher-preferences TWOFISH AES256 CAMELLIA256 personal-digest-preferences SHA512 SHA384 SHA256 personal-compress-preferences BZIP2 ZLIB ZIP #cipher-algo TWOFISH AES256 CAMELLIA256 #force longer keys? force-mdc armor default-preference-list TWOFISH AES256 CAMELLIA256 SHA512 SHA384 SHA256 BZIP2 ZLIB ZIP]]> It appears that you are forcing everyone to access your blog via https. I am unable to read it now with my old system/browser, as they do not share any keys (it's that old.) However I don't care if anyone knows I'm reading your blog, and the content is publicly available anyway. In this case, Shouldn't you let the user choose whether they want to use https or unencrypted http? Shortly after a public announcement that Bruce is working with the leaked documents, a new set of keys makes there way on to his website. What assurances do we have that it is even Bruce updating the website? How do we know he hasn't been whisked off to some secret facility and was forced to create new keys? I'd suggest a video of Bruce reading the fingerprints, but that wouldn't really prove anything either. Could be relevant?]]> @NobodySpecial "But Mr Bruce (allegedly) Schneier - the only way I know this is really you is the SSL cert for the site. Which is issues by Symantec who have been fully cooperating with the NSA ." You should assume that Bruce himself is fully cooperating with the NSA. He has been unwilling or unable to say that he has *not* received a NSL. There is no reason to think he can be trusted, and the same is true of anyone else who can't speak freely. @Foxtrot As long as your not putting your private key online (you can put your public key, like Bruce and I have done online, just not your private key) then it's more secure then not using it. @dafs Bruce S is now working with Snowden docs. He's already revealed a lot about his new workflow. But telling you where he stores his private keys probably isn't in the cards. I am curious about how long it will be before the FBI get warrants to seize non-journalist's devices who are working with Snowden docs. Schneier's will be the most secure of the bunch. But, what is his exposure for simply having the docs in his possession, which he admits, even though the are not accessible? The same protections afforded journalists will not be extended to him, though he may start positioning himself as a journalist to cut them off at the pass. I am actually surprised the publishers holding control of these docs don't insist new eyes only access and review on their premises. Forewarned is forearmed. (Bruce, feel free to not approve this comment and/or delete, if you wish.]]>
https://www.schneier.com/blog/archives/2013/09/my_new_gpgpgp_a.xml
CC-MAIN-2016-36
refinedweb
5,577
65.96
Dependencies¶ This driver depends on: Please ensure all dependencies are available on the CircuitPython filesystem. This is easily achieved by downloading the Adafruit library and driver bundle. Usage Example¶ Of course, you must import the library to use it: import busio import adafruit_amg88xx The way to create an I2C object depends on the board you are using. For boards with labeled SCL and SDA pins, you can: import board You can also use pins defined by the onboard microcontroller through the microcontroller.pin module. Now, to initialize the I2C bus: i2c_bus = busio.I2C(board.SCL, board.SDA) Once you have created the I2C interface object, you can use it to instantiate the AMG88xx object amg = adafruit_amg88xx.AMG88XX(i2c_bus) You can also optionally use the alternate i2c address (make sure to solder the jumper on the back of the board if you want to do this) amg = adafruit_amg88xx.AMG88XX(i2c_bus, addr=0x68) Pixels can be then be read by doing: print(amg.pixels): Table of Contents¶ Examples
https://circuitpython.readthedocs.io/projects/amg88xx/en/latest/
CC-MAIN-2019-18
refinedweb
166
62.58
Created on 2010-01-03 19:13 by rgammans, last changed 2010-03-08 15:35 by flox. This issue is now closed. The following sequence causes isinstance to raise an exception rather than to return False. >>> class foo: ... pass ... >>> import collections >>> isinstance(foo,collections.Callable) True >>> isinstance(foo(),collections.Callable) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/abc.py", line 131, in __instancecheck__ return (cls.__subclasscheck__(subclass) or File "/usr/lib/python2.6/abc.py", line 147, in __subclasscheck__ ok = cls.__subclasshook__(subclass) File "/usr/lib/python2.6/_abcoll.py", line 117, in __subclasshook__ if any("__call__" in B.__dict__ for B in C.__mro__): AttributeError: class foo has no attribute '__mro__' >>> Confirmed with all others "isinstance(..., collections.Hashable)" and similar. According to the documentation, we might expect the same behavior as for new-style class. Well the fix is easy for old-style classes, since we just have to use hasattr(obj, '__call__') in that case. Wow, critical issue, are you sure? Here is the patch, with tests. IMO, the tests may be ported to 3.x. "subtype == _InstanceType" can probably be replaced with "subtype is _InstanceType". Otherwise, the patch looks good to me. fixed Fixed in r78800. Additional tests backported to 3.x.
http://bugs.python.org/issue7624
CC-MAIN-2014-15
refinedweb
215
63.76
Hey Guys : Here The problem is :to have a program to check the -(sys.argv) is long enough to have more than 3 elements if not then print "input more " -check (sys.argv) contain a specific elements for ex(P) ,& if P is found in the given elements then it should also check for P+3(that mean the 4th elements after p) if also P then print "fount at )then the position at which the 1st P is found !! the program can be run from the :Administrator: command Prompt .. well I need a little guidence to solve this : here what i have done so far though I wont think this is enough at all !! import sys for i in (sys.argv): if "p p" in (sys.argv): print yes the below example ,I managed to solved it myself & (Not related to the above problem )to be run from command Prompt /Administrator : the problem was to type what ever you type after executing the program in the command prompt & using (import sys & also sys.argv) : for example try to point where is your file is : c:\users\robert\desktop\name of your file.type anything after executing the above it will appear like this : 3 arguments & then it enumerate the name & length of each ... import sys print "No.of arguments is ",len(sys.argv) print" " print "Now I will Print the Arguments & Length of each";print " " for i in (sys.argv): print i,", length ",len(i)
https://www.daniweb.com/programming/software-development/threads/290332/need-python-help
CC-MAIN-2016-50
refinedweb
246
80.41
UPDATE – Relevant for S/4HANA 1709, 1610, 1511 One of the most common questions we receive from customers as part of the S/4HANA RIG team is how to Modify Currencies in Standard Fiori Launchpad KPI tiles. Standard delivery of KPI tiles configuration shows currencies in EURO, if you are working in an implementation project where currencies are different than EURO you will need to modify the KPI tiles to show the currency you need. Depending on the version you are using, you can decide to follow different approaches described in the following documents: Step-by-step guide to modify currencies in standard KPI tiles – All versions Step-by-step guide to modify currencies in standard delivered KPI tiles – S/4HANA 1605 or higher. If you are following the “Copy KPI” approach, consider that as a development best-practice you will need to create copies of the KPI’s which you will later modify to suit your needs. Once you have modified your KPI and displayed it in your Launchpad you may face issues assigning authorization objects to end-users. This is a common problem as developers are usually assigned full authorizations in the backend system. In the following table, you will find the most used backend authorization objects required by KPI tiles Hi, Jorge. I’m following up Step-by-step guide to modify currencies in standard KPI tiles but y have this error: /SSB/CORE/ – 302 = Evaluation E.1490790264178 not saved; cannot find related indicator What do I have to do? References: With Namespace = ‘C’, in query to Source DDL “/SSB/DESIGNTIME_INDICATORS”, not exist, but exit in S (SAP) namespace a record. Thank in advance. Hi Edwar, Looks like there was a problem while copying and saving the evaluation. First, make sure there are no authorization errors in the frontend server usign tx:SU53; second, try copying the evaluation again, after successfully copying the evaluation refresh the web page clearing browser cache and confirm that the evaluation exists. Regards, -JB- Hi, Jorge. Thank you for response. But, SU53 is clean. I test again, changing to internet explorer, before I used Chrome, but error persist What do I have to do, to save a record with Namespace ‘C’ ??? Hi Edwar, I was able to solve the issue by copying the KPI along with all of it’s evaluations into a new KPI and then changing the currency in the new KPI’s evaluation. Best, Aashrith. Hi Jorge. I followed the steps mentioned in the document. But the customizing request does not contain newly create evaluations. Any idea what could be the reason. How should i transport the evaluations to client-200 ? My system landscape client/100 – Front End Server client/200 – Backend Server Thanks. Naga Great guide. Very useful for me. Thank you Jorge Baltazar. Hi Jorge Thanks for the nice article. We were able to modify the currencies using the step by guide. We have couple of follow up questions on that, could you please let us know how to handle them Hi Saubhagya, Unfortunately I am not aware of any workaround, you would need to create as many copies of the evaluation as needed, both for the “main” and “associated” KPI’s. Regards, JB Hello Jorge, I came here from this other related blog: But I’m asking here since this one is most recent. I’m in HANA 1511. I’ve been following the guide with no problems and I’m at the last step of adding the new tile from the catalog (step 4.2 in the PDF document). However I’m not able to see my new tile. I thought it was a problem with the chosen catalog in the new tile (should it be the Technical catalog as it is shown when displaying the standard tile in the KPI Workspace? or the Business catalog as it is shown when adding the standard tile in personalization?). But in either case mine is not shown. Could it be a catalog cache not refreshing or some similar issue? Do I have a way to force a refresh? Many thanks! Hi All, Even after changing currency from EUR to INR using KPI workspace app.Still the issue is same. I modified standard CDS code(default to INR) using hana studio.Then it started to work. But when I did a transport in SAP to a different environment the issue is back again with “EUR” currency. Need help form you guys. Regards Prasad Yallapu Carbynetech. Hi Prasad, Hardcoding the CDS input parameters is not the best option, that is why you can customize them from the KPI workspace. To obtain help from SAP you should create an incident and follow-up from there as it appears your issue requires further analysis that what can be provided in a blog thread. Regards, JB OK I’ll reach SAP team for further assistance. Thanks for the info.
https://blogs.sap.com/2017/03/08/fiori-for-s4hana-modify-currencies-in-standard-kpi-tiles/
CC-MAIN-2018-30
refinedweb
817
64
I came across the need to make use of shape recognition in one of the projects in my workplace. I was quite surprised to find out that there wasn't much usable code or examples from the net that I could make use of. The tricky part was that I had only two weeks or so to focus on the image processing/blob analysis part. Hence I designed my own shape/blob recognition algorithm which encompasses a simple core idea, is simple and effective. Do note that I'm not writing an industrial-strength shape recognition code that is based on published research literature. I have contacted a few reputable image-processing software vendors concerning their products and I can tell you that they indeed have very refined tools for specific needs such as defect detection, cell analysis etc. However, I believe there may be some folks out there who'd need some basic shape recognition functionality integrated into their projects but do not have the luxury to buy such products. Therefore, I hope the code contributed here can provide a starting point for some basic usage or a platform for further development. Before we plunge into how the code works, here's just a short preamble to how the shape recognition process works. The logic for shape recognition is derived from a "knowledge database". To train the shape recognition function, you only need to prepare sample images containing the shapes you want the function to learn and pass it into the program. There are a couple of sample image files in the project zip file to demonstrate the recognition of randomly oriented shapes. An additional note on making your own training images - the images have to be of the size of the bounding rectangle of the shape, i.e. make sure your shape fits exactly within your image boundaries. I have also written a method called trimImage() in the ImageProc class which you can use to create a tool for making such images. trimImage() ImageProc We discussed about training the shape recognition function previously but without specifying the recognition criteria. Common attributes would be color, size (scale) and shape. There is really a lot to study into for color and shape recognition. I'm forsaking details into color segmentation from say, a photo-image possibly. And also shape recognition using line hulls, hue invariants... etc. I built the shape/blob recognition functionality around simple input conditions - colors are already segmented and have well-defined color separation at edges. You are free to improve it if your situation is trickier like shadows, gradated edges at the input stage etc. More details are given at the source code on how I did shape, size and color recognition. Credit is given to Eran Yariv's code which I've used for rotating images and the Vidcapture Project for handling PPM images. My code dumps information like the centroid of the blob (location) and also whether the shape exists in the given working bitmap. You can easily dump other information like pixel count, bounding box and color etc., based on your application needs. I prefer to code in C++ because of the control I can have. You can easily port it to VB or any other UI-based software framework once you understand how the shape recognition class works. Refer to the Main.cpp to see how the code runs. There's really nothing intricate about getting it to run. #include "ImageProc.h" int main( void ) { ImageProc* ipObj = new ImageProc(); // Training ipObj->loadTrainingImage( "training.ppm" ); ipObj->loadWorkingImage( "working.ppm" ); // Recognition Color key( 0, 0, 255 ); // set the color of the blob you want to capture ipObj->catchBlobs( key ); ipObj->detectShape(); Output ipObj->markBlobCentroid(); // output map for verification // Cleanup and exit delete ipObj; return 0; } I find the challenge is really in the post-processing of the real-life images that you want to work with. If you do not capture images under very stringent lighting conditions to ensure color uniformity (killing shadows too) and also with high-resolution devices and further image "cleaning" processes, chances are that you would not even come near to some sane shape recognition stage. To put it simple, shape recognition breaks down if your image is not "clean" enough..
http://www.codeproject.com/Articles/10402/Blobby-Shape-Blob-Recognition-Code?msg=4381527
CC-MAIN-2014-35
refinedweb
711
61.26
This is the mail archive of the guile@cygnus.com mailing list for the guile project. robertb@continuumsi.com writes: > > one immediately apparent problem is using `gh_eval_str'. you > > probably want to simply save the return value of > > `gh_new_procedure', an SCM. > > Well, doing that defeats the purpose of trying not to have global > variables! Perhaps I was too idealist to believe that I could avoid > use of global variables in Guile. It was so easy in X-Windows! in your example, does this code gh_set_ext_data(gh_new_procedure1_0("ReadSymbols", ServerReadSymbols), (void*)lib); get called multiply? each `gh_new_procedure1_0' call returns a new procedure object. `ServerReadSymbols' is already global in the C function namespace and "ReadSymbols" is already global in the string pool. you are starting with global data and then calling a function that gives you a new pointer each time called. probably you will want to emulate libguile practice of making `ServerReadSymbols' file static and then in the init procedure (called only once), saving the SCM. then you can pass this SCM around w/ resorting to `gh_eval_str' (which consults yet another global namespace...). thi
http://www.sourceware.org/ml/guile/1999-05/msg00071.html
CC-MAIN-2019-22
refinedweb
181
66.44
Cannot trace error in python pcapy wrapper I’m using python pcapy in a docker container using this piece of code: from pcapy import open_live, findalldevs import sys import traceback p = open_live("eth0", 1024, False, 100) dumper = p.dump_open("test.pcap") devices = findalldevs() print dumper, devices while True: try: print p.next() except Exception as e: print dir(e), e.message, e.args[0] traceback.print_exc(file=sys.stdout) break When I run it I get the following exception: Traceback (most recent call last): File “test_pcap.py”, line 12, in print p.next() PcapError I’ve tried to play with the arguments by changing to different maximum packet sizes and setting promiscuous to True. I’ve tried to get any message from the exception, but it seems the message is empty. I also skimmed through pcapy source code: since the exception in the PcapyError object is empty and the other PcapErrors in the next function are explicit strings, it implies we are falling into the condition in which buf is empty. It seems pcap_geterr returns an empty string because pp->pcap has been closed and the pointer to the pcap exception no longer exists (take a look into the doc). When I run using the loop() method, everything works fine: # Modified from: import pcapy from impacket.ImpactDecoder import * # list all the network devices pcapy.findalldevs() max_bytes = 1024 promiscuous = False read_timeout = 100 # in milliseconds pc = pcapy.open_live("eth0", max_bytes, promiscuous, read_timeout) # callback for received packets def recv_pkts(hdr, data): packet = EthDecoder().decode(data) print packet packet_limit = -1 # infinite pc.loop(packet_limit, recv_pkts) # capture packets I really don’t know the source of the problem or what else to do for debugging it. EDIT I cannot find any error using strace. This is the grep for error in strace output: strace python test_pcap.py 2>&1 1>/dev/null | grep -i error read(6, “\0\0\0t\3\0\0\0intt\n\0\0\0ValueErrort\23\0\0\0_”…, 4096) = 995 getsockopt(3, SOL_SOCKET, SO_ERROR, [0], [4]) = 0 getsockopt(5, SOL_SOCKET, SO_ERROR, [0], [4]) = 0 getsockopt(5, SOL_SOCKET, SO_ERROR, [0], [4]) = 0 EDIT2 I also tested pcap.h by calling to pcap_next myself: // Modified from: #include <pcap.h> #include <stdio.h> int main(int argc, char *argv[]) { pcap_t *handle; /* Session handle */ char *dev; /* The device to sniff on */ char errbuf[PCAP_ERRBUF_SIZE]; /* Error string */", "eth0", errbuf); return(2); } while (1) { /* Grab a packet */ packet = pcap_next(handle, &header); /* Print its length */ printf("Jacked a packet with length of [%d]\n", header.len); /* Print contents */ printf("\tPacket: %s\n", packet); /* And close the session */ } pcap_close(handle); return(0); } To compile, write it to test_sniff.c and run: gcc test_sniff.c -o test_sniff -lpcap And I was able to capture packets successfully. So I don’t really know where the problem is… Other info to reproduce behaviour - Docker version: Docker version 1.5.0, build a8a31ef - Docker image is the Docker default Ubuntu - Python2.7 2 Solutions collect form web for “Cannot trace error in python pcapy wrapper” pcapy doesn’t use Python socket module. It won’t raise socket.timeout which is raised if timeout has been enabled by previous socket.settimeout call. socket.settimeout is used to set a socket into blocking, non-blocking or timeout state. In pcapy, the timeout argument of open_live is passed to poll syscall at least in Linux, should differ by OS where poll is not available. Reader.next call raises PcapError if there’s no packet to return because it hasn’t captured any packets yet. It’s not an error, just an indication like StopIteration. It can be ignored and Reader.next has to be called again. Reader.loop won’t return until it has at least one packet to return or an error occurs. The following code captures 10 packets and exits. from pcapy import open_live, findalldevs, PcapError p = open_live("eth0", 1024, False, 100) dumper = p.dump_open("test.pcap") devices = findalldevs() print dumper, devices count=0 while True: try: packet = p.next() except PcapError: continue else: print packet count += 1 if count == 10: break the answer is pretty simple: p.next() will throw on timeout your timeout is 100ms (last parameter of open_live) so your except should handle the timeout case and you may want to increase the timeout time or set it to 0 for infinite edit: you simply expected socket.timeout but PcapError is thrown instead. socket.timeout is the exception thrown by the socket code in the python lib so it is python specific. It is getting wrapped up (maybe just with new versions of pcapy) or it jsut stands for a different kind of timeout (TCP socket related) see example pcapy code: example
http://dockerdaily.com/cannot-trace-error-in-python-pcapy-wrapper/
CC-MAIN-2018-34
refinedweb
781
55.64
Hi folks, I’m pretty new to Arduino and I’m trying to figure out how to get this display to work. I have the latest LiquidCrystal_I2C library installed and I’m trying to run the included Hello World program, but no characters are showing up on my display (this guy). I’m just getting two rows of white blocks and two blank rows. The backlight is on and I’ve tried adjusting the contrast. Code is as follows: #include <Wire.h> #include <LiquidCrystal_I2C.h> LiquidCrystal_I2C lcd(0x20,16,2); // set the LCD address to 0x20 for a 16 chars and 2 line display void setup() { lcd.init(); // initialize the lcd // Print a message to the LCD. lcd.backlight(); lcd.print("Hello, world!"); } void loop() { } No errors upon compiling or uploading. I’m pretty sure I’ve got the display wired up correctly, with 10k pull up resistors for both SDA and SCL to 5V. SDA is going to Analog 4, SCL is going to Analog 5. This is on an Arduino Uno SMD, OSX 10.6. Any ideas? Thanks!
https://forum.arduino.cc/t/nothing-showing-up-on-i2c-lcd/117852
CC-MAIN-2021-31
refinedweb
180
74.59
IS TENCENT one of the world’s greatest internet firms? There are grounds for scepticism. The Chinese gaming and social-media firm started in the same way many local internet firms have: by copying Western success. QQ, its instant-messaging service, was a clone of ICQ, an Israeli invention acquired by AOL of America. And unlike global internet giants such as Google and Twitter, Tencent still makes its money in its protected home market. Yet the Chinese firm’s stockmarket valuation briefly crossed the $100 billion mark this week for the first time. Given that the valuation of Facebook, the world’s leading social-media firm, itself crossed that threshold only a few weeks ago, it is reasonable to wonder whether Tencent is worth so much. However, Tencent now has bigger revenues and profits than Facebook. In the first half of this year Tencent enjoyed revenues of $4.5 billion and gross profits of $2.5 billion, whereas Facebook saw revenues of $3.3 billion and gross profits of $935m. The Chinese firm’s market value reflects the phenomenal rise in its share price. A study out this week from the Boston Consulting Group found that Tencent had the highest shareholder total return (share-price appreciation plus dividends) of any large firm globally from 2008 to 2012—topping Amazon and even Apple. Tencent has created a better business model than its Western peers. Many internet firms build a customer base by giving things away, be they search results or social-networking tools. They then seek to monetise their users, usually turning to online advertising. Google is a glorious example. Other firms try to make e-commerce work. But as the case of revenue-rich but profit-poor Amazon suggests, this can also be a hard slog. Tencent does give its services away: QQ is used by 800m people, and its WeChat social-networking app (which initially resembled America’s WhatsApp) has several hundred million users. What makes it different from Western rivals is the way it uses these to peddle online games and other revenue-raising offerings. Once users are hooked on a popular game, Tencent then persuades them to pay for “value-added services” such as fancy weapons, snazzy costumes for their avatars and online VIP rooms. Whereas its peers are still making most of their money from advertising, Fathom China, a research firm, reckons Tencent gets 80% of its revenues from such kit (see chart). This year China has overtaken America to become the world’s biggest e-commerce market, in terms of sales. It is also now the biggest market for smartphones. This means it may soon have the world’s dominant market in “m-commerce”, purchases on mobile devices. Tencent’s main rivals in Chinese m-commerce are Baidu, which dominates search on desktop computers (helped by the government’s suppression of Google) and Alibaba, an e-commerce giant now preparing for a huge share offering. All three have gone on acquisition sprees, in an attempt to lead the market. The big worry for investors is the cost of this arms race. Alibaba recently invested $300m in AutoNavi, an online-mapping firm, and nearly $600m in Sina Weibo, China’s equivalent of Twitter. Baidu has been even more ambitious, spending $1.85 billion to buy 91 Wireless, the country’s biggest third-party store for smartphone apps, and $370m for PPS, an online-video firm. Tencent may have an edge over its two rivals in m-commerce because of the wild popularity of WeChat, which is used on mobile phones. But to ensure it stays in the race, it is also spending heavily. On September 16th it said it will spend $448m to acquire a big stake in Sogou, an online-search firm; it plans to merge its own flagging search engine (aptly named Soso) into the venture. It had previously invested in Didi Dache, China’s largest taxi-hailing app, and is rumoured to be interested in online travel and dating firms too. The three Goliaths are buying up innovative firms because they are too big and bureaucratic to create things themselves, mutter some entrepreneurs (presumably not those being bought out handsomely). A more pressing worry for Tencent’s shareholders is that its lavish spending, on top of heavy investment in improving its unimpressive e-commerce offerings, will eat into profits. Worse, the m-commerce arms race risks distracting it from gaming and value-added services, the cash cows that are paying for everything else. A $100 billion valuation might then seem too rich. Excerpts from the print edition & blogs » Editor's Highlights, The World This Week, and more »
http://www.economist.com/news/business/21586557-chinese-internet-firm-finds-better-way-make-money-tencents-worth
CC-MAIN-2014-41
refinedweb
774
62.27
There are three ways to read leads: TSV download, bulk-read, and with Webhooks. You can read leads or realtime updates by: ad_id, campaign_id, as long as you have at least advertiser level permissions on the ad account associated with the lead ad. manage_pagesand ads_managementscope. To retrieve lead information once a lead ID is received via Webhooks you also need to request the Lead Retrieval API leads_retrieval permission as well as manage_pages permission and submit your app to App Review. We will stop sending data collected in Lead Ads forms via Webhooks to apps in Dev Mode. We begin enforcing this change on February 1, 2019. You can manage user rights with Page roles. You can directly query a specific lead generation form. Note the field is labeled leadgen_export_csv_url although the only supported format is TSV. use FacebookAds\Object\LeadgenForm; $form = new LeadgenForm(<FORM_ID>); $form->read(); from facebookads.adobjects.leadgenform import LeadgenForm form = LeadgenForm(<LEADGEN_FORM_ID>) form.remote_read() curl -G \ -d 'access_token=<ACCESS_TOKEN>' \<FORM_ID> Response: { "id": "<LEAD_GEN_FORM_ID>", "leadgen_export_csv_url": "<FORM_ID>", "locale": "en_US", "name": "My Form", "status": "ACTIVE" } Optionally you can filter the URL response to download leads for a specified date range. Use from_date and to_date in a POSIX or UNIX time format, expressing the number of seconds since epoch. For example, to download leads for the time period starting 2016-01-13 18:20:31 UTC and ending 2016-01-14 18:20:31 UTC:<FORM_ID>&type=form&from_date=1482698431&to_date=1482784831 Note: from_datenot set, or is a value less than the form creation time, the form creation time is used. to_datenot set, or is a timestamp greater than the present time, current time is used. If any entries lack Ad IDs or Adgroup IDs in the TSV, it may be due to the following reasons: is_organicin the TSV displays 1 in this case. Otherwise the value is 0. The leads exists on both ad group and form nodes. This returns all data associated with their respective objects. Because a form can be re-used for many ads, your form could contain far more leads than an ad using it. To read in-bulk by ad: use FacebookAds\Object\Ad; $ad = new Ad(<AD_ID>); $leads = $ad->getLeads(); from facebookads.adobjects.ad import Ad ad = Ad(<AD_ID>) leads = ad.get_leads() APINodeList<Lead> leads = new Ad(<AD_ID>, context).getLeads() .execute(); curl -G \ -d 'access_token=<ACCESS_TOKEN>' \<AD_ID>/leads To read by form:" ] }, ], ... } ], "paging": { "cursors": { "before": "OTc2Nz3M8MTgyMzU1NDMy", "after": "OTcxNjcyOTg8ANTI4NzE4" } } } Store locator question is not any different than any other question. A store locator question will also have a field ID that's going to mapped against them during the form creation. They are also going to be sent similarly as other questions. The value passed will come from the Store Number of the selected location. For example let's say you have a store locator question with selected_dealer as the field ID. To fetch the leads in bulk you can call:" ] }, { "name": "selected_dealer", "values": [ "99213450" ] } ], ... } ], "paging": { "cursors": { "before": "OTc2Nz3M8MTgyMzU1NDMy", "after": "OTcxNjcyOTg8ANTI4NzE4" } } } The field_data does not contain the responses to optional custom disclaimer checkboxes that the user would have filled out. To retrieve the responses, you can use the custom_disclaimer_responses field. curl \ -F "access_token=<ACCESS_TOKEN>" \ "<API_VERSION>/<LEAD_ID>?fields=custom_disclaimer_responses" Response: { "custom_disclaimer_responses": [ { "checkbox_key": "optional_1", "is_checked": "1" }, { "checkbox_key": "optional_2", "is_checked": "" } ], "id": "1231231231" } This example filters leads based on timestamps. Timestamps should be Unix timestamp. use FacebookAds\Object\Ad; use FacebookAds\Object\Fields\AdReportRunFields; $ad = new Ad(<AD_ID>); $time_from = (new \DateTime("-1 week"))->getTimestamp(); $leads = $ad->getLeads(array(), array( AdReportRunFields::FILTERING => array( array( 'field' => 'time_created', 'operator' => 'GREATER_THAN', 'value' => $time_from, ), ), )); curl -G \ --data-urlencode 'filtering=[ { "field": "time_created", "operator": "GREATER_THAN", "value": 1516682744 } ]' \ -d 'access_token=<ACCESS_TOKEN>' \<AD_ID>/leads If the form has customized field IDs, the fields and values returned will be the specified fields and values. Get real time updates when leads are filled out. See Lead Ads with Webhooks, Video. Many CRMs provide real time updates to migrate Lead ads data into the CRMs. See Available CRM Integration. The ping for real time updates is structured as follows. Read more at Real Time Updates, Blog. Multiple changes can come in through the ping in the changes array. array( "object" => "page", "entry" => array( "0" => array( "id" => 153125381133, "time" => 1438292065, "changes" => array( "0" => array( "field" => "leadgen", "value" => array( "leadgen_id" => 123123123123, "page_id" => 123123123, "form_id" => 12312312312, "adgroup_id" => 12312312312, "ad_id" => 12312312312, "created_time" => 1440120384 ) ), "1" => array( "field" => "leadgen", "value" => array( "leadgen_id" => 123123123124, "page_id" => 123123123, "form_id" => 12312312312, "adgroup_id" => 12312312312, "ad_id" => 12312312312, "created_time" => 1440120384 ) ) ) ) ) ) You can use leadgen_id to retrieve data associated with the lead: The response: { "created_time": "2015-02-28T08:49:14+0000", "id": "<LEAD_ID>", "ad_id": "<AD_ID>", "form_id": "<FORM_ID>", "field_data": [{ "name": "car_make", "values": [ "Honda" ] }, { "name": "full_name", "values": [ "Joe Example" ] }, { "name": "email", "values": [ "joe@example.com" ] }, { "name": "selected_dealer", "values": [ "99213450" ] }], ... } To subcribe to leadgen event, your server should respond with HTTP GET requests as described in Receiving API Updates in Real Time with Webhooks. After your callback URL is set up, subscribe to the leadgen webhook in your Apps's Dashboard or through an API call: To subscribe through the API you need an app access token, not a user access token: curl \ -F "object=page" \ -F "callback_url=" \ -F "fields=leadgen" \ -F "verify_token=abc123" \ -F "access_token=<APP_ACCESS_TOKEN>" \ "<API_VERSION>/<APP_ID>/subscriptions" Generate a single, long-lived page token to continuously fetch data without worrying about it expiration: The response: { "data": [ { "access_token": "[REDACTED]", "category": "Pet", "name": "Puppy", "id": "153125381133", "perms": [ "ADMINISTER", "EDIT_PROFILE", "CREATE_CONTENT", "MODERATE_CONTENT", "CREATE_ADS", "BASIC_ADMIN" ] }, ] } This long-lived page token has no expiration date and you can hard-code it in simple RTU integrations to get leads data. With the access_token for the page you need to subscribe, make the call below to authenticate an app for your page. You need to have at least MODERATE_CONTENT permission to the page in order to perform this action..
https://developers.facebook.com/docs/marketing-api/guides/lead-ads/retrieving/
CC-MAIN-2018-39
refinedweb
957
52.9
Windows. New Templates Before looking at the templates, first ask yourself what real-world problems you’re going to solve by upgrading. Game developers, who primarily work with XNA, might want to switch to Direct3D because XNA is no longer actively being developed by Microsoft. You still have the option to create an XNA app, but it won’t take advantage of the new features in Windows Phone 8. Also, native Web developers finally have a chance to get in on the action, as Internet Explorer 10 is preinstalled on the phone. But apart from game and Web development, Windows Phone 8 contains a plethora of new features for XAML/C# and Visual Basic developers. Here are the new templates included in the Windows Phone 8 SDK: - Windows Phone XAML and Direct3D App: This is a project for creating a Windows Phone managed app with native components. Upon first launch, you’ll notice that it comes with two projects: a Windows Phone 8 project and a Windows Runtime (WinRT) Component in C++. - Windows Phone HTML5 App: This is a project for creating a Windows Phone app that primarily uses HTML content. This template is often confused with the Windows Library for JavaScript (WinJS) version of Windows Phone, which it is not. It’s simply using a WebBrowser control to display HTML5 content. - Windows Phone Unit Test: This project contains unit tests that can be used to test Windows Phone apps. This template is added after you install the Visual Studio Update 2 RTM release. Upgrading Your Existing Project Sure, the new project templates help new app development, but what about an existing XAML-based app already built with Windows Phone 7.1? The good news is that you can upgrade such apps to Windows Phone 8 by right-clicking the Windows Phone 7.1 project in Visual Studio 2012 and selecting Upgrade to Windows Phone 8. You’ll receive a prompt that cautions you that this upgrade can’t be undone and doesn’t update any referenced projects. You’ll want to make sure that your app is backed up before proceeding. You can also upgrade to Windows Phone 8 by selecting Project Properties, clicking on the Application page, selecting Windows Phone OS 8 from the dropdown list and saving your changes. Also, if you still have a Windows Phone 7 project lying around, you’ll be prompted to upgrade it to Windows Phone 7.1 before you can upgrade it to Windows Phone 8. Again, I recommend you back up your project before proceeding. After your app has been upgraded to Windows Phone 8, you can use the new tooling and SDK features. Now I’ll look at all the new emulator options found in Windows Phone 8. New Emulator Options In Windows Phone 7.1, you can deploy to only two emulator types with the target screen size of 480x800 (WVGA). The only difference in the emulator images is the amount of RAM (512MB or 256MB). Windows Phone 8 has added two new screen sizes: 768x1280 (WXGA) and 720x1280 (720p). You also have the option of downloading the Windows Phone SDK Update for Windows Phone 7.8 (found at bit.ly/10pauq4) to add additional emulators to test how your apps will run on Windows Phone 7.8 devices. Because the Windows Phone XAML and XNA app template targets Windows Phone OS 7.1, you can still test your app on a Windows Phone 8 emulator. You can see a list of all the old and new emulators available in Figure 1. Figure 1 Emulator Options in Windows Phone 8 .png) With the various emulators available, you no longer have to be dependent on having physical hardware to see your app running on the numerous Windows Phone 7 and 8 devices out there. The new Windows Phone 8 emulators are real virtual machines (VMs) and are one of the best improvements made to the SDK. Note: You’ll need Hyper-V, which is only in Windows 8 Pro or Enterprise, for the new emulators. For more details, see the Windows Phone Dev Center page, “System requirements for Windows Phone Emulator,” at bit.ly/QWhAA2. Also, please keep in mind that with the powerful processors in modern PCs, you should test your app on a physical device before submitting it to the marketplace to gauge real-world performance. Now that you’ve seen how the new templates can benefit different sets of developers, and looked at the new emulator options and how easy it is to upgrade your existing project to Windows Phone 8, it’s time to start tackling the other important issues that Windows Phone 7 developers have faced. The Simulation Dashboard When a Windows Phone app is running, a variety of things can interrupt the UX: slow response, having no Internet access, incoming call reminders, the app failing to restore its state after the phone has been locked, and more. In Windows Phone 7.1, you’d probably have to write code that simulated these situations; now you can handle them with the brand-new Simulation Dashboard, as shown in Figure 2. Figure 2 The Simulation Dashboard Included in the Windows Phone 8 SDK .png) You can access this menu by selecting Tools | Simulation Dashboard from Visual Studio 2012. Using the Simulation Dashboard, you can test in advance of your app going to the marketplace just how well it will perform under different situations. By enabling Network Simulation and selecting a network speed, you can test various cellular data networks as well as Wi-Fi or scenarios where no networks are available. Particularly interesting are the Signal Strength options, which affect the packet loss rate and network latency. With these options at your fingertips, you should be able to create a Windows Phone 8 app that performs well in a variety of scenarios. Any app that targets Windows Phone 7.1 or 8 is deactivated once the lock screen has been enabled. It’s then activated again once the device has been unlocked. Inside of the Simulation Dashboard, you have the ability to easily lock or unlock the screen to test how your app handles activation or deactivation. You may optionally press the F12 key to show the lock screen. Finally, you get to use “trigger reminders,” which will simulate an alarm, reminder, phone call, text message or toast notification. Again, you can use these to test how your app handles activation or deactivation. Windows Phone App Analysis While the Simulation Dashboard is helpful in providing real-world scenarios that might happen to a user once your app is running on his phone, it doesn’t help with the performance of your app. This is where you can make use of Windows Phone app analysis, which can be found at the Debug | Start Windows Phone Application Analysis menu. This tool provides app monitoring, which helps evaluate the start time and responsiveness as well as profiling. This helps you evaluate either execution- or memory-related issues in your app. The profiling execution option includes advanced settings that enable you to do things such as visual profiling and code sampling, while the memory options allow you to collect memory allocation stacks and object references. Both of these options will result in a graph displayed in Visual Studio 2012 as well as a time-stamped .sap file added to your project. With the generated graphs, you can drill down into specific start and stop times and see the observation summary that Visual Studio 2012 has generated. The Windows Phone Application Analysis tool is an integral part of your quality assurance process. Store Test Kit After you’ve tested your app under different user scenarios and tested app performance with the help of the Windows Phone Application Analysis kit, you need to test your app to make sure it’s certifiable in the Windows Phone Store. This is a vital step, as 30 minutes now can save you several days of lost time if the app fails something that would’ve been caught by using this kit. The kit can be easily accessed by right-clicking on your app and selecting Open Store Test Kit. Windows Phone 7.1 also included this functionality, but it was called the Marketplace Test Kit. New and improved tests that target Windows Phone 8 have been added. Upon first launch, you might see a message with a blue background at the bottom of your screen saying, “Store test cases have been updated. Would you like to install the updated test cases?” You can select Update and download new or modified tests. This is useful because you always know you’re working with the latest available tests. At the left of your screen are three tabs: Application Details, Automated Tests and Manual Tests. Application Details makes sure the image resources adhere to the guidelines in the Windows Phone Store. This includes the Store Tile as well as the app screenshots for WVGA, WXGA and 720p if your project supports these resolutions. Automated Tests check for XAP package requirements, iconography and screenshots. All you need to do to invoke this feature is click on the Run Test button. The final tab contains manual tests; as of this writing, there are 61 manual tests you can perform. You have to manually indicate whether the test passed or not, but full documentation shows how to do so. Manual tests include those for multiple-device support, app closure, responsiveness and so on. Localization Made Easy With Windows Phone 7, there was a missed opportunity by many app developers in localizing their apps. This was often due to the fact that they had little or no help with translating from one language to another. The recent release of the Multilingual App Toolkit and new project templates solved this problem. The default Windows Phone 8 template guides you through localization with built-in comments in the MainPage.xaml file, and it also structures your app with a helper class and Resources folder. Microsoft added the Multilingual App Toolkit that was originally in Windows 8. Once the Multilingual App Toolkit for Visual Studio 2012 (bit.ly/NgggGU) is installed, it’s as simple as selecting Tools | Enable Multilingual App Toolkit. After the toolkit has been enabled for your project, select Project | Add Translation Languages to see the languages available, as shown in Figure 3. Figure 3 The Translation Languages Dialog Included with the Multilingual App Toolkit .png) You can filter on the language you want and then press the OK button. It will automatically add the proper language files to your Resources folder. One file in particular that you’ll want to pay attention to is the one with the .xlf extension. This is an industry-standard XML Localization Interchange File Format (XLIFF) file that gives you granular control over any pseudo-translation. Double-clicking on it will bring up the Multilingual Editor that will allow you to translate from one language to another by simply clicking the Translate button. You can see an example of this in Figure 4. Figure 4 Translating from One Language to Another .png) In Figure 4, you can see that it automatically translated several words for me. Once a translation is complete, you can sign off or pass it on to a human translator for review. In this example, the only words that needed review were “MEINE TELERIK APP” because the word “Telerik” isn’t in the translation resource. The human translator would realize that Telerik is spelled the same way in German as it is in English, so it can be left as is. You can save this file to get added support for an additional language. An easy way to test this is to change the Application Title in the MainPage.xaml with the following line: Then set the phone language to whatever language you specified. In my example, I selected German, and the app title appeared as “MEINE TELERIK APP.” Taking Advantage of the Shared Core With the release of Windows 8 came a shared core that Windows Phone 8 developers could use. Some of the more notable improvements in the Microsoft .NET Framework 4.5 are async and await support, as well as an easier way to use Isolated Storage. In Windows Phone 7.1, you’d typically write the code shown in Figure 5 to write a file to Isolated Storage. private void WriteFileToIsolatedStorage(string fileName, string fileContent) { using (IsolatedStorageFile isolatedStorageFile = IsolatedStorageFile.GetUserStoreForApplication()) { using (IsolatedStorageFileStream isolatedStorageFileStream = isolatedStorageFile.CreateFile(fileName)) { using (StreamWriter streamWriter = new StreamWriter(isolatedStorageFileStream)) { streamWriter.Write(fileContent); } } } } The code in Figure 5uses the System.IO.IsolatedStorage namespace, not found in Windows 8. Instead, both Windows 8 and Windows Phone 8 can make use of Windows.Storage and the async/await pattern to prevent performance bottlenecks and enhance the overall responsiveness of your app. Here’s an example of how to write the same exact call in Windows Phone 8, taking advantage of the shared core: public async Task WriteFileToIsolatedStorage( string fileName, string fileContent) {(fileContent); await stream.WriteAsync(content, 0, content.Length); } } Another namespace heavily used in Windows 8 is HttpClient. Although the Windows Phone 8 SDK still uses the WebClient class by default, Microsoft has provided the HttpClient class through NuGet. If you simply search for “Microsoft.Net.Http” and install the NuGet package, you can write code such as the following snippet that will work in Windows 8 as well as Windows Phone 8: private async void Button_Click(object sender, RoutedEventArgs e) { var httpClient = new HttpClient(); var request = await httpClient.GetAsync(new Uri( "", UriKind.RelativeOrAbsolute)); var txt = await request.Content.ReadAsStringAsync(); // Do something with txt, such as MessageBox.Show(txt) } Crucial New Features So far I’ve discussed a variety of ways to help make your transition to Windows Phone 8 easier. I’ll now take a look at several new features that your app can’t live without. New Tile Types Windows Phone 7.1 has one tile type called the Flip Tile and one tile size, 173x173, otherwise known as the Medium Tile type. Windows Phone 8 introduces new tile types and sizes: - Flip Tile: This is identical to Windows Phone 7.1 except for the new tile sizes; it flips from the front to the back. - Iconic Tile: This is based largely on the Windows Phone design principles for a modern look. - Cycle Tile: This lets you cycle through up to nine images. A comparison of tile sizes can be found in Figure 6. Figure 6 File Size Comparisons Between the Various Tile Types Tiles can be easily configured through the WMAppManifest.xml file by selecting “Tile Template” and then adding the proper images. You might also set this through codebehind, and a “Flip Tile template for Windows Phone 8” can be found in the Dev Center at bit.ly/10pavKC. Lock Screen and Notifications In Windows Phone 7.1, you could see only notifications such as mail, text messages and phone calls. Now your users have the ability to use your app as a lock screen background image provider and include custom notifications similar to those described earlier. Setting the background image can be as easy as adding an image to your folder with the content type and updating the app manifest file to declare your app as a background provider. Right-click on the WMAppManifest.xml file; choose “Open With” and select XML (Text) Editor; then add this extension: Next, call the code snippet shown in Figure 7. private async void btnLockScreenImage_Click_1( object sender, RoutedEventArgs e) {); } } You’ll notice that you begin by first checking to see if the user has access to change the background. If not, you’ll present a GUI asking for permission, then create a URI with the path to your image and use the Windows.Phone.System.UserProfile.LockScreen namespace to set it. You can also add a notification to show an icon and a count—of messages, calls and so on—in the notification area on the Windows Phone 8 device. For more information on this, see the article, “Lock screen notifications for Windows Phone 8,” at bit.ly/QhyXyR. Speech One of the most exciting new features is speech. Several speech components are included in the Windows Phone 8 SDK: - Text-to-speech (also known as speech synthesis): This allows text to be spoken back to the user through the phone speaker, headphones or Bluetooth connection. - Speech-to-text (also known as speech recognition): This allows your users to speak commands to a phone to accomplish tasks. - Voice commands: These allow your users to speak commands outside of your app by holding down the start button and saying “open” or “start,” followed by your app name, to perform certain tasks. All of this is made possible with the Speech.Synthesis and Speech.Recognition APIs. A simple implementation of text-to-speech can be accomplished with two lines of code in an event handler: Just make sure the async and await operators have been added to the method. Make the Most out of Your Move I’ve discussed everything from the new tooling and templates to some of the new features included in the Windows Phone 8 SDK. I’ve shown how easy it is to implement localization and described the added bonus of a shared code base with Windows 8. You should now be equipped with the knowledge to make the most out of your move from Windows Phone 7 to Windows Phone 8. Michael Crump is a Microsoft MVP, INETA Champion and an author of several .NET Framework e-books. He works at Telerik with a focus on the XAML control suite. You can reach him on Twitter at twitter.com/mbcrump or keep up with his blog by visiting michaelcrump.net. Thanks to the following technical experts for reviewing this article: Jeff Blankenburg (Microsoft) and Lance McCarthy (Telerik) Jeff Blankenburg (Jeffrey.Blankenburg@microsoft.com) is a developer evangelist at Microsoft, co-author of the book, Migrating to Windows Phone (Apress, 2011), and organizer of several tech conferences. Lance McCarthy is a Nokia ambassador and Telerik XAML Support Specialist MSDN Magazine Blog More MSDN Magazine Blog entries > Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/en-us/magazine/dn385709.aspx
CC-MAIN-2015-35
refinedweb
3,053
62.58
14 Image Core This library is the core part of the 2htdp/image library that DrRacket links into the namespace of all languages that it runs. This ensures that minimal support for these images are the same in all languages, specifically including support for printing the images and constructing the core data structures making up an image. Not all image? values have special caching capabilities; in those cases, this returns a copy of the value if it is a snip%; otherwise it returns the value itself (if it isn’t a snip%). Ordinarily, the image’s bitmap cache is computed the first time the image is actually rendered. This test is intended to be cheaper than a full equality comparison. It is also used by the implementation of equal? on images to short-circuit the full check. (The full check draws the two images and then compares the resulting bitmaps.) Not all image? values are snip%s, but those that are use this as their snip-class%.
https://docs.racket-lang.org/mrlib/Image_Core.html
CC-MAIN-2018-13
refinedweb
167
65.12
Efficient For BlackBerry device users to enjoy using your application on a daily basis and recommend it to others, your application should be as efficient as possible. The features and charactaristics outlined in the other chapters - highly contextualized, always on, integrated and proactive - need to be implemented efficiently, or else your app can become less enjoyable to use. An efficient application takes into consideration the limited resources of a mobile device, such as processor power, battery life and memory, and uses these resources as effectively as possible. An efficient application doesn't consume battery power too quickly, open unnecessary network connections that might increase a user's wireless data charges, or make the UI on the BlackBerry device sluggish or unresponsive. Remember that users don't usually report or provide feedback on performance issues with applications; they typically delete these applications from their devices. For this reason, it's important to focus on creating efficient apps. What are the benefits of an efficient app? - Increase battery life. A longer battery life means that users can spend more time using your app and less time charging their devices or searching for apps to remove to improve performance (including yours!). - Improve response times. Efficient apps can respond quickly to a user's input, which means less waiting on the user's part and more potential for productivity. - Reduce data costs. Users don't want to see an increase on their bill from their wireless service provider because your app is transferring data unnecessarily. If your app transfers very little data while it's in use, your users are more likely to continue to use your app and recommend it to others. - Increase the "stickiness" of your app. A "sticky" app is one that a user sticks with, that engages the user, that the user comes back to over and over again. By focusing on efficiency when you design your app, you can improve your user's experience and help increase the stickiness of your app. Approaches to efficient application design The following approaches can help you make your app efficient. Responding to the status of the device The Highly contextualized page describes how to consider the contexts (such as battery, connectivity, and device characteristics) that are associated with BlackBerry devices. When you design your app, the monitoring of certain contexts, or states, of the device can help your app be more efficient while also providing an exceptional user experience. Your app can detect states such as low battery level, poor wireless coverage, and Wi-Fi connectivity to increase efficiency. You can change the behavior of your app in response to state changes. For example, a Bookshelf app communicates with a web service that provides information about books that have been released. If Bookshelf detects that the device is connected to a Wi-Fi network, Bookshelf can use the bandwidth that the Wi-Fi connection provides to request more detailed information about a book, such as cover art, audio clips, and extended synopses. Using listeners One of the most effective ways to respond to state changes on a device is by using listeners. You can register your app as a listener for different types of events. You might want your app to close automatically if the battery level reaches a certain point, or to provide different options if the device is connected to a computer using a USB connection. You can also use listeners to monitor global events on the device and use these global events to communicate between processes. To be enabled as a listener, your app must implement one or more listener interfaces (for example, CoverageStatusListener or GlobalEventListener). The following table describes some common listener interfaces and the events that they listen for. If you choose to use listeners in your app, it's important to remember to deregister the listeners that you use when you are done with them. If you don't deregister a listener, a reference to the listener remains in memory on the device, and your app is not properly terminated. You can create your own listeners if the ones that are provided in the BlackBerry APIs don't suit your needs. This approach might be a good way to improve the efficiency of your app. A listener that you create yourself might be more focused or specialized for your app's functionality than a generic listener. For example, a listener that is designed for the Bookshelf app might listen specifically for events that are generated by the Bookshelf web service, such as when new information about a book's location is available. Checking the status of the device Your app can check the status of the device before trying to perform a particular action. For example, you can invoke DeviceInfo.getBatteryLevel() or DeviceInfo.getBatteryStatus() before you start your app, to determine if there is sufficient battery power remaining to run your app or to perform a lengthy operation. You can also invoke DeviceInfo.getIdleTime() to determine how long the device has been idle, which is useful when you want your app to perform complex or time-consuming operations when the user is not using the device. If your app must open a network connection, your app should check the network coverage that is available. If coverage is poor, your app uses more battery power to use the wireless transceiver on the device, and a more efficient approach might be to wait until coverage improves to open the connection. Alternatively, you can design your app to retrieve a smaller, lower-quality set of data immediately, and then wait until coverage improves or a Wi-Fi connection is available to retrieve the full set of data. You can invoke RadioInfo.getSignalLevel(), CoverageInfo.isCoverageSufficient(), or TransportInfo.hasSufficientCoverage() to help you determine the available network coverage. You should also consider whether an IT policy rule that is set by a BlackBerry Enterprise Server administrator might block a feature of the device that your app is trying to use. For example, if the Disable GPS IT policy rule is applied to the device, your app won't be able to obtain a GPS fix and should not waste resources trying to do so. Code sample: Listening for status changes to the wireless coverage The following code sample demonstrates how you can listen in your app for status changes to the wireless coverage. Your app can respond and stop transferring data or communicating with a server or web service. public class MyApplication extends UiApplication implements CoverageStatusListener { // class constructor public MyApplication() { // ... } public void coverageStatusChanged(int newCoverage) { // respond to the change in coverage } // ... } Code sample: Setting a listener when the backlight changes The following code sample demonstrates how you can set a listener in your app when the backlight changes on the device. If the backlight is off, your app doesn't need to respond to any events, and you don't need to set a listener for the screen. If the backlight is on, your app can resume listening for events. // override backlightStateChange() of the SystemListener2 interface public void backlightStateChange(boolean on) { if (screen != null) { if (on) { // set the Screen object of your app to listen for events from // your app MyApplication.getInstance().setListener(screen); } else { MyApplication.getInstance().setListener(null); } } } Code sample: Detecting a low battery level on a device that is not charging The following code sample demonstrates how you can detect a low battery level on a device that is not charging. If this situation occurs, you might want to close your app automatically or stop using GPS. private boolean batteryLowNotCharging() { int batteryStatus = DeviceInfo.getBatteryStatus(); if ((batteryStatus & DeviceInfo.BSTAT_LOW) != 0) { if ((batteryStatus & DeviceInfo.BSTAT_CHARGING) == 0) { return true; } } return false; } Find out more For more information about device status and network connections, see Networking and connectivity. Eliminating unnecessary processing on the device The Always on page describes how to keep your app running in the background on a BlackBerry device, as well as how to schedule processes to run periodically. When you implement these approaches correctly, you build efficiency into your app by choosing when and how often to perform processing. You can help make your app efficient by minimizing the amount of processing that you need to do in your app, and eliminating any processing that might not be necessary at a given time. For example, the Bookshelf app doesn't need to check the Bookshelf web service continuously for updates to the information about a book. A book's information doesn't change very often, so it might be more efficient to use a push solution to send the updates to Bookshelf when the information is updated. Running in the background You should only run your app in the background if it makes sense to do so. Running your app in the background lets your app continue to process data and provide updates to the BlackBerry device user, even when the user is using other apps on the device. However, running in the background can consume valuable system resources and battery power. There are alternatives to running your app in the background: - You can push important information or updates to your app from an external source, such as a web server. Your app doesn't need to be running in the background to receive push notifications. When push data arrives on the device, if your app is registered as a push handler, your app processes the data automatically. - You can schedule processes to run periodically. Your app can perform resource-intensive operations at specific times so that your app doesn't run continuously. When your app must perform tasks in the background, you can save battery power by making sure that your app does its processing all at once, instead of spread out over a long period of time. The device can enter a low-power mode when processing is complete to minimize power consumption. You should also avoid opening many separate network connections to transfer small amounts of data. Instead, you should design your app to wait until there is a significant amount of data to transfer, and then open a single connection and transfer all of the data at once. Detecting when your app is not displayed If your app includes any animations or contains code that repaints the screen at regular intervals, you can save a substantial amount of battery power by not redrawing UI elements if your app is running but not displayed, or if your app is not in use. You can use the following methods to determine if your app is in use: - You can use methods to stop animating or repainting the screen when the screen is not visible, and resume when the screen is visible again. You can override Screen.onExposed(), which is invoked when your app's screen is on top of the display stack and displayed to the user. You can override Screen.onObscured(), which is invoked when your app's screen is not displayed to the user or is obscured by another screen. - To determine if your app is in the foreground, you can invoke Application.isForeground(). If this method returns false, your app is not visible to the user. - To determine if the backlight on the device is turned on, you can invoke Backlight.isEnabled(). If this method returns false, no UI elements are visible to the user. The backlight turns off automatically after a period of inactivity. Your app should not keep the backlight on unless the device is connected to a charger, or if screen visibility is critical to the app. - To determine how long the device has been idle, you can invoke Device.getIdleTime(). To prevent any potential UI lag or latency, your app should perform processor-intensive operations when the device is idle. Find out more For more information about pushing content to devices, see the Push Service SDK developer documentation. Using location services effectively The Highly contextualized page describes how location is an important context to consider when you're designing your app. You can use GPS technology on the BlackBerry device to add this context to your app, but you should remember that obtaining GPS fixes can consume a lot of battery power. You can help make your app efficient by making effective use of location services on the device, such as GPS and geolocation. For example, the Bookshelf app should notify BlackBerry device users when they are in the area of a released book. Because maintaining a GPS fix at all times can consume battery power, Bookshelf might obtain a GPS fix only periodically, with an option for more frequent fixes that the user can select. Bookshelf might also use the geolocation service to obtain the general position of the user until a more precise GPS fix is calculated. Obtaining GPS fixes that are timely and necessary In general, the most battery-intensive operation done by GPS is a full scan of the sky. A full scan of the sky involves locating and connecting to GPS satellites and obtaining a GPS fix using the information from those satellites. An app that performs frequent full scans can drain the battery very quickly. To avoid this situation, your app should perform a full scan to obtain GPS fixes only as often as required to provide a good user experience. For example, your app might not need to maintain the user's exact position at all times, but instead can provide a good user experience if the app obtains a fix every ten minutes. If your app needs to track the user's position more precisely, you can decrease the time between fixes, at the cost of increased battery usage. If your app cannot obtain a GPS fix, you should consider carefully whether to retry the request. For example, if your app hasn't been able to obtain a fix for the last 30 minutes, it might be because the user is indoors, and your app shouldn't retry the request. Your app might also reduce the frequency of fix requests until a fix is successful. Your app should use assisted GPS mode sparingly. Assisted GPS mode obtains a GPS fix by communicating with the wireless service provider to retrieve satellite information. This method provides a fix very quickly and consumes less battery power than other GPS modes, but it relies on the wireless service provider and increases their costs, as well as any network costs that are associated with communicating with the wireless service provider. You should design your app to use assisted GPS mode to obtain an initial fix before switching to autonomous GPS mode. If you want your app to obtain a fix using a particular GPS mode (for example, assisted GPS or autonomous GPS), your app should check to see if that mode is available by invoking GPSInfo.isGPSModeAvailable(). Using the geolocation service to obtain an approximate location As an alternative to using GPS on a device, you can use the geolocation service to retrieve the location of the device. The geolocation service provides an approximate location (within 200 meters to 5 kilometers) and includes the latitude, longitude, and horizontal accuracy based on the positioning of cell towers and WLAN access points. If your app doesn't require the user's exact position, the geolocation service can be an excellent approach and can save substantial amounts of battery power. The geolocation service can also function indoors, making it feasible to use in apps that don't always have access to GPS satellites (for example, apps that recommend local points of interest). Code sample: Using the geolocation service to obtain an approximate location The following code sample demonstrates how to use the geolocation service to retrieve the approximate location of the device. // specify the geolocation mode BlackBerryCriteria myBlackBerryCriteria = new BlackBerryCriteria(LocationInfo.GEOLOCATION_MODE); // retrieve a location provider BlackBerryLocationProvider myBlackBerryProvider = (BlackBerryLocationProvider)LocationProvider.getInstance(myBlackBerryCriteria); // request a single geolocation fix BlackBerryLocation myBlackBerryLoc = myBlackBerryProvider.getLocation(timeout); // retrieve the geolocation of the device double lat = myBlackBerryLoc.getQualifiedCoordinates().getLatitude(); double lng = myBlackBerryLoc.getQualifiedCoordinates().getLongitude(); double lat = myBlackBerryLoc.getQualifiedCoordinates().getAltitude(); Find out more For more information about GPS and location-based services, visit the following resources: - Location-Based Services (LBS) category overview, in the API reference for the BlackBerry Java SDK. - Location-based services. Using the Profiler tool You can use the Profiler tool to analyze and optimize your code for efficiency. The Profiler tool is available in the BlackBerry Java Development Environment or the BlackBerry Java Plug-in. You can use the Profiler tool to identify what threads are running at any point in the execution of your app, how long methods take to run, how many objects your app creates, and so on. You can use this tool to consider code-level efficiency when you're designing your app. You can make sure that the methods in your app run as quickly and efficiently as possible, don't create too many objects, and don't commit too many objects to memory on the device. For example, if the Bookshelf app creates a new object every time the app receives new location information for a book from the Bookshelf web service, Bookshelf probably isn't using objects as efficiently as it could. It might be more efficient to update the same object each time with new information. When you're using the Profiler tool to analyze your code, the values that you obtain (for example, execution time in clock ticks or execution time in milliseconds) are most useful when considered relative to each other. For example, the number of clock ticks that one method takes to run isn't necessarily relevant, because this number can vary depending on factors such as device model, number of other apps running simultaneously, whether the method is running on a device or on a simulator, and so on. Instead, the comparison of the number of clock ticks that two methods that perform the same function take is more useful, and you can use this data to determine which method is more efficient. In general, you shouldn't try to reduce profiler metrics individually. Instead, you should use these metrics to help identify areas of inefficiency in your app. You can also use profiler metrics to identify bottlenecks in your app's execution, and determine the best places to try to optimize your code. You should try to optimize the methods in your app that run the most frequently, such as implementations of methods that open and manage network connections and methods that draw UI elements (for example, Screen.paint(), Screen.paintBackground(), and FullScreen.sublayout()). The following table lists some of the metrics that you can monitor by using the Profiler tool. Find out more For more information about the Profiler tool, see the BlackBerry Java Plug-in for Eclipse Development Guide. Storing data on the device The Social and connected Social and connected page describes how to capture audio and video in your app by using the javax.microedition.media.Player class. In the code samples there, the audio and video files are stored in internal storage on the BlackBerry device by specifying a record location of... You can choose from several data storage options to store data or files that your app creates. Each storage option has advantages and disadvantages, and you should carefully consider which option to use based on the type of data that your app needs to store. You can help make your app efficient by choosing the most appropriate storage location and storage option for your app's data. For example, the Bookshelf app needs to store information about a book that you have released, such as comments about the book or the book's current location. Storing this information on the device is probably more efficient than querying the Bookshelf web service whenever the user requests this information. If the information consists of relational data, you might choose to store the information in a SQLite database. If the information needs to be shared between apps on the device, you might choose to store the information in the runtime store. Understanding data storage options The following table describes the main data storage options that you can use to store information that your app creates. Choosing a data storage option When you choose a data storage option to use in your app, you should keep in mind the following considerations: - Memory on mobile devices can be very limited, so you should consider not storing all of your data on the device. BlackBerry devices are frequently connected to wireless networks so that your app can access data when needed. In many cases, the best approach is to store data across device resets only when the data is frequently accessed. - The file system and MIDP RMS are standards-based approaches, and the persistent store and runtime store are specific to BlackBerry devices. If you want your app to run on other Java ME compatible devices, you should consider a standards-based approach. - The file system is typically the most efficient storage location for large, read-only files such as videos or large graphics. - For storing data other than large, read-only files, SQLite provides a scalable data storage option. - If you use the persistent store in your app, you should use the grouping mechanism that is provided in the net.rim.device.api.system.ObjectGroup class to commit groups of objects to memory more efficiently. - If you use the runtime store in your app, make sure that you remove objects that your app adds to the runtime store when they are no longer required. Failing to remove objects from the runtime store is a common cause of memory leaks in BlackBerry apps. - The BlackBerry Java Virtual Machine includes a garbage collection tool, which runs periodically to remove unreferenced objects and weakly referenced objects from memory. To take advantage of this functionality in your app, you should release objects by setting their references to null after your app is done with them. Code sample: Creating a file in internal storage The following code sample demonstrates how to create a file in internal storage on the device. import net.rim.device.api.system.Application; import javax.microedition.io.*; import javax.microedition.io.file.*; import java.io.IOException; public class CreateFileApp extends Application { public static void main(String[] args) { CreateFileApp app = new CreateFileApp(); app.setAcceptEvents(false); try { FileConnection fc = (FileConnection)Connector.open(""); // If no exception is thrown, then the URI is valid, but the file may or // may not exist. if (!fc.exists()) { // create the file if it doesn't exist fc.create(); } fc.close(); } catch (IOException ioe) { System.out.println(ioe.getMessage()); { } } Code sample: Creating a SQLite database The following code sample demonstrates how to create a SQLite database in the root folder of a media card. import net.rim.device.api.system.Application; import net.rim.device.api.database.*; import net.rim.device.api.io.*; public class CreateDatabase extends Application { public static void main(String[] args) { CreateDatabase app = new CreateDatabase(); try { URI strURI = URI.create(""); DatabaseFactory.create(strURI); } catch (Exception e) { System.out.println(e.getMessage()); } } } Find out more For more information about data storage, see the Data storage overview.
https://developer.blackberry.com/bbos/java/documentation/efficient_1984361_11.html
CC-MAIN-2014-15
refinedweb
3,860
50.77
Python's Mypy: Callables and mills learn the way Mypy s category blockage works with services and turbines. In my remaining two articles I ve described one of the vital techniques Mypy, a kind checker for Python, can support establish skills problems with your cipher. See Introducing Mypy, an Experimental Optional Static Type Checker for Python and Python s Mypy—Advanced Usage . For individuals like me who ve enjoyed activating languages for a long time, Mypy might seem like a step backward. however given the many mission-vital initiatives actuality written in Python, regularly by way of gigantic teams with restricted communique and Python experience, some form of classification checking is an more and more vital gross. or not it s vital to be aware that Python, the accent, isn t altering, and it is rarely becoming statically typed. Mypy is a abstracted application, running backyard Python, typically as part of a continuous affiliation CI device or invoked as a part of a Git accomplish hook. The idea is that Mypy runs before you place your cipher into construction, identifying where the facts does not healthy the annotations you ve gotten fabricated to your variables and function parameters. i go to focal point on a couple of of Mypy s advanced features right here. You may now not come across them very commonly, however even though you do not, it can supply you a higher graphic of the complexities linked to type blockage, and how acutely the Mypy team is thinking about their work, and what assessments deserve to be executed. it will additionally help you take into account more about the techniques individuals do classification blockage, and how to steadiness the splendor, adaptability and expressiveness of dynamic typing with the accurateness and beneath mistakes of changeless typing. Callable types once I inform participants in my Python courses that every thing in Python is an object, they nod their active, clearly thinking, I ve heard this before about other languages. but then I show them that features and courses are both altar, and that they know that Python s proposal of every thing is a bit of greater expansive than endemic. And yes, Python s narrative of. every thing isn t as wide as Smalltalk s. in case you define a function, you might be growing a brand new article, considered one of category characteristic : >>> def foo: ... return,i m foo! >>> classificationfoo in a similar way, in the event you actualize a new classification, you might be adding a new article classification to Python: >>> category Foo: ... move >>> categoryFoo it s a stunning normal archetype in Python to put in writing a feature that, back it runs, defines and runs an inner characteristic. here is often known as a cease , and it has a few diverse uses. as an instance, which you could address: def foox: def bary: acknowledgment . In bar, x * y = x*y acknowledgment bar then you definitely can run: b = foo bookb and you may get right here achievement: In bar, * = I do not need to dwell on how all of this works, together with inner capabilities and Python s scoping rules. I do, although, need to ask the question. how are you able to use Mypy to check all of this? You could comment each x and y as int. and you ll annotate the acknowledgment price from bar as a cord. however how can you comment the return price from foo? seeing that, as proven above, features are of classification feature, perhaps which you could employ that. but function is never truly a diagnosed identify in Python. in its place, you will deserve to consume the typing module, which comes with Python so that you can do this sort of type checking. And in typing, the identify Callable is described for precisely this aim. so you can address: from typing acceptation Callable def foox: int -> Callable: def bary: int -> str: acknowledgment . In bar, x * y = x*y acknowledgment bar b = foo bookb sure adequate, this passes Mypy s assessments. The function foo returns Callable, an outline that includes each services and courses. however, delay a nd. might be you don t most effective need to examine that foo returns a Callable. probably you additionally are looking to make certain that it allotment a feature that takes an int as an argument. To do that, you ll consume square brackets after the note Callable, inserting two points in these brackets. The aboriginal might be a listing during this case, a one-element list of argument kinds. The d factor within the checklist will call the return type from the characteristic. In different phrases, the code now will seem like this: #!usrbinenv python def foox: int -> Callableint, str: def bary: int -> str: return ,In bar, x * y = x*y return bar b = foo bookb turbines With all this speak of callables, you additionally may still trust what occurs with generator services. Python loves iteration and encourages you to employ for loops wherever that you could. in many instances, it s easiest to specific your iterator within the kind of a function, everyday in the Python apple as a architect feature . for instance, that you could actualize a generator characteristic that returns the Fibonacci sequence as follows: def fib: first = second = whereas true: yield aboriginal first, nd = d, first+second then you definitely can get the first Fibonacci numbers as follows: g = fib for i in range: printnextg it is top notch, however what if you need to add Mypy blockage to your fib feature? it will look that you can simply say that the return price is an integer: def fib -> int: aboriginal = d = whereas authentic: yield first first, d = d, first+nd but if you try working this via Mypy, you get a gorgeous ascetic acknowledgment: atfb.py:four: absurdity: The return category of a architect feature may still be. generator or considered one of its supertypes atfb.py:: error: No afflict alternative of,next suits altercation type. int atfb.py:: note: viable afflict alternative: atfb.py:: be aware: def _T nexti: Iterator_T -> _T atfb.py:: be aware: < greater non-matching overload not proven> Whoa! What s happening? neatly, or not it s vital to remember that the influence of operating a architect feature isn t whatever thing you re yielding with each and every generation. fairly, the outcome is a generator article. The generator object, in turn, again yields a particular type with each and every generation. So what you truly need to do is inform Mypy that fib will return a architect, and that with every generation of the architect, you ll get an accumulation. you might feel that you just might do it this way: from accounting acceptation architect def fib -> generatorint: aboriginal = nd = while true: yield first aboriginal, d = nd, first+d but if you try to run Mypy, you get the following: atfb.py:: error: architect expects category arguments, however accustomed It seems that the architect classification can optionally get arguments in square brackets. but if you provide any arguments, you ought to provide three: The class lower back with every iteration—what you constantly suppose about from iterators. The class that the architect will receive, if you invoke the send system on it. The category that might be back back the generator exits altogether. seeing that simplest the primary of those is vital in this application, you are going to move None for each of the other values: from typing import generator def fib -> generatorint, None, None: first = nd = while genuine: crop aboriginal aboriginal, nd = nd, aboriginal+second sure ample, it now passes Mypy s exams. cessation You may think that Mypy isn t as much as the assignment of dealing with advanced accounting complications, but it in reality has been thought out somewhat well. And of path, what I ve shown right here and in my outdated two accessories on Mypy is simply the beginning; the Mypy authors have apparent all forms of issues, from modules mutually referencing each and every others types to aliasing long classification descriptions. if you re thinking of abbreviating up your organization s code, including category blockage by means of Mypy is a good manner to head. A growing variety of groups are including its checks, bit by bit, and are having fun with anything that activating-language advocates have long ignored, specifically that if the computer can check what varieties you are the usage of, your courses really might run more smoothly. elements that you would be able to study more about Mypy here. That site has affidavit, tutorials and alike suggestions for americans the usage of Python who want to introduce mypy by way of feedback as opposed to annotations. which you could read extra about the origins of class annotations in Python, and how to spend them, in PEP Python accessory inspiration , attainable on-line here.
https://whatismyip.pro/documentation/content-generator/date-01-10-2019/Date-01-10-2019-008.php
CC-MAIN-2020-16
refinedweb
1,484
58.72
I have a file that may be in a different place on each user's machine. Is there a way to implement a search for the file? A way that I can pass the file's name and the directory tree to search in? os.walk is the answer, this will find the first match: import os def find(name, path): for root, dirs, files in os.walk(path): if name in files: return os.path.join(root, name) And this will find all matches: def find_all(name, path): result = [] for root, dirs, files in os.walk(path): if name in files: result.append(os.path.join(root, name)) return result And this will match a pattern: import os, fnmatch def find(pattern, path): result = [] for root, dirs, files in os.walk(path): for name in files: if fnmatch.fnmatch(name, pattern): result.append(os.path.join(root, name)) return result find('*.txt', '/path/to/dir') I used a version of os.walk and on a larger directory got times around 3.5 sec. I tried two random solutions with no great improvement, then just did: paths = [line[2:] for line in subprocess.check_output("find . -iname '*.txt'", shell=True).splitlines()] While it's POSIX-only, I got 0.25 sec. From this, I believe it's entirely possible to optimise whole searching a lot in a platform-independent way, but this is where I stopped the research.
https://pythonpedia.com/en/knowledge-base/1724693/find-a-file-in-python
CC-MAIN-2020-34
refinedweb
237
76.01