Controlling exposure through feature flags in VS Team Services

Buck here. Let's take a look at how we do this for Team Services.

Goals

Our first goal is decoupling deployment and exposure. We want to be able to control when a feature is available to users without having to time when the code is committed. This gives engineering the freedom to implement the feature based on our needs while also giving the business control over when a feature is announced.

Next, we want to be able to change the setting at any scope, from globally to particular scale units to accounts to individual users. This granularity gives us a great deal of flexibility. We can deploy a feature and then expose it to select users and accounts. That allows us to get feedback early, which includes not only what users tell us but also how the feature is used based on aggregated telemetry.

Additionally, we want to be able to react quickly if a feature causes issues and turn it off quickly. To make all of this work well, we need to be able to change a feature flag's state without redeploying any of our services, and we need each service to react automatically to the change to minimize the propagation delay.

As a result, we have the following goals.

- Decouple deployment and exposure
- Control down to an individual user
- Get feedback early
- Turn off quickly
- Change without redeployment

Feature flags

Feature flags, sometimes called feature switches, allow us to achieve our goals. At the core, a feature flag is nothing more than an input to an if statement in the code: if the flag is enabled, execute a new code path, and if not, execute the existing code path.

Let's look at an actual example. In this case I want to control whether a new feature to revert a pull request is available to the user. I've highlighted the Revert button in the screenshot.

First we need to define the feature flag. We do that by defining it in an XML file. Each service in VSTS has its own set of flags.
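Stepping back, the mechanism described so far — an if statement guarded by a flag whose state can be set at several scopes, from global down to a single user — can be sketched in a few lines. This is a simplified illustration, not the actual VSTS implementation; the FlagSettings shape, the names, and the resolution order are assumptions:

```typescript
// Hypothetical sketch: resolving a feature flag at the most specific scope.
// Scopes are checked from most specific (user) to least specific (global);
// the first explicit setting wins. Flags default to off.

type FlagState = boolean | undefined;

interface FlagSettings {
  global?: FlagState;
  scaleUnit?: { [name: string]: FlagState };
  account?: { [id: string]: FlagState };
  user?: { [id: string]: FlagState };
}

interface Context { scaleUnit: string; account: string; user: string; }

function isFeatureEnabled(settings: FlagSettings, ctx: Context): boolean {
  const candidates: FlagState[] = [
    settings.user?.[ctx.user],
    settings.account?.[ctx.account],
    settings.scaleUnit?.[ctx.scaleUnit],
    settings.global,
  ];
  for (const state of candidates) {
    if (state !== undefined) return state;
  }
  return false; // features default to off until explicitly exposed
}

// The code path split is then just an if statement:
function renderPage(settings: FlagSettings, ctx: Context): string {
  if (isFeatureEnabled(settings, ctx)) {
    return "page with Revert button";    // new code path
  }
  return "page without Revert button";   // existing code path
}
```

The "most specific scope wins" order here is an assumption; the key property is that exposure becomes a data change rather than a code change.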
Here's part of the actual file that defines the feature flag for this button, with the name of the feature flag highlighted. When we deploy the service that defined this feature flag, the deployment engine will create the feature flag in the database.

Using the feature flag in code is simple. Here's the TypeScript code that is used to create the button. I've combined contents from two files. The export is from a file that defines constants. The rest is from the code to create the button on the page. I've highlighted the flag and the button creation. In this case, there was no prior code, so if the flag is off, nothing gets added to the page.

export module FeatureAvailabilityFlags {
    export var SourceControlRevert = "SourceControl.Revert";
}

import FeatureAvailability = require("VSS/FeatureAvailability/Services");

private _addRevertButton(): void {
    if (FeatureAvailability.isFeatureEnabled(FeatureAvailabilityFlags.SourceControlRevert)) {
        // create the Revert button and add it to the page
    }
}

In addition to the web UI, the code for the MVC controller is also protected with a feature flag. I'm omitting the definition of the constant and some of the code for brevity, but as you can see the only mention of the feature flag is in the attribute on the controller, making it really easy to control the feature with a flag.
namespace Microsoft.TeamFoundation.SourceControl.WebServer
{
    [FeatureEnabled(FeatureAvailabilityFlags.SourceControlRevert)]
    public class GitRevertsController : GitApiController
    {
        [HttpPost]
        public HttpResponseMessage CreateRevert(
            [FromBody] WebApi.GitAsyncRefOperationParameters revertToCreate,
            [ClientParameterType(typeof(Guid), true)] string repositoryId,
            [ClientIgnore] string projectId = null)
        {
        }

        [HttpGet]
        public GitRevert GetRevertForRefName(
            [FromUri] string refName,
            [ClientParameterType(typeof(Guid), true)] string repositoryId,
            [ClientIgnore] string projectId = null)
        {
        }

        [HttpGet]
        public GitRevert GetRevert(
            int revertId,
            [ClientParameterType(typeof(Guid), true)] string repositoryId,
            [ClientIgnore] string projectId = null)
        {
        }
    }
}

The FeatureEnabled attribute checks whether the specified feature flag is enabled and throws an exception if not.

public class FeatureEnabledAttribute : AuthorizationFilterAttribute
{
    public FeatureEnabledAttribute(string featureFlag)
    {
        this.FeatureFlag = featureFlag;
    }

    public string FeatureFlag { get; private set; }

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        base.OnAuthorization(actionContext);

        TfsApiController tfsController = actionContext.ControllerContext.Controller as TfsApiController;
        if (tfsController != null)
        {
            if (!tfsController.TfsRequestContext.IsFeatureEnabled(this.FeatureFlag))
            {
                throw new FeatureDisabledException(FrameworkResources.FeatureDisabledError());
            }
        }
    }
}

Other than some similar code for a menu entry for revert, the feature flag has now been added for the new feature.

Controlling feature flags

We have both PowerShell commands and a web UI to turn feature flags off and on at different scopes. Our PowerShell commands are what you would expect: Get-FeatureFlag and Set-FeatureFlag. Here are some examples of what they can do. We also have an internal site that provides an interactive way to do the same operations.
In this example, you can see that I have the feature flag for the revert feature on for one of my personal accounts and off for the other. This is a great way to control flags for accounts on demand, such as when we allowed customers to request SSH before it became broadly available.

Turn it off!

It's important to have the right telemetry to monitor new features. If we find that a feature is causing problems, we can turn the feature flag off. Since there's no deployment involved – just a script or a change in the administrative web UI – we can quickly revert to the prior behavior.

Testing

New features that are hidden behind a feature flag are deployed with the flag turned off. As we start turning on the feature for users or accounts, both the old and the new code will be executing. We need to test with the feature flag both on and off to ensure the feature works. This is also critical for ensuring the feature can be turned off quickly if something goes wrong. Tests can easily control whether a flag is on or off by calling methods like the following.

public static void RegisterFeature(TestCollection testCollection, string featureName)
public static void UnregisterFeature(TestCollection testCollection, string featureName)
public static void SetFeatureStateForApplication(TestCollection testCollection, string featureName, bool state)
public static void SetFeatureStateForDeployment(TestCollection testCollection, string featureName, bool state)
public static bool IsFeatureEnabled(TestCollection testCollection, string featureName)

Tests can be run conditionally based on the state of a flag.

[TestMethod, Owner("buck"), Priority(1)]
[Description("Verify that Revert works correctly.")]
[RequiresFeature(FeatureAvailabilityFlags.SourceControlRevert)]
public void SourceControl_Revert()
{
    ...
}

Since most feature flags are used initially with a default state of off, it's also easy to set them in the test environment for the test run.
<?xml version="1.0" encoding="utf-8"?>
<TestEnvironment>
  <TestVariables>
    <Value Key="SetTfsFeaturesOn">SourceControl.Revert</Value>
  </TestVariables>
</TestEnvironment>

Stages

I mentioned earlier that we can use feature flags to get feedback. We also use them to allow our own team to begin using a feature to help flush out bugs (it's great to build the service that we use). Rather than have every team invent its own process, we established a process to roll out features in a standardized way. This provides an opportunity to gather feedback and bug reports early in the development process.

We have standard stages defined that each team can use. How quickly a feature goes through the stages depends on the scope of the feature, feedback, and telemetry. The stages include an increasingly broad group of users with increasingly diverse perspectives.

Stage 0 – Canary

This is the first phase and is the account used by the VSTS team plus select internal accounts. Program managers are responsible for sending out communication to the users once the feature flags are enabled.

Stage 1 – MVPs & Select Customers

This is the second phase and includes MVPs and select top customers who have opted in. Program managers are again responsible for emailing the users.

Stage 2 – Private Preview

Private preview is used for major new features and services and is designed to test new features with a broader set of customers that we don't have regular contact with. There are many ways to collect a list of customers for a private preview – forum interaction, blog comments, etc. We've also used invitation codes, publicized email aliases for customers to request access, as we did for SSH access, and sometimes created in-product "opt-in" experiences, as we've done for the new navigation UI. Individual teams manage and communicate directly with their private preview customers.
Stage 3 – Public Preview Public preview is a state reserved for major new features and services where we want to gather feedback but are not yet ready to provide a full SLA, etc. Public Preview features are enabled for all VSTS customers but their main entry points in the UI are annotated with a “preview” designation so that customers understand this is not a completed feature. When a feature enters public preview, it is announced in the VSTS news feed and may also be accompanied by marketing communication, depending on the feature. Stage 4 – General Availability (GA) GA denotes when a feature/service is available to all customers and fully supported (SLA, etc). Cleaning up flags It’s easy to accumulate a lot of flags that are no longer used, so after a feature has been in production and fully available, teams decide when to delete the feature flag and the old code path. This may happen a few weeks or a few months later, depending on the scope of the feature. Events One of our goals is to allow for features to be unveiled at events in some cases. If you want to unveil a feature at an event, when do you enable it? We learned the hard way that it’s not the morning of the event. Several years ago at the Connect 2013 event we turned on a large set of feature flags just before a keynote and demo. The service was unusable. By turning on feature flags at the same time, we had a large amount of new code interacting with the system under production load, and the system fell apart. You can read about details of the incident in Brian’s post, A Rough Patch. At that time, we didn’t have stages, and we only had one public scale unit. As a result of that experience, we ensure that feature flags are enabled in production at least 24 hours ahead of an event so that the code is under full production load. That leaves us time to react, whether to fix problems or disable features, before the event starts. Of course, this means that new features could be discovered early. 
That's certainly a possibility, but it's mitigated by controlling the entry points, making new features hard to find except for customers who've been given early access or know where to look (perhaps by setting a special cookie). Some of it comes down to a judgment call. We followed this policy when we unveiled the Marketplace at the Connect 2015 event. In contrast to the event in 2013, everything worked as it should during the event.

Summary

Feature flags have become a critical part of how we roll out features, get feedback, and allow engineering and marketing to proceed on their own schedules. It's hard to imagine running a DevOps service without them! While we built our own implementation, you don't have to. LaunchDarkly offers feature flags as a service, including a LaunchDarkly VSTS extension to integrate with VSTS work items and releases.

Follow me at twitter.com/tfsbuck
https://devblogs.microsoft.com/buckh/controlling-exposure-through-feature-flags-in-vs-team-services/
Capacity Planning for Vertical Search Engines: An approach based on Coloured Petri Nets

Veronica Gil-Costa (1,3), Jair Lobos (2), Alonso Inostrosa-Psijas (1,2), and Mauricio Marin (1,2)
(1) Yahoo! Research Latin America
(2) DIINF, University of Santiago of Chile
(3) CONICET, University of San Luis, Argentina

Abstract. This paper proposes a Coloured Petri Net model capturing the behaviour of vertical search engines. In such systems a query submitted by a user goes through different stages and can be handled by three different kinds of nodes. The proposed model has a modular design that enables accommodation of alternative/additional search engine components. A performance evaluation study is presented to illustrate the use of the model, and it shows that the proposed model is suitable for rapid exploration of different scenarios and determination of feasible search engine configurations.

Keywords: Web search engines, Petri Net applications.

1 Introduction

Vertical search engines are single-purpose dedicated systems devised to cope with highly dynamic and demanding workloads. Examples include advertising engines in which complex queries are executed on a vertical search engine each time a user displays an email in a large system such as Yahoo! mail. As potentially millions of concurrent users are connected at any time, the workload on the search engine is expected to be of the order of many hundred thousand queries per second. When vertical search engines are used as part of large scale general purpose search engines, the queries-per-second intensity is shaped by the unpredictable behavior of users, who are usually very reactive to worldwide events. In such cases, models and tools for performance evaluation are useful to plan the capacity of the data center clusters supporting the computation of search engines.
The typical questions one would like to quickly answer through a reasonable model of the actual system are like: "given that next year we expect an X% increment in query traffic, what are the feasible sizes of the different search engine services so that we make an efficient use of the hardware resources deployed in the data center?". An appropriate answer to this question could reduce operational costs at the data center. A model like the one proposed in this paper is also useful in research when one wants to evaluate alternative ideas for service design and deployment on clusters of processors.

In this paper we focus on vertical search engines since they are amenable to modelling, as they are built from a small number of fairly simple components called services. The question of modelling general purpose search engines remains an open area of research. The contribution of this paper is the proposal of a method to model actual parallel computations of vertical search engines. The method makes use of Petri nets, a well-known tool for modelling complex systems. The Petri net realization is designed in a modular manner which enables evaluation of different alternatives for service design and configuration.

Typically each service of a vertical search engine is deployed on a large set of processors forming a cluster. Both processors and the communication network are built from commodity hardware. Each processor is a multicore system enabling efficient multi-threading on shared data structures, and message passing is performed among processors to compute on the distributed memory supported by the processors. Modelling each of those components is a complex problem on its own merits. Fortunately, ours is a coarse grain application where the running time cost of each service is dominated by a few primitive operations.
We exploit this feature to formulate a model based on coloured Petri nets where tokens represent user queries which are circulated through different cost-causing units in accordance with the computations performed to solve the query in the actual search engine. Queries are modeled as directed acyclic graphs whose arcs represent the causality relationships among the different steps related to processing the query in the actual system, and whose vertices represent points at which cost must be charged by considering the effects of other queries under processing. The relative cost of primitive operations is determined by means of small benchmark programs. The coarse grain feature allows us to model the effects of actual hardware and system software by identifying only a few features that have a relevant influence on the overall cost of computations.

We validated the proposed Petri net model against a complex discrete event simulator of the same system and an actual implementation of a small search engine. The advantage of the Petri net model over the simulator is that it is much simpler and more efficient, and it is fairly easy to extend to include new features or change the behavior of services by using a graphical language and model verification tests.

This paper is organized as follows: In Section 2 we review Petri net concepts, query processing and related work. Section 3 presents our proposed method to model vertical search engines. Section 4 shows the model validation and experimental results. Conclusions follow in Section 5.

2 Background

We briefly introduce coloured Petri nets (CPNs), then we describe vertical Web Search Engines (WSEs) and related work.

CPN is a high-level Petri net formalism, extending standard Petri nets to provide basic primitives to model concurrency, communication and synchronization. The notation of CPNs introduces the notion of token types, namely tokens are differentiated by colors, which may be arbitrary data values.
They are aimed at practical use because they enable the construction of compact and parameterized models. CPNs are bipartite directed graphs comprising places, transitions and arcs, with inscriptions and types allowing tokens to be distinguishable [6, 25]. In Figure 4 we see part of a CPN. Places are ovals (e.g., Queue) and transitions are rectangles (e.g., start). Places have a type (e.g., Server Query) and can have an initial marking, which is a multi-set of values (tokens) of the corresponding type. Arcs contain expressions with zero or more free variables. A binding is a transition together with an assignment of values to all its free variables. An arc expression can be evaluated in a binding using the assignments, resulting in a value. A binding is enabled if all input places contain at least the tokens prescribed by evaluating the arc expressions.

Hierarchical Coloured Petri Nets (HCPNs) introduce a facility for building a CPN out of subnets or modules. The interface of a module is described using port places, places with an annotation In, Out, or I/O. A module can be represented using a substitution transition, which is a rectangle with a double outline (e.g., FS in Figure 2). The module concept of CPNs is based on a hierarchical structuring mechanism allowing a module to have sub-modules and allowing reuse of sub-modules in different parts of the model. Places connected to a substitution transition are called socket places and are connected to port places using port/socket assignments.

There are useful software tools to support CPN modelling, such as CPN-AMI, GreatSPN and Helena. In particular, CPN Tools supports HCPN modeling and provides a way to walk through a CPN model by allowing one to investigate different scenarios in detail and check whether or not the model works as expected. It is possible to observe the effects of the individual steps directly on the graphical representation of the CPN model.
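The enabling rule just described — a binding is enabled only if every input place holds the tokens prescribed by its arc expressions — can be illustrated with a toy token game. This is a deliberate simplification (single-token arcs, string colours, no timed transitions), not the semantics implemented by CPN Tools:

```typescript
// Toy coloured-Petri-net fragment: places hold lists of coloured tokens;
// a transition is enabled when each input place contains the token its arc asks for.

type Marking = { [place: string]: string[] };

interface Transition {
  consumes: { [place: string]: string }; // one token colour per input arc
  produces: { [place: string]: string }; // one token colour per output arc
}

function isEnabled(m: Marking, t: Transition): boolean {
  return Object.keys(t.consumes).every(
    (p) => (m[p] ?? []).includes(t.consumes[p])
  );
}

function fire(m: Marking, t: Transition): Marking {
  if (!isEnabled(m, t)) throw new Error("transition not enabled");
  const next: Marking = {};
  for (const p of Object.keys(m)) next[p] = m[p].slice(); // copy marking
  for (const p of Object.keys(t.consumes)) {
    next[p].splice(next[p].indexOf(t.consumes[p]), 1);    // consume one token
  }
  for (const p of Object.keys(t.produces)) {
    (next[p] = next[p] ?? []).push(t.produces[p]);        // produce one token
  }
  return next;
}
```

For instance, a "start" transition can move a query token from a Queue place to a Busy place only while that token is actually present in Queue.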
Also, this tool auto-generates Java code, a CPN simulator, which can be modified to introduce other metrics and can be used to quickly run very large numbers of queries without using the graphical interface. We emphasize that performance metric results are always highly dependent on the stream of user queries that are passed through the search engine. Thus any reasonable performance evaluation study must consider the execution of thousands of millions of actual user queries. In this context, the graphical definition of the model is useful for model construction and verification, and then the production model is executed through the Java CPN simulator.

2.1 Query Processing

Large scale general Web search engines are commonly composed of a collection of services. Services are devised to quickly process user queries in an on-line manner. In general, each service is devoted to a single operation within the whole process of solving a query. One such service is the caching service (CS), which is in charge of administering a distributed cache devoted to keeping the answers for frequent queries (query results composed of document IDs). This service is usually partitioned using a hashing based strategy such as Memcached [4]. A second service is the Index Service (IS), which is responsible for calculating the top-k results (document IDs) that best match a query. A third service, called Front-Service (FS), is in charge of receiving new queries, routing them to the appropriate services (CS, IS) and performing the blending of partial results returned from the services. Other related services include: a) construction of the result Web page for queries, b) advertising related to query terms, c) query suggestions, and d) construction of snippets, a snippet being a small summary text surrounding the document ID (URL) of each query result, among others.
Given the huge volume of data, each service is deployed on a large set of processors wherein each processor is dedicated to efficiently performing a single task. Multithreading is used to exploit multi-core processing on data stored in the processor. In this work we focus on vertical WSEs consisting of three main services: Front-End Service (FS), Caching Service (CS) and Index Service (IS). Each service is deployed on a different cluster of processors or processing nodes. Figure 1 shows the query processing operations performed by a WSE, as explained below.

The Front-End Service (FS) is composed of several processing nodes and each node supports multi-threading. This service is composed of a set of FS nodes where each one is mapped onto a different processor. Each FS node receives user queries and sends back the top-k results to the requester (a user or another machine requiring service). After a query arrives at a FS node f_i, we select a caching service (CS) machine to determine whether the query has been previously processed. For the CS cluster architecture, we use an array of P × D processors or caching service nodes. A simple LRU (Least Recently Used) eviction approach is used. The cache memory partitioning is performed by means of a distributed memory object caching system named Memcached [4], where one given query is always assigned to the same partition. Memcached uses a hash function with uniform distribution over the query terms to determine the partition cs_j which should hold the entry for the query. To increase throughput and to support fault tolerance, each partition is replicated D times. Therefore, at any given time, different queries can be solved by different replicas of the same partition. Replicas are selected in a round-robin way. If the query is cached, the CS node sends the top-k result document IDs to the FS machine. Afterwards the FS node f_i sends the query results to the user. Otherwise, if the query is not found in the cache, the CS node sends a hit-miss message to the FS node f_i.
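Functionally, the FS/CS/IS interplay described above can be condensed into a small sketch. The classes below are plain in-memory stand-ins with hypothetical names; partitioning, replication and multi-threading are omitted:

```typescript
// Minimal stand-ins for the three services, assuming a simple in-memory cache.

type TopK = string[]; // document IDs

class CachingService {
  private cache = new Map<string, TopK>(); // LRU eviction omitted for brevity
  lookup(query: string): TopK | undefined { return this.cache.get(query); }
  store(query: string, results: TopK): void { this.cache.set(query, results); }
}

class IndexService {
  constructor(private index: Map<string, TopK>) {}
  topK(query: string): TopK { return this.index.get(query) ?? []; }
}

class FrontEndService {
  constructor(private cs: CachingService, private is: IndexService) {}

  handle(query: string): { results: TopK; from: "cache" | "index" } {
    const hit = this.cs.lookup(query);
    if (hit !== undefined) {
      return { results: hit, from: "cache" }; // CS hit: answer immediately
    }
    // hit-miss: fall through to the index service, then populate the cache
    const results = this.is.topK(query);
    this.cs.store(query, results);
    return { results, from: "index" };
  }
}
```

A repeated query is answered from the cache; a first-time query pays the index-service cost, which is exactly the asymmetry the CPN model has to capture.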
Then, the FS machine sends an index search request to the index service (IS) cluster. The IS contains an index built from a large set of documents. The index is used to speed up the determination of which documents contain the query terms.

Fig. 1. Query processing.

A document is a generic concept: it can be an actual text present in a Web page or it can be a synthetic text constructed for a specific application, like a short text for an advertisement. The index allows the fast mapping between query terms and documents. The number of documents and the indexes are usually huge and thereby must be evenly distributed onto a large set of processors in a shared-nothing fashion. Usually these systems are expected to hold the whole index in the distributed main memory held by the processors. Thus, for the IS setting, the standard cluster architecture is an array of P × D processors or index search nodes, where P indicates the level of document collection partitioning and D the level of document collection replication. The rationale for this 2D array is as follows: each query is sent to all of the P partitions and, in parallel, the local top-k document IDs in each partition are determined. These local top-k results are then collected together by the FS node f_i to determine the global top-k document IDs. The index stored in each index search node is the so-called inverted index [5]. The inverted index [5, 3, 2, 23, 29, 22] (or inverted file) is a data structure used by all well-known WSEs. It is composed of a vocabulary table (which contains the V distinct relevant terms found in the document collection) and a set of posting lists. The posting list for a term c ∈ V stores the identifiers of the documents that contain the term c, along with additional data used for ranking purposes.
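The vocabulary table and posting lists just described can be sketched directly; resolving a conjunctive query then amounts to intersecting the posting lists of its terms (the per-posting ranking data is omitted from this toy version):

```typescript
// Toy inverted index: vocabulary term -> posting list (set of document IDs).

function buildInvertedIndex(docs: { [id: string]: string }): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const id of Object.keys(docs)) {
    for (const term of docs[id].toLowerCase().split(/\s+/)) {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term)!.add(id); // record that document `id` contains `term`
    }
  }
  return index;
}

// Conjunctive query: intersect the posting lists of all query terms.
function postingIntersection(index: Map<string, Set<string>>, terms: string[]): string[] {
  if (terms.length === 0) return [];
  let result = Array.from(index.get(terms[0]) ?? []);
  for (const term of terms.slice(1)) {
    const posting = index.get(term) ?? new Set<string>();
    result = result.filter((id) => posting.has(id));
  }
  return result.sort();
}
```

In a real engine the surviving candidates would then be scored with an algorithm such as BM25 or WAND; here only the candidate set is computed.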
To solve a query, one must fetch the posting lists of the query terms, compute the intersection among them, and then compute the ranking of the resulting intersection set using algorithms like BM25 or WAND [6]. Hence, an inverted index allows for the very fast computation of the top-k relevant documents for a query, because of the pre-computed data. The aim is to speed up query processing by first using the index to quickly find a reduced subset of documents that must be compared against the query, to then determine which of them have the potential of becoming part of the global top-k results.

2.2 Related Work

There are several performance models for capacity planning of different systems [2, 2, 26], but these models are not designed in the context of web search engines. The work in [3] presents a performance model for searching large text databases by considering several parallel hardware architectures and search algorithms. It examines three different document representations by means of simulation and explores response times for different workloads. More recently, the work in [] presents an Inquery simulation model for a multi-threaded distributed IR system. The work in [2] presents a framework based upon queuing network theory for analyzing search systems in terms of operational requirements: response time, throughput, and workload. But the proposed model does not use the workload of a real system, only a synthetic one. A key feature in our context is to properly consider user behavior through the use of actual query logs. Notice that in practical studies, one is interested in investigating the system performance by using an existing query log. This log has been previously collected from the queries issued by the actual users of the search engine. To emphasize this fact, below we call it the query log under study. Moreover, [2] assumes perfect balance between the service times of nodes.
Another disadvantage is that it does not verify the accuracy of the model against actual experimental results. In [9, ] the authors simulate different architectures of a distributed information retrieval system. Through this study it is possible to approximate an optimal architecture. However, they make the unrealistic assumption that service times are balanced, i.e., that the Information Retrieval nodes handle a similar amount of data when processing a query. This work is extended in [7] to study the interconnection network of a distributed Information Retrieval system, and extended again in [8] to estimate the communication overhead. In [7] the authors present algorithms for capacity analysis for general services in on-line distributed systems. The work in [9] presents the design of simulation models to evaluate configurations of processors in an academic environment. The work presented in [8] proposes a mathematical algorithm to minimize the resource cost for a server cluster. In our application domain, the problem with using mathematical models is that they are not capable of capturing the dynamics of user behavior nor temporally biased queries on specific query topics. The work in [8] is extended in [28] to include mechanisms which are resilient to failures. The work presented in [4] and continued in [3] characterizes the workload of search engines and uses the approximate MVA algorithm [24, 2]. However, this proposal is evaluated on a very small set of IS nodes, with only eight processors and just one service, namely the IS. The effects of asynchronous multi-threading are not considered, as they assume that each processor serves load using a single thread. This work also does not consider the effects caused in the distribution of inter-arrival times when queries arrive at more than one FS node. They also use the harmonic number to compute the average query residence time at the index service (IS) cluster.
This can be used in WSE systems in which every P_i index partition delivers its partial results to a manager processor (a front service node in our architecture) and stays blocked until all P partitions finish the current query. This is an unrealistic assumption since current systems are implemented using asynchronous multi-threading in each processor/node. In the system used in this work, the flow of queries is not interrupted. Each time an IS partition finishes processing a query, it immediately starts the next one. The limitations of previous attempts to model vertical search engines show that this problem is not simple to solve. Our proposal resorts to a more empirical, but more powerful, tool in terms of its ability to model complex systems. Namely, we propose using Coloured Petri Nets (CPNs) to model vertical search engine computations. To our knowledge this is the first CPN based capacity planning model proposed in the literature for vertical search engines.

3 Modelling a Vertical Search Engine

Our approach is to model query routing through the search engine services. The model's main objective is to check whether a given configuration of services is able to satisfy constraints like the following: the services are capable of (1) keeping query response times below an upper bound, (2) keeping the workload of all resources below 4%, and (3) keeping query throughput at the same speed as query arrival. Query throughput is defined as the number of queries processed per unit of time. In our model we focus on high level operation costs. This is possible because query processing can be decomposed into a few dominant cost operations such as inverted list intersection, document ranking, cache search and update, and blending of partial results. We also represent a few key features of hardware cost, and very importantly these features are directly related to the cost dominant primitive operations. In the following we explain each component of our model.
Fig. 2. Vertical Web Search Engine model using CPN.

The high level view of our model is shown in Figure 2. Queries arrive with an exponential distribution to the system through the Arrival module (the suitability of the exponential distribution for this case has been shown in [4]). We have three more modules which model the front service (FS), the index service (IS) and the caching service (CS). Each module is associated with a socket place. The socket place called Completed receives queries that have already been processed. The query and its top-k document ID results are sent to the user/requester. The IS Queue and CS Queue are used to communicate queries that travel from the FS to the IS and CS, respectively. Finally, the FS Queue socket place receives new queries and query results from the IS and CS. This query routing based approach can be easily extended to more services as needed.

All service clusters support multi-threading. Namely, each service node has multiple threads. Each thread is idle until a new query arrives at the node. Then the thread takes and processes the query. When finished, the query goes to the next state and the thread checks whether there is any waiting query in the queue of the node. If so, it takes that query to process. Otherwise it becomes idle again. Each query has four possible states of execution: (1) q.new represents queries that have just arrived and have to be processed in the cluster, (2) q.hit represents queries found in the cache, (3) q.no_hit represents queries that have to be processed in the IS cluster, and (4) q.done indicates that the query has been finished.

3.1 Modelling the communication infrastructure

We apply the communication cost of sending a query through the network. Cluster processors are grouped in racks. We use a network topology that interconnects communication switches and processors that is commonly used in data centers.
Namely, we use a Fat-Tree communication network [1] with three levels, as shown in Figure 3. At the bottom, processors are connected to what are called Edge switches. At the top we have the so-called Core switches, and in the middle we have the Aggregation switches. Node x sends a message to node y by visiting five switches before reaching its destination. To send a message from node x to node z it has to go through three switches (two Edge switches and one Aggregation switch). Only one Edge switch is visited when sending a message from node x to another node in the same rack, e.g. node w. An interesting property of this network is that it allows a high level of parallelism while avoiding congestion: node x sends a message to node y through path A while, at the same time, a message can be sent to node v through a different path (the dotted-line path in Figure 3). Due to its complexity, we do not actually simulate the above Fat-Tree protocol in the CPN; instead we empirically estimate the cost of sending messages through this network and use this cost in our model to cause transition delays between services. To this end, we ran a set of benchmark programs on actual hardware to determine the communication costs. We then used a discrete-event simulation model of the Fat-Tree, a model that actually simulates the message passing among switches, to obtain an empirical probability distribution of the number of switch hops required to go from one service node to another. Therefore, each time we have to send a message from service s_i to service s_j, we estimate the simulation time by considering the number of hops (drawn from the empirical distribution) and the communication cost obtained through the benchmark programs. Fig. 3. Fat-tree network.
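A hedged sketch of the hop counting used to delay inter-service messages. The rack and pod sizes are illustrative assumptions; in the paper the hop count is drawn from an empirical distribution obtained with a separate fat-tree simulator, and the per-hop cost comes from the benchmark programs.

```python
def switch_hops(src, dst, rack_size=8, pod_size=32):
    """Switches crossed by a message in a three-level fat-tree:
    1 within a rack, 3 within a pod, 5 across pods (via a core switch)."""
    if src // rack_size == dst // rack_size:
        return 1
    if src // pod_size == dst // pod_size:
        return 3
    return 5

def message_delay(src, dst, per_hop_cost):
    """Transition delay = hops x benchmark-measured per-hop cost."""
    return switch_hops(src, dst) * per_hop_cost
```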
We also developed a CPN model of a communication switch for LAN networks, which represents the case of a commodity cluster of processors hosting a small vertical search engine (details below). 3.2 Front-Service (FS) The Front-Service (FS) cluster manages the query process and determines its route. We model the FS cluster as shown in Figure 4: it consists of a number of servers, initially idle. The number of replicas is set by the num_of_FS parameter. When a query arrives, we take an idle server and increase simulation time using the timefs(query) function. If we are processing a blending of results, we apply an average cost of merging the documents retrieved by the IS nodes; otherwise, we apply an average cost of managing the query. Both costs were obtained through benchmark programs. If the query execution state is q.new we send the query to the CS cluster. If the query execution state is q.hit or q.done the query routing is finished and we deliver the results to the user. Additionally, q.done adds the time required to merge the partial results obtained by the IS cluster. Finally, the query is sent to the IS cluster if the state is q.no_hit. The FS does not change the execution state of a query; it just decides the route. 3.3 Caching-Service (CS) The caching service (CS) keeps track of the most frequent queries and their results. The CPN model includes the sub-models shown in Figure 5. In this example we develop a CS with three partitions. Each partition models the time of service and the competition for resources. Namely, the Memcached algorithm can send queries to one of three sets of processors. Each partition is replicated to increase throughput.
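A sketch of the FS timing rule described above: merging partial results (state q.done) is charged a different average cost than plain query management. The constants are placeholders standing in for the benchmark-derived averages, and the function body is our reading of timefs, not the paper's code.

```python
MANAGE_COST_MS = 0.05   # assumed average cost of managing a query (ms)
MERGE_COST_MS = 0.20    # assumed average cost of merging IS partial results (ms)

def timefs(query_state, clock_ms):
    """Advance the simulation clock for one front-service visit."""
    if query_state == "q.done":
        return clock_ms + MERGE_COST_MS   # blending of partial results
    return clock_ms + MANAGE_COST_MS      # plain routing/management
```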
Fig. 4. Front-Service model using CPN. The #query variable is used in the model to simulate the query flow through the different partitions. As we explained, the Memcached algorithm is used to partition the CS cluster. This algorithm evenly distributes the queries among partitions by means of a hashing function on the query terms; therefore, each partition has the same probability of receiving a query. The branch condition in the model is evaluated by comparing a random real number, generated with a uniform distribution in the [0, 1] interval, against the cumulative probability of each partition. Namely, if the random number satisfies r < 1/3, the query is sent to the first partition. Fig. 5. Caching Service model using CPN. Inside a partition we have a CPN multi-server system, as shown in Figure 6. When a query arrives at a partition, we select a free server and increase simulation time using the timecs() function. The number of replicas is set by the num_of_CS parameter. In this example, a query reports a cache hit with a probability of 46%. This value can be adjusted according to the percentage of hits observed in an actual execution of the query log under study on an LRU cache. The CS cluster also changes the execution state of the query.
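The two equivalent partition-selection mechanisms described above can be sketched as follows: the production path hashes the query terms memcached-style, while the CPN model draws a uniform r in [0, 1] and compares it against cumulative 1/3 probabilities. Function names are ours.

```python
import hashlib
import random

def partition_by_hash(query, n_partitions=3):
    """Memcached-style: a hash of the query terms picks the partition,
    spreading queries evenly across the partitions."""
    digest = hashlib.md5(query.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_partitions

def partition_by_draw(n_partitions=3, rng=random):
    """CPN branch condition: r < 1/3 -> partition 0, r < 2/3 -> 1, else 2."""
    r = rng.random()                 # uniform in [0, 1)
    return int(r * n_partitions)
```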
We can perform three operations on a CS node: insert a query with its top-k results, update a query's priority, and erase a query entry. These operations are independent of the lengths of the posting lists: the erase and update operations have constant cost, and the insert operation depends on the k size. Therefore, the number of partitions does not affect the average service time in this cluster. Figures 7(a) and (b) show the hit ratio obtained by benchmark programs executed with the query log under study as we increase the cache size and the number of partitions. With a memory size of 8GB it is not possible to increase the number of hits significantly beyond 2 partitions. On the other hand, with a cache size of 5K cache entries we reach the maximum number of hits with a minimum of 256 partitions. Fig. 6. Multi-server system model for a partition. 3.4 Index-Service (IS) The Index-Service (IS) cluster accesses the inverted index to search for documents relevant to the query and performs a ranking operation upon those documents. The CPN Index-Service (Figure 9) consists of a number of servers, initially idle. Each server processes one query at a time. For a given query, every IS partition performs the same operations (intersection of posting lists and ranking), although on different parts of the inverted index. As the inverted index is uniformly distributed among processors, all index partitions tend to be of the same size, with an O(log n) behavior for the maximum size n [3]. We have observed that this O(log n) feature fades away from the average query cost when we consider systems constructed using non-blocking multi-threaded query processing. Fig. 7.
Cache hits obtained with different cache sizes and numbers of partitions. Fig. 8. Average query service times obtained with different numbers of IS partitions. The service time of the IS cluster depends on the posting list sizes of the query terms: the intersection and ranking operations over larger posting lists require larger service times. Figure 8 shows how the IS service time decreases with more partitions for the query log under study, using a benchmark program that actually implements the inverted file and the document ranking process (notice that our CPN model is expected to be used as a tool deployed around an actual vertical search engine, from which it is possible to extract small benchmark programs that are executed to measure the different costs on the actual hardware). From about 4 partitions onward the IS cluster cannot improve performance any further, because the inverted index partitions become very small. Therefore, in the CPN model the service time is adjusted according to the number of index partitions IS_p. Each query is associated with a posting list size randomly selected from the set of all posting list sizes observed in our query log. This value is used together with the number of partitions IS_p to estimate the document ranking cost in the valuer variable of Figure 9. Therefore, from the point of view of the CPN simulation, and as it is not of interest to obtain metrics about the quality of results, we can model just one IS partition. The number of replicas is set in the global parameter num_of_IS. After a query is served in this cluster, it changes its state to q.done. Fig. 9. Index Service model using CPN. In this work we estimate the cost of the WAND algorithm for document ranking [6].
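The adjustment of IS service time by the number of partitions can be sketched as below: ranking cost scales with the query's posting-list length divided by IS_p, plus a fixed per-query overhead that eventually dominates, which is why adding partitions stops paying off once the index slices become very small. The constants are illustrative assumptions, not benchmark values.

```python
RANK_COST_PER_POSTING = 1e-6   # assumed seconds to score one posting
FIXED_OVERHEAD = 5e-4          # assumed per-query overhead in seconds

def is_service_time(posting_list_len, is_partitions):
    """Estimated IS service time for one query, given its posting-list
    length and the number of index partitions it is split across."""
    return FIXED_OVERHEAD + RANK_COST_PER_POSTING * posting_list_len / is_partitions
```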
We use results obtained from the WAND benchmark to study the average running time required to solve a query in the IS cluster. Query running times reported by the WAND algorithm can be divided into two groups: (1) the time required to update a top-k heap, and (2) the time required to compute the similarity between the query and a document. The heap is used to maintain the local top-k document scores in each index node. The similarity between the query and a document is given by the score of the document for that query. Figure 10(a) shows that the total running time the WAND algorithm spends computing similarities between documents and a query is dominant over the time required to update the heap. Computing one similarity is less expensive, about 5% of the time required for one heap update, but the number of heap updates is much lower than the number of similarity computations. Figure 10(b), left, shows the average number of heap updates and similarity computations per query. Each query performs 0.1% heap updates out of the total number of operations with P = 32 and top-10 document ID results. This percentage increases to 1% with P = 256 because the posting lists are smaller and the WAND algorithm can skip more similarity computations. For a larger top-k, the number of heap update operations increases. The results of Figure 10(b), right, show the variance of the number of similarity computations. The variance also decreases with more processors, because posting lists are smaller and the upper bound of each query term tends to be smaller; thus all processors perform fewer similarity computations and tend to perform almost the same amount of work. 4 A Performance Evaluation Study using the CPN model In this section we first validate our model against the execution of a small vertical web search engine. Then we use the model to evaluate the performance impact of alternative service configurations (numbers of partitions and replicas).
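The imbalance between the two operation groups can be illustrated with a toy top-k loop (this is not WAND itself, which additionally skips documents via score upper bounds): every candidate document costs one similarity computation, but only scores entering the current top-k trigger a heap update.

```python
import heapq
import random

def count_operations(scores, k):
    """Return (similarity computations, heap updates) for a scan that
    maintains the top-k scores in a min-heap."""
    heap, similarities, heap_updates = [], 0, 0
    for s in scores:
        similarities += 1               # every document is scored once
        if len(heap) < k:
            heapq.heappush(heap, s)
            heap_updates += 1
        elif s > heap[0]:               # beats the current k-th best score
            heapq.heapreplace(heap, s)
            heap_updates += 1
    return similarities, heap_updates

random.seed(42)
sims, updates = count_operations([random.random() for _ in range(10_000)], 10)
# updates is a tiny fraction of sims, mirroring the imbalance in Fig. 10(b)
```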
The aim is to illustrate the potential use of the CPN model in a production environment. Fig. 10. Benchmarking the WAND document ranking method. Below we use workload to mean the average utilization of the processors hosting the FS, CS and IS services. Utilization is defined, over a given period, as the fraction of time that a given processor is busy performing computations. 4.1 Model Validation We consider a vertical search engine deployed on a small cluster of 3 processing nodes with 4 cores each. We run experiments over a query log of 36,389,567 queries submitted to the AOL Search service between March 1 and May 31, 2006. We pre-processed the query log following the rules applied in [5], removing stopwords and completely removing any query consisting only of stopwords. We also erased duplicated terms and assumed that two queries are identical if they contain the same words, no matter the order. The resulting query trace has 6,9,873 queries, where 6,64,76 are unique queries and the vocabulary consists of ,69,7 distinct query terms. These queries were also applied to a sample (1.5TB) of the UK Web obtained in 2005 by the Yahoo! search engine, over which an inverted index of 26,, terms and 56,53,99 documents was constructed. We executed the queries against this index in order to get the top-k documents for each query using the WAND method. We set the CPN simulator with the statistics obtained from this log. The simulator supports different query arrival rates λ (new queries per second). We are interested in two main measures: workload and query response time. In Figure 11 the y-axis shows the relative error (percentage difference between values) obtained for each measure with the CPN model, and the x-axis stands for different service configurations.
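The validation metric reported in Figure 11 can be sketched as the percentage difference between the simulator's prediction and the measurement on the real engine:

```python
def relative_error_pct(simulated, observed):
    """Relative error (%) of a CPN-predicted measure vs. the real one."""
    return abs(simulated - observed) / observed * 100.0
```

For instance, a predicted IS workload of 0.79 against an observed 0.75 (illustrative values) yields an error of about 5.3%, the order of the worst cases reported below.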
A configuration is a tuple ⟨FS, CS_p, CS_r, IS_p, IS_r⟩ where FS is the number of front-service replicas, IS_r represents the number of IS replicas and IS_p the number of IS partitions; the same nomenclature is used for the CS. A low value on the x-axis represents configurations with a small number of processors, where workload is about 85%-95%. On the other hand, a high value on the x-axis represents configurations with more processors, with processor workloads of about 20%-30%. Figure 11(a) shows the results obtained with the lower query arrival rate λ. We ran different service configurations with values of the configuration tuple ranging from 4 to 8. The IS workload presents the highest error rate, about 5.3% for configurations with few service nodes. The FS workload measure also presents a maximum error close to 5%. The CS workload is the closest to the real result: the maximum error presented by this service is at most 3.5%. Finally, the average query response time presents a maximum error of at most 4%. Figure 11(b) shows results obtained with a larger λ and a different set of service configurations. We changed the set of service configurations to avoid saturating the system early in the simulation, increasing the range of configuration values from 4 to 12. In this experiment, the maximum error rate, of about 5%, is reported by the IS cluster. Query response time presents an error rate of at most 3.5%. Therefore, these results verify that our CPN model can predict the relevant costs of the vertical search engine under study with an error close to 5%. Fig. 11. CPN model validation: (a) lower query arrival rate and (b) higher query arrival rate. 4.2 Model Assessment In this section we aim to answer questions like: given a number of processors, what query traffic can they support? To this end we stress our search engine model by increasing the arrival rate λ.
In this way we can determine the workload supported by the search engine and how the average query response time is affected by the increased workload. In the following experiments we set the number of threads per processor to 8, which is consistent with the hardware currently deployed in production for the vertical search engine. Figure 12 shows the results obtained for query traffic ranging from very low (A) to very high (I). We start with x = A = λ = 80 and increase the query arrival rate by 40 in each experiment until we reach x = I = λ = 400. In Figure 12(a) we use a total of 355 processors and in Figure 12(b) we use 463 processors. In both figures, workload rises above 80% from x = G = λ = 320 onward, but for different service clusters. In the former we reach this percentage of workload with the CS and IS clusters. In the latter, we maintain the number of CS replicas but increase the number of IS replicas as we decrease the FS replicas; thus the FS cluster reaches this percentage of workload instead of the IS cluster. As expected, service time tends to increase as service clusters become saturated. With more processors, a total of 56 in Figure 12(c) and 553 in Figure 12(d), the workload is kept below 80%. There is no saturation and therefore query response time is not affected. With these experiments we can answer our question about the query traffic that can be supported by a given services configuration. For the first two configurations, in Figures 12(a) and 12(b), it is clear that with x = C we get a workload close to 40%. For the last two figures, we get a workload close to 40% at x = E. Beyond these points we cannot meet the second constraint (keeping all resource workloads below 40%). This constraint has to be satisfied to support sudden peaks in query traffic. The other two constraints (keeping query response time below an upper bound, and keeping query throughput at query arrival speed) cannot be guaranteed with a workload above 80%-90%.
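The 40%-workload rule gives a quick back-of-envelope answer to the section's question. Assuming c processors that each sustain mu queries per second (our simplification, not the paper's simulator), the largest admissible arrival rate is:

```python
def max_arrival_rate(processors, per_processor_rate, max_utilization=0.4):
    """Largest query arrival rate that keeps average workload at or
    below max_utilization, leaving headroom for traffic peaks."""
    return max_utilization * processors * per_processor_rate
```

For example, 355 processors each sustaining 2 queries/s (illustrative figures) could absorb up to 0.4 x 355 x 2 = 284 queries/s under this rule; the CPN simulator refines such estimates by accounting for queueing and routing effects.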
From this point on, the system is saturated and queries wait too long in service queues. 4.3 Evaluation using a switched LAN In this section we illustrate with an example that the proposed CPN model can easily incorporate additional components. We show this by evaluating our search engine model with a different kind of network: we replace the Fat-tree network by a switched LAN network, as explained in [27]. In Figure 13 we add a new module SW connecting all service queues. We also add three more places (IS SW, CS SW, FS SW) to make this connection feasible. Inside the SW module we have connection components. Each component has input/output ports and internal buffers represented as places. There are two transitions: one to model the processing of input packets and one to model the processing of output packets. We have one component for each service cluster. To evaluate the switched LAN network we ran benchmark programs to determine the time to send a message from one service node to another over a Linksys SLM switch, and compared it against a Fat-tree constructed from the same kind of 48-port switches. Figure 14(a) compares CPN simulations using the two networks. The results show that the Fat-tree network helps to reduce the average query response time as we increase the query arrival rate. With the maximum query arrival rate of λ = 400, the search engine using the Fat-tree network obtains a gain of 10%. But the LAN network has a second effect on the search engine: queries spend more time inside the network, and inside the queues connecting services with the network, than inside the service clusters. Therefore, with a switched LAN network the service workload placed on processors tends to be lower than with the Fat-tree. In Figure 14(b) the FS cluster reports a 30% higher workload using the Fat-tree; in Figure 14(c) this difference is only 5% for the CS cluster.
Fig. 12. Workload and average query response time obtained with different configurations; query traffic varies from low (A) to very high (I). In Figure 14(d) we show that the IS cluster gets a 20% lower workload using the switched LAN network. These results were obtained with a fixed services configuration. 5 Conclusions We have presented a model for predicting the performance of vertical search engines which is based on a Coloured Petri Net (CPN). We illustrated the use of the CPN model through the execution of a corresponding CPN simulator in the context of a specific vertical search engine currently deployed in production. From the actual implementation of the search engine, several small pieces of code are extracted to construct benchmark programs. These programs are executed on the same search engine hardware and the results enable the setting of the simulator parameters. Typically, vertical search engines are built from a software architecture that makes it simple to generate appropriate benchmark programs for the CPN simulator. After validating the model results against the search engine results, we used the CPN simulator to evaluate relevant performance metrics under different scenarios. Fig. 13. Web search engine modeled with a switched LAN network. This study showed that the proposed CPN model is suitable for solving
the problem of determining the number of cluster processors that are required to host the different services composing the respective search engine. This is a relevant question that data-center engineers must answer in advance when a new vertical search engine instance is required to serve a new product deployed in production or, in accordance with an estimation of query traffic growth for the next term, to determine how big the different services must be in order to decide the amount of hardware required to host the new search engine instance. An advantage of the proposed model is that it has been designed in a modular manner, so that new components (services) can be easily accommodated. We illustrated this feature by evaluating the impact of introducing an alternative communication-network technology. The enhanced model was simple and fast to formulate, and experimentation was quickly conducted to evaluate the performance metrics for the new case at hand. Acknowledgment This work has been partially supported by the FONDEF D09I1185 R&D project. References 1. M. Al-Fares, A. Loukissas, and A. Vahdat. A scalable, commodity data center network architecture. SIGCOMM Comput. Commun. Rev., 38(4):63-74, 2008. Fig. 14. Results obtained with two different kinds of networks: (a) average query response time, (b) Front-Service workload, (c) Caching-Service workload, (d) Index-Service workload. 2. M. Arlitt, D. Krishnamurthy, and J. Rolia. Characterizing the scalability of a large web-based shopping system. ACM Trans. Internet Technol., 1(1):44-69, 2001. 3. C. S. Badue, J. M. Almeida, V. Almeida, R. A. Baeza-Yates, B. A. Ribeiro-Neto, A. Ziviani, and N. Ziviani. Capacity planning for vertical search engines. CoRR, abs/1006.5059, 2010. 4. C. S. Badue, R. A. Baeza-Yates, B. A. Ribeiro-Neto, A. Ziviani, and N. Ziviani.
Modeling performance-driven workload characterization of web search systems. In CIKM, 2006. 5. R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999. 6. A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and J. Y. Zien. Efficient query evaluation using a two-level retrieval process. In CIKM, 2003. 7. F. Cacheda, V. Carneiro, V. Plachouras, and I. Ounis. Network analysis for distributed information retrieval architectures. In European Colloquium on IR Research (ECIR), 2005. 8. F. Cacheda, V. Carneiro, V. Plachouras, and I. Ounis. Performance analysis of distributed information retrieval architectures using an improved network simulation model. Inf. Process. Manage., 43(1):204-224, 2007. 9. F. Cacheda, V. Plachouras, and I. Ounis. Performance analysis of distributed architectures to index one terabyte of text. In ECIR, 2004. 10. F. Cacheda, V. Plachouras, and I. Ounis. A case study of distributed information retrieval architectures to index one terabyte of text. Inf. Process. Manage., 41(5), 2005. 11. B. Cahoon, K. S. McKinley, and Z. Lu. Evaluating the performance of distributed architectures for information retrieval using a variety of workloads. ACM Trans. Inf. Syst., 18(1):1-43, January 2000. 12. A. Chowdhury and G. Pass. Operational requirements for scalable search systems. In CIKM, 2003. 13. T. R. Couvreur, R. N. Benzel, S. F. Miller, D. N. Zeitler, D. L. Lee, M. Singhal, N. G. Shivaratri, and W. Y. P. Wong. An analysis of performance and cost factors in searching large text databases using parallel search systems. Journal of the American Society for Information Science and Technology, 45, 1994. 14. B. Fitzpatrick. Distributed caching with memcached. Linux Journal, 124:72-76, 2004. 15. Q. Gan and T. Suel. Improved techniques for result caching in web search engines. In WWW, pages 431-440, 2009. 16. K. Jensen and L. Kristensen. Coloured Petri Nets. Springer-Verlag Berlin Heidelberg, 2009. 17. G. Jiang, H. Chen, and K. Yoshihira.
Profiling services for resource optimization and capacity planning in distributed systems. Cluster Computing, 2008. 18. W. Lin, Z. Liu, C. H. Xia, and L. Zhang. Optimal capacity allocation for web systems with end-to-end delay guarantees. Perform. Eval., 62:400-416, 2005. 19. B. Lu and A. Apon. Capacity planning of a commodity cluster in an academic environment: A case study. 20. M. Marin and V. Gil-Costa. High-performance distributed inverted files. In Proceedings of CIKM, 2007. 21. D. A. Menasce, V. A. Almeida, and L. W. Dowdy. Performance by Design: Computer Capacity Planning. Prentice Hall, 2004. 22. A. Moffat, W. Webber, and J. Zobel. Load balancing for term-distributed parallel retrieval. In SIGIR, 2006. 23. A. Moffat, W. Webber, J. Zobel, and R. Baeza-Yates. A pipelined architecture for distributed text query evaluation. Information Retrieval, 10(3):205-231, 2007. 24. M. Reiser and S. S. Lavenberg. Mean-value analysis of closed multichain queuing networks. J. ACM, 27(2):313-322, 1980. 25. W. van der Aalst and C. Stahl. Modeling Business Processes: A Petri Net-Oriented Approach. MIT Press, 2011. 26. H. Wang and K. C. Sevcik. Experiments with improved approximate mean value analysis algorithms. Perform. Eval., 39:189-206, 2000. 27. D. Zaitsev. An evaluation of network response time using a coloured Petri net model of switched LAN. In Proceedings of the Fifth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools, pages 157-167, 2004. 28. C. Zhang, R. N. Chang, C.-S. Perng, E. So, C. Tang, and T. Tao. An optimal capacity planning algorithm for provisioning cluster-based failure-resilient composite services. In Proceedings of the 2009 IEEE International Conference on Services Computing (SCC '09), 2009. 29. J. Zhang and T. Suel. Optimized inverted list assignment in distributed search engine architectures. In IPDPS, 2007. 30. J. Zobel and A. Moffat. Inverted files for text search engines. ACM Computing Surveys, 38(2), 2006.
What is Prop drilling? and how to solve it using Context API

React interview cheatsheet series, 23 April, 2021

As we know, props are the data we pass (or can access) from top-level components down to their child components. And we also know that React data flow is unidirectional.

Prop drilling, also known as threading, is the process where devs pass props down to a specific child component, while the components in between, from the origin to the destination, receive the props only to pass them further down the chain.

So the prop drilling issue appears when passing data from a parent component (A) to a child component (Z) while, in the middle, the data passes through many components between A and Z that don't need it except to connect A and Z.

Prop drilling is not a problem at all if we only have two or three levels, but imagine if we have dozens of levels... That's the problem. There is a good video explaining this here.

To avoid prop drilling, in other words, passing props through intermediate elements, we have some solutions:

- React Context API.
- Composition
- Render props
- HOC (Higher-Order Components)
- Redux or MobX

1. Avoid Props Drilling with React Context API

Context is designed to share data that can be considered “global” for a tree of React components, such as the current user, theme, or language. Context is primarily used when some data needs to be accessible by many components at different nesting levels. So, if you only want to avoid passing some props through many levels, component composition is often a simpler solution than context.

Context API was created as an answer to prop drilling, and as an easier alternative to Redux. There is a great Toptal.com article about this; I invite you to read it.

const MyContext = React.createContext(defaultValue);

Creates a Context object.
When React renders a component that subscribes to this Context object it will read the current context value from the closest matching Provider above it in the tree. The defaultValue argument is only used when a component does not have a matching Provider above it in the tree.

<MyContext.Provider value={ value }>

Every Context object comes with a Provider React component that allows consuming components to subscribe to context changes. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.

import React, { Component } from "react";

const AppContext = React.createContext();

class AppProvider extends Component {
  state = {
    teamMembers: {
      player001: { name: 'Joaquin', position: 'Forward', number: 7 },
      player002: { name: 'canales', position: 'Midfielder', number: 10 }
    }
  };

  render() {
    return (
      <AppContext.Provider value={{ players: this.state.teamMembers }}>
        {this.props.children}
      </AppContext.Provider>
    );
  }
}

const TeamList = () => (
  <div>
    <h2>Real Betis Balompié:</h2>
    <Players />
  </div>
);

const Player = props => (
  <tr>
    <td>{props.number}</td>
    <td>{props.name}</td>
    <td>{props.position}</td>
  </tr>
);

const Players = () => (
  <AppContext.Consumer>
    {context => (
      <div>
        <h4>Players:</h4>
        <table>
          <tr>
            <th>Number</th>
            <th>Name</th>
            <th>Position</th>
          </tr>
          {Object.keys(context.players).map(key => {
            return <Player
              key={key}
              number={context.players[key].number}
              name={context.players[key].name}
              position={context.players[key].position}
            />
          })}
        </table>
      </div>
    )}
  </AppContext.Consumer>
);

class App extends Component {
  render() {
    return (
      <AppProvider>
        <div>
          <TeamList />
        </div>
      </AppProvider>
    );
  }
}

ReactDOM.render(
  <App />,
  document.getElementById('root')
);

See the code working at codepen.

2. Composition

Composition is an easy alternative: instead of intermediate components receiving props only to forward them, a component accepts its children (or whole components) as props, so the parent that owns the data can render the component that needs it directly.

About composition and prop drilling in React, there is an excellent article on Medium written by my old mate Bolu, Composition: An Alternative to Props Drilling in React; better to read it, it's so clear.

See also: Props Drilling In React.Js
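To make the contrast concrete, here is a framework-free sketch (all component and prop names are invented for illustration): components are modeled as plain functions that return strings, first drilling the user prop through every level, then using composition so only the top component knows about it.

```javascript
// Prop drilling: Layout and Sidebar receive `user` only to forward it.
const Avatar = ({ user }) => `<img alt="${user.name}">`;
const Sidebar = ({ user }) => `<aside>${Avatar({ user })}</aside>`;
const Layout = ({ user }) => `<main>${Sidebar({ user })}</main>`;
const DrilledPage = ({ user }) => Layout({ user });

// Composition: Layout and Sidebar just place whatever children they're given,
// so only ComposedPage needs to know about `user`.
const SidebarSlot = ({ children }) => `<aside>${children}</aside>`;
const LayoutSlot = ({ children }) => `<main>${children}</main>`;
const ComposedPage = ({ user }) =>
  LayoutSlot({ children: SidebarSlot({ children: Avatar({ user }) }) });

const user = { name: 'Joaquin' };
console.log(DrilledPage({ user }));  // <main><aside><img alt="Joaquin"></aside></main>
console.log(ComposedPage({ user })); // same output, without the drilling
```

The same idea applies directly in React: a component that accepts children (or a component as a prop) lets the data owner render the data consumer, cutting out the intermediaries.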
Pattern matching is the big new feature coming to Ruby 2.7. It has been committed to the trunk so anyone who is interested can install Ruby 2.7.0-dev and check it out. Please bear in mind that none of these are finalized and the dev team is looking for feedback so if you have any, you can let the committers know before the feature is actually out. I hope you will understand what pattern matching is and how to use it in Ruby after reading this article. What Is Pattern Matching? Pattern matching is a feature that is commonly found in functional programming languages. According to Scala documentation, pattern matching is “a mechanism for checking a value against a pattern. A successful match can also deconstruct a value into its constituent parts.” This is not to be confused with Regex, string matching, or pattern recognition. Pattern matching has nothing to do with string, but instead data structure. The first time I encountered pattern matching was around two years ago when I tried out Elixir. I was learning Elixir and trying to solve algorithms with it. I compared my solution to others and realized they used pattern matching, which made their code a lot more succinct and easier to read. Because of that, pattern matching really made an impression on me. This is what pattern matching in Elixir looks like: [a, b, c] = [:hello, "world", 42] a #=> :hello b #=> "world" c #=> 42 The example above looks very much like a multiple assignment in Ruby. However, it is more than that. It also checks whether or not the values match: [a, b, 42] = [:hello, "world", 42] a #=> :hello b #=> "world" In the examples above, the number 42 on the left hand side isn’t a variable that is being assigned. It is a value to check that the same element in that particular index matches that of the right hand side. [a, b, 88] = [:hello, "world", 42] ** (MatchError) no match of right hand side value In this example, instead of the values being assigned, MatchError is raised instead. 
This is because the number 88 does not match the number 42.

It also works with maps (which are similar to hashes in Ruby):

%{"name": "Zote", "title": title} = %{"name": "Zote", "title": "The mighty"}
title #=> The mighty

The example above checks that the value of the key name is Zote, and binds the value of the key title to the variable title. This concept works very well when the data structure is complex. You can assign your variables and check for values or types all in one line. Furthermore, it also allows a dynamically typed language like Elixir to have method overloading:

def process(%{"animal" => animal}) do
  IO.puts("The animal is: #{animal}")
end

def process(%{"plant" => plant}) do
  IO.puts("The plant is: #{plant}")
end

def process(%{"person" => person}) do
  IO.puts("The person is: #{person}")
end

Depending on the key of the hash argument, a different method gets executed. Hopefully, that shows you how powerful pattern matching can be. There have been many attempts to bring pattern matching into Ruby with gems such as noaidi, qo, and egison-ruby. Ruby 2.7 also has its own implementation, not too different from these gems, and this is how it’s being done currently.

Ruby Pattern Matching Syntax

Pattern matching in Ruby is done through a case statement. However, instead of the usual when, the keyword in is used. It also supports the use of if or unless statements:

case [variable or expression]
in [pattern]
  ...
in [pattern] if [expression]
  ...
else
  ...
end

A case statement can accept a variable or an expression, and this will be matched against the patterns provided in the in clauses. If or unless statements can also be provided after a pattern. The equality check here also uses === like the normal case statement. This means you can match subsets and instances of classes.
Here is an example of how you use it:

Matching Arrays

translation = ['th', 'เต้', 'ja', 'テイ']

case translation
in ['th', orig_text, 'en', trans_text]
  puts "English translation: #{orig_text} => #{trans_text}"
in ['th', orig_text, 'ja', trans_text] # this will get executed
  puts "Japanese translation: #{orig_text} => #{trans_text}"
end

In the example above, the variable translation gets matched against two patterns: ['th', orig_text, 'en', trans_text] and ['th', orig_text, 'ja', trans_text]. The case statement checks whether the values in each pattern match the values in the translation variable at each index. If the values do match, it assigns the values from the translation variable to the variables in the pattern at each index.

Matching Hashes

translation = {orig_lang: 'th', trans_lang: 'en', orig_txt: 'เต้', trans_txt: 'tae'}

case translation
in {orig_lang: 'th', trans_lang: 'en', orig_txt: orig_txt, trans_txt: trans_txt}
  puts "#{orig_txt} => #{trans_txt}"
end

In the example above, the translation variable is now a hash. It gets matched against another hash in the in clause. The case statement checks whether all the keys in the pattern match keys in the translation variable, and that the values for each of those keys match. It then assigns the values to the variables in the hash.

Matching Subsets

The equality check used in pattern matching follows the logic of ===.

Multiple Patterns

| can be used to define multiple patterns for one block. Note that Ruby does not allow binding new variables inside alternative patterns, so _ is used for the positions we don't care about.

translation = ['th', 'เต้', 'ja', 'テイ']

case translation
in {orig_lang: 'th', trans_lang: 'ja'} | ['th', _, 'ja', _] # the array pattern matches
  puts "Thai to Japanese translation"
end

In the example above, the translation variable is matched against both the {orig_lang: 'th', trans_lang: 'ja'} hash pattern and the ['th', _, 'ja', _] array pattern.
This is useful when you have slightly different types of data structures that represent the same thing and you want both to execute the same block of code.

Arrow Assignment

In this case, => can be used to assign a matched value to a variable.

case ['I am a string', 10]
in [Integer, Integer] => a
  # not reached
in [String, Integer] => b
  puts b #=> ['I am a string', 10]
end

This is useful when you want to check values inside the data structure but also bind the whole matched value to a variable.

Pin Operator

The pin operator prevents variables from getting reassigned.

case [1, 2, 2]
in [a, a, a]
  puts a #=> 2
end

In the example above, the variable a in the pattern is matched against 1, 2, and then 2. It will be assigned to 1, then to 2, then to 2 again. This isn't ideal if you want to check that all the values in the array are the same.

case [1, 2, 2]
in [a, ^a, ^a]
  # not reached
in [a, b, ^b]
  puts a #=> 1
  puts b #=> 2
end

When the pin operator is used, it evaluates the variable instead of reassigning it. In the example above, [1,2,2] doesn't match [a,^a,^a] because at the first index, a is assigned to 1; at the second and third index, a evaluates to 1 but is matched against 2. However, [a,b,^b] matches [1,2,2] since a is assigned to 1 at the first index, b is assigned to 2 at the second index, then ^b, which is now 2, is matched against 2 at the third index, so it passes.

a = 1
case [2, 2]
in [^a, ^a]
  # not reached
in [b, ^b]
  puts b #=> 2
end

Variables from outside the case statement can also be used, as shown in the example above.

Underscore (_) Operator

Underscore (_) is used to ignore values. Let's see it in a couple of examples:

case ['this will be ignored', 2]
in [_, a]
  puts a #=> 2
end

case ['a', 2]
in [_, a] => b
  puts a #=> 2
  puts b #=> ['a', 2]
end

In the two examples above, any value matched against _ passes. In the second case statement, the => operator captures the ignored value as well.
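The syntax section above mentioned that an in clause can carry an if or unless guard, but no example was shown. Here is a minimal sketch of my own (not from the article) combining a guard with arrow assignment inside a pattern; it requires Ruby 2.7 or later, where the feature prints an "experimental" warning:

```ruby
# Guards run after the pattern matches; if the guard is false,
# matching falls through to the next clause.
def classify(pair)
  case pair
  in [Integer => a, Integer => b] if a < b
    "ascending"
  in [Integer, Integer]
    "not ascending"
  else
    "not a pair of integers"
  end
end

puts classify([1, 2])   #=> ascending
puts classify([5, 2])   #=> not ascending
puts classify(["a", 2]) #=> not a pair of integers
```

Note how [5, 2] matches the first pattern's shape but fails the a < b guard, so Ruby moves on to the next in clause.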
Use Cases for Pattern Matching in Ruby

Imagine that you have the following JSON data:

{
  nickname: 'Tae',
  realname: { first: 'Noppakun', last: 'Wongsrinoppakun' },
  username: 'tae8838'
}

In your Ruby project, you want to parse this data and display the name with the following conditions:

- If the username exists, return the username.
- If the nickname, first name, and last name exist, return the nickname, first name, and then the last name.
- If the nickname doesn’t exist, but the first and last name do, return the first name and then the last name.
- If none of the conditions apply, return “New User.”

This is how I would write this program in Ruby right now:

def display_name(name_hash)
  if name_hash[:username]
    name_hash[:username]
  elsif name_hash[:nickname] && name_hash[:realname] && name_hash[:realname][:first] && name_hash[:realname][:last]
    "#{name_hash[:nickname]} #{name_hash[:realname][:first]} #{name_hash[:realname][:last]}"
  elsif name_hash[:first] && name_hash[:last]
    "#{name_hash[:first]} #{name_hash[:last]}"
  else
    'New User'
  end
end

Now, let's see what it looks like with pattern matching:

def display_name(name_hash)
  case name_hash
  in {username: username}
    username
  in {nickname: nickname, realname: {first: first, last: last}}
    "#{nickname} #{first} #{last}"
  in {first: first, last: last}
    "#{first} #{last}"
  else
    'New User'
  end
end

Syntax preference can be a little subjective, but I do prefer the pattern matching version. This is because pattern matching allows us to write out the hash we expect, instead of describing and checking the values of the hash. This makes it easier to visualize what data to expect:

`{nickname: nickname, realname: {first: first, last: last}}`

Instead of:

`name_hash[:nickname] && name_hash[:realname] && name_hash[:realname][:first] && name_hash[:realname][:last]`.

Deconstruct and Deconstruct_keys

There are two new special methods being introduced in Ruby 2.7: deconstruct and deconstruct_keys.
When an instance of a class is being matched against an array or hash, deconstruct or deconstruct_keys is called, respectively. The result of these methods is used to match against the patterns. Here is an example:

class Coordinate
  attr_accessor :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end

  def deconstruct
    [@x, @y]
  end

  def deconstruct_keys(keys)
    {x: @x, y: @y}
  end
end

The code defines a class called Coordinate. It has x and y as its attributes. It also has the deconstruct and deconstruct_keys methods defined.

c = Coordinate.new(32, 50)

case c
in [a, b]
  p a #=> 32
  p b #=> 50
end

Here, an instance of Coordinate is created and pattern-matched against an array. What happens is that Coordinate#deconstruct is called and the result is used to match against the array pattern [a,b].

case c
in {x:, y:}
  p x #=> 32
  p y #=> 50
end

In this example, the same instance of Coordinate is pattern-matched against a hash. In this case, the result of Coordinate#deconstruct_keys is used to match against the hash pattern {x:, y:}.

An Exciting Experimental Feature

Having first experienced pattern matching in Elixir, I had thought this feature might include method overloading and be implemented with a syntax that only requires one line. However, Ruby isn't a language that was built with pattern matching in mind, so this is understandable. Using a case statement is probably a very lean way of implementing this, and it also does not affect existing code (apart from the deconstruct and deconstruct_keys methods). The use of the case statement is actually similar to Scala's implementation of pattern matching. Personally, I think pattern matching is an exciting new feature for Ruby developers. It has the potential to make code a lot cleaner and to make Ruby feel a bit more modern and exciting. I would love to see what people make of this and how this feature evolves in the future.
Modern sites often combine all of their JavaScript into a single, large main.js script. This regularly contains the scripts for all your pages or routes, even if users only need a small portion for the page they're viewing. When JavaScript is served this way, the loading performance of your web pages can suffer, especially with responsive web design on mobile devices. So let's fix it by implementing JavaScript code splitting.

What problem does code splitting solve?

When a web browser sees a <script> it needs to spend time downloading and processing the JavaScript you're referencing. This can feel fast on high-end devices, but loading, parsing and executing unused JavaScript code can take a while on average mobile devices with a slower network and slower CPU. If you've ever had to log on to coffee-shop or hotel WiFi, you know slow network experiences can happen to everyone. Each second spent waiting on JavaScript to finish booting up can delay how soon users are able to interact with your experience. This is particularly the case if your UX relies on JS for critical components, or even just for attaching event handlers to simple pieces of UI.

Do I need to bother with code splitting?

It is definitely worth asking yourself whether you need to code-split (if you've used a simple website builder, you probably don't). If your site requires JavaScript for interactive content (for features like menu drawers and carousels) or is a single-page application relying on JavaScript frameworks to render UI, the answer is likely 'yes'. Whether code splitting is worthwhile for your site is a question you'll need to answer yourself. You understand your architecture and how your site loads best. Thankfully there are tools available to help you here.
Get help

For those new to JavaScript code splitting, Lighthouse, the Audits panel in Chrome Developer Tools, can help shine a light on whether this is a problem for your site. The audit you'll want to look for is Reduce JavaScript Execution Time (documented here). This audit highlights all of the scripts on your page that can delay a user interacting with it. PageSpeed Insights is an online tool that can also highlight your site's performance, and it includes lab data from Lighthouse as well as real-world data on your site's performance from the Chrome User Experience Report. Your web hosting service may have other options.

Code coverage in Chrome Developer Tools

If it looks like you have costly scripts that could be better split, the next tool to look at is the Code Coverage feature in the Chrome Developer Tools (DevTools > top-right menu > More tools > Coverage). This measures how much unused JavaScript (and CSS) is in your page. For each script summarised, DevTools will show the 'unused bytes'. This is code you can consider splitting out and lazy-loading when the user needs it.

The different kinds of code splitting

There are a few different approaches you can take when it comes to code splitting JavaScript. How much these apply to your site tends to vary depending on whether you wish to split up page/application 'logic' or split out libraries/frameworks from other 'vendors'.

Dynamic code splitting: Many of us 'statically' import JavaScript modules and dependencies so that they are bundled together into one file at build time. 'Dynamic' code splitting adds the ability to define points in your JavaScript that you would like to split out and lazy-load as needed. Modern JavaScript uses the dynamic import() statement to achieve this. We'll cover this more shortly.

Vendor code splitting: The frameworks and libraries you rely on (e.g. React, Angular, Vue or Lodash) in the scripts you send down to your users are unlikely to change as often as the 'logic' for your site.
To reduce the negative impact of cache invalidation for users returning to your site, you can split your 'vendors' into a separate script.

Entry-point code splitting: Entries are starting points in your site or app that a tool like Webpack can look at to build up its dependency tree. Splitting by entries is useful for pages where client-side routing is not used, or where you are relying on a combination of server-side and client-side rendering.

For our purposes in this article, we'll be concentrating on dynamic code splitting.

Get hands on with code splitting

Let's optimise the JavaScript performance of a simple application that sorts three numbers through code splitting. This is an app by my colleague Houssein Djirdeh. The workflow we'll be using to make our JavaScript load quickly is measure, optimise and monitor. Start here.

Measure performance

Before attempting to add any optimisations, we're first going to measure the performance of our JavaScript. As the magic sorter app is hosted on Glitch, we'll be using its coding environment. Here's how to go about it:

- Click the Show Live button.
- Open the DevTools by pressing CMD+OPTION+i / CTRL+SHIFT+i.
- Select the Network panel.
- Make sure Disable Cache is checked and reload the app.

This simple application seems to be using 71.2 KB of JavaScript just to sort through a few numbers. That certainly doesn't seem right. In our source src/index.js, the Lodash utility library is imported and we use sortBy, one of its sorting utilities, in order to sort our numbers. Lodash offers several useful functions, but the app only uses a single method from it. It's a common mistake to install and import all of a third-party dependency when in actual fact you only need to use a small part of it.

Optimise your bundle

There are a few options available for trimming our JavaScript bundle size:

- Write a custom sort method instead of relying on a third-party library.
- Use Array.prototype.sort(), which is built into the browser.
- Only import the sortBy method from Lodash instead of the whole library.
- Only download the code for sorting when a user needs it (when they click a button).

Options 1 and 2 are appropriate for reducing our bundle size; these probably make sense for a real application. For teaching purposes, we're going to try something different. Options 3 and 4 help improve the performance of the application.

Only import the code you need

We'll modify a few files to only import the single sortBy method we need from Lodash. Let's start with replacing our lodash dependency in package.json:

"lodash": "^4.7.0",

with this:

"lodash.sortby": "^4.7.0",

In src/index.js, we'll import this more specific module, replacing the old import _ from "lodash"; line:

import "./style.css";
import sortBy from "lodash.sortby";

Next, we'll update how the values get sorted, replacing the call to _.sortBy with sortBy:

form.addEventListener("submit", e => {
  e.preventDefault();
  const values = [input1.valueAsNumber, input2.valueAsNumber, input3.valueAsNumber];
  const sortedValues = sortBy(values);

  results.innerHTML = `
    <h2>
      ${sortedValues}
    </h2>
  `
});

Reload the magic numbers app, open up Developer Tools and look at the Network panel again. For this specific app, our bundle size was reduced by a scale of four with little work. But there's still much room for improvement.

JavaScript code splitting

Webpack is one of the most popular JavaScript module bundlers used by web developers today. It 'bundles' (combines) all your JavaScript modules and other assets into static files that web browsers can read. The single bundle in this application can be split into two separate scripts:

- One responsible for the code making up the initial route.
- Another containing our sorting code.

Using dynamic imports (with the import() keyword), the second script can be lazy-loaded on demand. In our magic numbers app, the code making up this script can be loaded as needed, when the user clicks the button.
We begin by removing the top-level import for the sort method in src/index.js:

import sortBy from "lodash.sortby";

and import it within the event listener that fires when the button is clicked:

form.addEventListener("submit", e => {
  e.preventDefault();
  import('lodash.sortby')
    .then(module => module.default)
    .then(sortInput())
    .catch(err => { alert(err) });
});

The dynamic import() feature we're using is part of a standards-track proposal for including the ability to dynamically import a module in the JavaScript language standard. Webpack already supports this syntax. You can read more about how dynamic imports work in this article.

The import() statement returns a Promise when it resolves. Webpack considers this a split point that it will break out into a separate script (or chunk). Once the module is returned, module.default is used to reference the default export provided by lodash. The Promise is chained with another .then() calling a sortInput method to sort the three input values. At the end of the Promise chain, .catch() is called to handle the case where the Promise is rejected as the result of an error.

In real production applications, you should handle dynamic import errors appropriately. Simple alert messages similar to the one used here may not provide the best user experience for letting users know something has gone wrong.

In case you see a linting error like "Parsing error: import and export may only appear at the top level", know that this is due to the dynamic import syntax not yet being finalised. Although Webpack supports it, the settings for ESLint (a JavaScript linting tool) used by Glitch have not been updated to include this syntax yet, but it does still work.

The last thing we need to do is write the sortInput method at the end of our file. This has to be a function returning a function that takes in the imported method from lodash.sortby.
The nested function can then sort the three input values and update the DOM:

const sortInput = () => {
  return (sortBy) => {
    const values = [
      input1.valueAsNumber,
      input2.valueAsNumber,
      input3.valueAsNumber
    ];
    const sortedValues = sortBy(values);

    results.innerHTML = `
      <h2>
        ${sortedValues}
      </h2>
    `
  };
}

Monitor the numbers

Now let's reload the application one last time and keep a close eye on the Network panel. You should notice how only a small initial bundle is downloaded when the app loads. After the button is clicked to sort the input numbers, the script/chunk containing the sorting code gets fetched and executed. Notice how the numbers still get sorted just as we would expect them to.

JavaScript code splitting and lazy-loading can be very useful for trimming down the initial bundle size of your app or site. This can directly result in faster page load times for users. Although we've looked at adding code splitting to a vanilla JavaScript application, you can also apply it to apps built with libraries or frameworks.

Lazy-loading with a JavaScript library or framework

A lot of popular frameworks support adding code splitting and lazy-loading using dynamic imports and Webpack. Here's how you might lazy-load a movie 'description' component using React (with React.lazy() and its Suspense feature) to provide a "Loading…" fallback while the component is being lazy-loaded (see here for some more details):

import React, { Suspense } from 'react';

const Description = React.lazy(() => import('./Description'));

function App() {
  return (
    <div>
      <h1>My Movie</h1>
      <Suspense fallback="Loading...">
        <Description />
      </Suspense>
    </div>
  );
}

Code splitting can help reduce the impact of JavaScript on your user experience. Definitely consider it if you have larger JavaScript bundles, and when in doubt, don't forget to measure, optimise and monitor.

This article was originally published in issue 317 of net, the world's best-selling magazine for web designers and developers.
IRC log of CSS on 2009-04-29 Timestamps are in UTC. 15:56:44 [RRSAgent] RRSAgent has joined #CSS 15:56:44 [RRSAgent] logging to 15:56:54 [dsinger] Zakim, who is here? 15:56:54 [Zakim] sorry, dsinger, I don't know what conference this is 15:56:55 [Zakim] On IRC I see RRSAgent, Zakim, emilyw, dsinger, sylvaing, MikeSmith, glazou_pain, Hixie, fantasai, Bert, trackbot, myakura, krijnh, arronei 15:56:57 [glazou_pain] dsinger: you have to use /invite, that's what I did 15:57:09 [dsinger] Zakim, this is style 15:57:09 [Zakim] ok, dsinger; that matches Style_CSS FP()12:00PM 15:57:28 [Zakim] + +95089aabb 15:57:29 [Zakim] - +1.408.398.aaaa 15:57:36 [glazou_pain] Zakim, aaaa is me 15:57:36 [Zakim] sorry, glazou_pain, I do not recognize a party named 'aaaa' 15:57:44 [Zakim] + +1.408.398.aacc 15:57:47 [glazou] Zakim, +aaaa is me 15:57:47 [Zakim] sorry, glazou, I do not recognize a party named '+aaaa' 15:57:48 [MikeSmith] MikeSmith has left #css 15:58:03 [glazou] Zakim, +95089aabb is me 15:58:03 [Zakim] +glazou; got it 15:58:05 [sylvaing] zakim, [Microsoft] has sylvaing, alexmog 15:58:05 [Zakim] +sylvaing, alexmog; got it 15:58:10 [dsinger] dsinger has joined #css 15:58:32 [dsinger] Zakim, who is here? 
15:58:32 [Zakim] On the phone I see [Microsoft], glazou, +1.408.398.aacc 15:58:33 [Zakim] [Microsoft] has sylvaing, alexmog 15:58:34 [Zakim] On IRC I see dsinger, RRSAgent, Zakim, emilyw, sylvaing, glazou, Hixie, fantasai, Bert, trackbot, myakura, krijnh, arronei 15:59:04 [dsinger] Zakim, i am +1.408.398.aacc 15:59:04 [Zakim] +dsinger; got it 15:59:10 [ChrisL] ChrisL has joined #css 15:59:16 [dsinger] Zakim, mute me 15:59:16 [Zakim] dsinger should now be muted 15:59:18 [glazou] so we have regrets from szilles, anne, molly, dbaron and probably plinss too 15:59:26 [glazou] hi ChrisL 15:59:31 [ChrisL] hi daniel 15:59:53 [Zakim] +Bert 16:00:03 [Zakim] +ChrisL 16:01:22 [alexmog] alexmog has joined #css 16:02:26 [Zakim] +??P24 16:02:32 [fantasai] Zakim, ??P24 is fantasai 16:02:32 [Zakim] +fantasai; got it 16:05:02 [dsinger] Which module? 16:05:24 [ChrisL] scribenick: chrisl 16:05:50 [ChrisL] regrets: anne, molly, david, steve 16:06:03 [ChrisL] topic: column-break 16:07:04 [fantasai] 16:07:07 [ChrisL] 16:07:17 [Zakim] -fantasai 16:07:37 [Zakim] +??P24 16:07:43 [ChrisL] am: showed three combinations in an email 16:07:44 [fantasai] Zakim, ??P24 is fantasai 16:07:44 [Zakim] +fantasai; got it 16:07:56 [ChrisL] zakim, who is here? 16:07:56 [Zakim] On the phone I see [Microsoft], glazou, dsinger (muted), Bert, ChrisL, fantasai 16:07:58 [Zakim] [Microsoft] has sylvaing, alexmog 16:07:59 [Zakim] On IRC I see alexmog, ChrisL, dsinger, RRSAgent, Zakim, emilyw, sylvaing, glazou, Hixie, fantasai, Bert, trackbot, myakura, krijnh, arronei 16:08:48 [ChrisL] am: page-break-avoid and column-break-avoid, all combinations make sense 16:09:10 [ChrisL] ... supposr 2 col layout, something is a column and a half wide 16:09:32 [ChrisL] ... if we had two separate properties, column-break-avoid would make it start in the second column 16:09:45 [ChrisL] ... page-break-avoid would move it to the next page 16:09:54 [ChrisL] ... if they were totallyy separate 16:10:19 [ChrisL] ... 
however, a single break property would move things to the next column but not necessarily the next page 16:10:37 [ChrisL] dg: so you would get a blank page 16:10:47 [ChrisL] am: or a blank column before the next page break 16:11:06 [fantasai] break-inside: avoid | avoid-column | avoid-page 16:11:09 [fantasai] would give you all combinations 16:11:55 [ChrisL] am: if we think page break is always a column break, then its hard to say thata page break is avoided but ok to start im mind column 16:12:01 [ChrisL] s/mind/mid/ 16:12:36 [ChrisL] am: close to the opinion that its okay to have separate column and page properties 16:13:12 [ChrisL] el: one property (as above) would do it as well as long as all combinations are listed 16:13:26 [dsinger] Can we lay out all cases? Near end of girst col, near end of second 16:13:47 [dsinger] Pb avoid, cb avoid 16:13:47 [anne] anne has joined #css 16:13:59 [dsinger] Pb+cb avoid? 16:14:14 [ChrisL] am: advantage of separate properties is that you avoid first column breaks then page breaks 16:14:47 [ChrisL] dg: also an issue of readability 16:15:22 [ChrisL] el: can be readable with one property, with good choice of values. encourages people to think about pages when designing columns 16:15:32 [dsinger] A break over page would violate cb avoid? 16:15:42 [ChrisL] dg: these are being confused 16:16:08 [ChrisL] bb: more interesting question, they are semi independent so all combinations need to be considered either way 16:16:21 [ChrisL] dg: some comninations will be unused 16:16:30 [ChrisL] bb: is there a list of all the combinations? 16:16:41 [ChrisL] am: email did not listall of them 16:17:38 [glazou] 16:18:17 [ChrisL] dg: avoid means 'try to avoid' 16:18:38 [ChrisL] am: most common pattern is to avoid all breaks. 
16:19:22 [ChrisL] dg: column should take precedence over pages 16:20:56 [ChrisL] am: some people think there are only two combinations, but differ on which two those are 16:21:03 [ChrisL] el: happy to define all three 16:21:31 [ChrisL] am: avoid colum, page, both makes sense to me 16:22:01 [ChrisL] bb: agree with elika, define all three even though one is not useful 16:22:26 [ChrisL] ... avoid-both is ok, if you avoid a column break also avoids a column break 16:22:36 [dsinger] dsinger has joined #css 16:22:49 [ChrisL] el: no, avoiding both means you prioritise avoiding page breaks over column breaks 16:22:50 [Zakim] + +1.408.996.aadd 16:23:02 [Zakim] -dsinger 16:23:21 [dsinger] zakim, +1.408.996.aadd is [apple] 16:23:21 [Zakim] +[apple]; got it 16:23:24 [glazou] ok 16:23:26 [dsinger] zakim, [apple] has dsinger 16:23:26 [Zakim] +dsinger; got it 16:24:26 [dsinger] right, col1 of page 2 is not col2 of page 1 16:24:33 [ChrisL] cl: a page break always produces a new colum break 16:25:01 [ChrisL] bb: if its too long then there is no need to push it anywhere 16:25:16 [ChrisL] am: avoid is not forbid. its 'attempt to not break" 16:26:35 [ChrisL] cl: no way to say 'minimise the total number of breaks' 16:26:50 [ChrisL] am: good point, can be complex to optimise for that though 16:27:23 [ChrisL] sg: see example with avoid-column 16:27:25 [fantasai] am: I would prefer to specify that you try to lay out, and if it doesn't fit, you push to the top of the next column 16:28:13 [ChrisL] am: choice of keeping "most" of the article together 16:28:37 [ChrisL] ... prefer a break art the end rather than a break near the start 16:29:00 [ChrisL] am: page break is always a column break as well. that has to be made clear 16:30:02 [ChrisL] el: i agree with alex. want avoid to mean 'try layout then push over a break'. more complex stuff needs different keeywords. 
avoid behaviour is simple and useful so is what we should do now 16:30:19 [ChrisL] bb: seems fine 16:30:39 [ChrisL] dg: seem close to consensus 16:31:15 [ChrisL] el: page break inside option does not work, 16:31:35 [ChrisL] ... introducing a shorthand that combines both column and page is the best option 16:31:54 [ChrisL] am: cleaner solution to forget the old property 16:32:02 [ChrisL] el: have to support the old property 16:32:11 [ChrisL] am: yes but avoid in new documents 16:32:21 [fantasai] 16:32:36 [szilles] szilles has joined #css 16:32:51 [ChrisL] (consensus seems to be reached) 16:33:21 [Zakim] +SteveZ 16:33:49 [fantasai] el: We're down to either alias or shorthand 16:33:59 [ChrisL] el so we eliminate the first of melindas options but have to choose between 2 and 3 16:34:08 [ChrisL] bb: shorthand seems like overkill 16:34:18 [ChrisL] am: fine with either 16:34:55 [ChrisL] dg: would like to see a summing up and final proposal 16:35:31 [ChrisL] el: can work with hakon to propose something that covers all three combinations, need to pick 2 or 3 16:35:43 [ChrisL]. 16:35:54 [ChrisL] 3. Define 'break-before', 'break-after', and 'break-inside' as aliases to 'page-break-before', 'page-break-after', and 'page-break-inside'. 16:36:38 [ChrisL] am: does the alias mean all the values apply to the old properties? 
16:37:06 [ChrisL] el: no, one is a superset of the others 16:37:24 [ChrisL] am: preferable from an implementor standpoint to allow all the properties 16:37:37 [ChrisL] el: 'always' property would be a problem 16:37:45 [fantasai] s/property/value/ 16:37:47 [ChrisL] am: ok so i prefer a new set of properties 16:37:52 [ChrisL] el: so do I 16:38:30 [ChrisL] cl: so everyone seems to like melindas option 2 best 16:39:10 [ChrisL] resolution: Add three new column-breaking properties per melindas email oprtion 2 16:39:28 [fantasai] /resolution/RESOLVED/ 16:39:41 [ChrisL] action: fantasai work with hakon on spec text to define the column-break properties and interaction with page bbreak properties 16:39:41 [trackbot] Created ACTION-141 - Work with hakon on spec text to define the column-break properties and interaction with page bbreak properties [on Elika Etemad - due 2009-05-06]. 16:40:03 [ChrisL] topic: email from svg on image fit 16:40:10 [glazou] 16:40:33 [ChrisL] "The naming was briefly discussed in another SVG telcon[1], and the conclusion was that the SVG WG prefers the naming 'content-fit' and 'content-position' because of the reasons already mentioned above. 16:40:33 [ChrisL] " 16:41:00 [ChrisL] el: concern is that for css, this only appies to images while the name implies it applies more widely eg to text content 16:41:09 [ChrisL] ... but cant comu up with a better name 16:41:47 [ChrisL] dg: don't see a clash with the content property, but could live with it 16:41:58 [ChrisL] el: wonder if we should ask for better names 16:42:09 [ChrisL] dg: ack the problem and ask for a better name 16:42:40 [ChrisL] action: daniel respond to agreeing there is a problem but asking for a better name 16:42:40 [trackbot] Created ACTION-142 - Respond to agreeing there is a problem but asking for a better name [on Daniel Glazman - due 2009-05-06]. 16:43:07 [ChrisL] topic: almost-ready specs 16:43:19 [ChrisL] dg: what can we move to PR? 
16:43:41 [ChrisL] dg: chris reported progress with implementations agfainst the colour module tests 16:43:56 [ChrisL] dg: we are seen as very slow and need to publish and move forward 16:44:04 [ChrisL] dg: other candidates? 16:44:11 [ChrisL] el: namespaces? 16:44:31 [ChrisL] ... a parsing bug and one test is failing 16:44:52 [ChrisL] dg: all implementations fail? 16:45:10 [ChrisL] dg; who is doing implementation reports? 16:45:24 [ChrisL] el: easy to do once implementations pass, test suite is not very long 16:46:08 [ChrisL] dg: discussed media queries with anne, he thinks some will be untestable as we do not have suitable devices and that testing on desktop is enough 16:46:26 [ChrisL] dg: concerned that we need to test on mono and character-cell devices 16:46:50 [ChrisL] dg: desktop ones seem to be interoperable at this time, but some features do not apply 16:47:00 [ChrisL] el: do we have any implementations for grid? 16:47:03 [ChrisL] dg: no 16:47:46 [ChrisL] el: should make an imp report for dersktop and survey what other devices actually exist. at end of 6 months if there are no implementations of some features or devices we can drop them from the spec 16:48:21 [ChrisL] bb: not honest to say we pass a test if there are no implementations 16:48:40 [ChrisL] ... some implementatiosn can emullate and always pass 16:48:52 [ChrisL] ... 
currently no features at risk 16:49:50 [ChrisL] cl: prefer to do an imp report then mark features at risk and republish 16:50:05 [ChrisL] sz: only need to claim to be a device, not to actually be that device 16:50:21 [ChrisL] dg: yes but not implementation claims to be a grid for example 16:51:09 [ChrisL] sz: only issue in testing is if the right selection was made, not whether it then goes on to lay outcorrectly 16:51:33 [ChrisL] dg: will not agree to implement like that and claim to have a feature that they in fact dfon't have 16:51:42 [ChrisL] s/dfo/do/ 16:52:10 [Bert] If we test '@media (grid)' and '@media not (grid)' and Opera does the right thing for both, sin't that enough? 16:52:17 [ChrisL] dg: can test for not-grid 16:52:31 [ChrisL] el: probably sufficient. 16:53:01 [ChrisL] ... will have to say its sufficient 16:54:00 [ChrisL] dg: anne had some tests for desktop only. no tests for other devices. WG should look at tests from Anne and contribute more 16:54:19 [ChrisL] ... as soon as we have tests, we can move forward 16:54:33 [ChrisL] cl: where are anne's tests? 16:54:58 [Bert] 16:55:07 [ChrisL] not listed on 16:55:25 [ChrisL] bb: because not reviewed yet 16:55:46 [ChrisL] dg: will try to review for next week 16:56:26 [ChrisL] cl: cdf testsuite has media query tests which could be re-used 16:56:32 [plinss_] plinss_ has joined #css 16:56:39 [ChrisL] sz: snapshot? 16:56:46 [ChrisL] el: depends on 2.1 and selectors 16:57:31 [ChrisL] sz: snapshot is important as it actually defines the current state 16:58:14 [ChrisL] dg: don't think the snapshot is very useful 16:58:40 [dsinger] well, there will be browsers that are interoperable on defined modules... 
16:58:47 [ChrisL] rrsagent, make logs public 16:58:52 [ChrisL] zakim, list attendees 16:58:52 [Zakim] As of this point the attendees have been +1.408.398.aaaa, +1.408.398.aacc, glazou, sylvaing, alexmog, dsinger, Bert, ChrisL, fantasai, SteveZ 16:58:58 [Zakim] -SteveZ 16:59:00 [Zakim] -ChrisL 16:59:03 [dsinger] bye 16:59:06 [Zakim] -[Microsoft] 16:59:10 [Zakim] -[apple] 16:59:13 [Zakim] -Bert 16:59:16 [Zakim] -fantasai 16:59:22 [ChrisL] rrsagent, make minutes 16:59:22 [RRSAgent] I have made the request to generate ChrisL 16:59:49 [ChrisL] meetiing: CSS WG telcon 16:59:55 [ChrisL] chair: Daniel 17:00:44 [ChrisL] agenda: (member only) 17:00:47 [ChrisL] rrsagent, make minutes 17:00:47 [RRSAgent] I have made the request to generate ChrisL 17:01:37 [ChrisL] Agenda: (member only) 17:01:49 [ChrisL] Chair: Daniel 17:01:57 [ChrisL] Meetiing: CSS WG telcon 17:02:00 [ChrisL] rrsagent, make minutes 17:02:01 [RRSAgent] I have made the request to generate ChrisL 17:03:15 [Zakim] -glazou 17:03:17 [Zakim] Style_CSS FP()12:00PM has ended 17:03:21 [Zakim] Attendees were +1.408.398.aaaa, +1.408.398.aacc, glazou, sylvaing, alexmog, dsinger, Bert, ChrisL, fantasai, SteveZ 17:09:21 [alexmog] alexmog has joined #css 17:47:04 [shepazu] shepazu has joined #css 18:59:34 [Zakim] Zakim has left #CSS 20:01:42 [alexmog] alexmog has joined #css 20:09:22 [sylvaing] sylvaing has joined #css 20:35:30 [jdaggett] jdaggett has joined #css 21:03:53 [annevk] annevk has joined #css 21:30:54 [Lachy] Lachy has joined #css 21:55:56 [sylvaing] sylvaing has joined #css 23:13:04 [annevk] annevk has joined #css 23:22:16 [jdaggett] jdaggett has joined #css
http://www.w3.org/2009/04/29-CSS-irc
CC-MAIN-2016-07
refinedweb
2,810
60.52
Python Programming, news on the Voidspace Python Projects and all things techie.

Implementing __dir__ (and finding bugs in Pythons)

A new magic method was added in Python 2.6 to allow objects to customise the list of attributes returned by dir. The new protocol method (I don't really like the term "magic method" but it is so entrenched both in the Python community and in my own mind) is __dir__. From the docs:

    This allows objects that implement a custom __getattr__() or __getattribute__() function to customize the way dir() reports their attributes.

mock.Mock() is one such object that provides attributes dynamically and I thought it would be good if dir(mock) correctly reported the attributes it had created. Plus, if a mock is created with a spec (so that only attributes available on the original object are available on the mock), then all available attributes (even if they haven't been created yet) should be reported.

So a pretty standard way of implementing __dir__ would seem to be thusly; take all the standard attributes normally reported by dir and add the dynamically created ones. It turns out that isn't so easy, because if you're providing __dir__ there is no way to get the list of "standard attributes normally reported by dir". There is no object.__dir__ to call up to, and calling dir(self) causes infinite recursion (of course!).

The strategy I went for was to call dir on the type (so get all the class attributes), add anything in the instance __dict__ and finally add the dynamically created attributes. Throw the whole mix into a set to remove duplicates and then return a sorted list of the results:

```python
def __dir__(self):
    return sorted(set(dir(type(self)) +
                      list(self.__dict__) +
                      self._get_dynamic_attributes()))
```

Of course this doesn't play well with multiple inheritance and is just a touch ugly.
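To make the strategy concrete, here is a small runnable sketch; the class and its dynamic attributes are hypothetical stand-ins, not mock's actual code:

```python
class Mockish:
    """Hypothetical stand-in for an object that creates attributes dynamically."""

    def __init__(self):
        self._children = {"foo": 1, "bar": 2}

    def _get_dynamic_attributes(self):
        return list(self._children)

    def __dir__(self):
        # class attributes + instance __dict__ + dynamically created names,
        # deduplicated with a set and returned as a sorted list
        return sorted(set(dir(type(self)) +
                          list(self.__dict__) +
                          self._get_dynamic_attributes()))

names = dir(Mockish())
assert "foo" in names and "bar" in names
assert "_get_dynamic_attributes" in names
```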
It would be far nicer to be able to do (using the Python 3 super calling convention):

```python
def __dir__(self):
    standard = super().__dir__()
    return standard + self._get_dynamic_attributes()
```

Thankfully other folk on the python-ideas list either agreed or didn't disagree (which is a rare thing!), so it might happen for Python 3.3.

Whilst fiddling with implementing and testing __dir__ in mock I had an obscurely failing test on pypy. Most dir(mock) calls behaved as expected, but when using a module as a spec __dir__ wouldn't be called. For example: dir(Mock(spec=sys)). If you create a mock with a spec then mock.__class__ returns the class of the spec object, so that isinstance(mock, SpecType) still passes. pypy implements dir (in Python) and special cases modules using isinstance (only the contents of the module and not module attributes are reported). This means that a mock with a module as a spec doesn't have __dir__ called because the pypy implementation of dir thinks that it is a module. Although obscure this is also a problem for a subclass of ModuleType that has a custom __dir__ implementation. Benjamin Peterson happened to be online when I reported it and fixed it within minutes - by moving the check for __dir__ ahead of the check for a module.

Whilst looking at the source code to the pypy dir implementation Benjamin and I both noticed another issue that affects both pypy and CPython:

```python
>>> class foo(type):
...     def __dir__(self):
...         return ['f']
...
>>> class bar(object):
...     __metaclass__ = foo
...
>>> dir(bar)
['f']
>>> dir(bar())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __dir__() takes exactly 1 argument (2 given)
```

(The same thing happens with Python 3 - just use class bar(metaclass=foo): instead). A class that has a metaclass implementing __dir__ will blow up when you call dir on instances of the class. It claims that you passed two arguments to __dir__!
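As an aside, on a modern interpreter (Python 3.3+, where object grew a default __dir__ of its own) this particular crash no longer reproduces, but the class-versus-metaclass lookup is still visible. A quick check, with hypothetical names:

```python
class Meta(type):
    def __dir__(cls):
        return ['f']

class Bar(metaclass=Meta):
    pass

# dir() on the class looks __dir__ up on type(Bar), i.e. the metaclass
assert dir(Bar) == ['f']

# dir() on an instance finds object.__dir__ on Bar's own MRO instead,
# so the metaclass method is never called and nothing blows up
assert 'f' not in dir(Bar())
```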
This is because both pypy and CPython look up the __dir__ method by doing the equivalent of type(obj).__dir__(obj) (or in the case of pypy exactly that). They do the lookup on the class rather than the instance because this is how protocol methods are supposed to be fetched. Fetching the method from the class returns an unbound method (or in Python 3 a normal function - which is a rant for another day), so it takes an instance of the class as the first argument.

Unfortunately you can look up metaclass methods on a type, so if the class doesn't have a __dir__, but the metaclass does, then type(obj).__dir__ returns the metaclass method. What is more, the class is an instance of the metaclass, so the method returned is a bound method, with the class already bound as the first argument. When this method is then called with the original object as a second argument it rightly complains.

```python
>>> bar.__dir__
<bound method foo.__dir__ of <class '__main__.bar'>>
>>> bar.__dir__()
['f']
>>> type(bar()).__dir__
<bound method foo.__dir__ of <class '__main__.bar'>>
```

I'm pretty sure that Benjamin has now fixed this too, in both pypy and CPython.

So the custom __dir__ will be in mock 0.8.0, which has several great new features. There'll be an alpha release soon for you to play with. Whilst working on those features I found a bug in CPython, a bug in pypy and a bug in jython. The bug in jython was in 2.5.1, and already fixed in 2.5.2, but it's still nice to be able to say that in one day I found bugs in three implementations of Python!

The specific problem was that rich comparisons of Python classes and objects were returning different results from CPython. My tests for rich comparisons (for the new magic method mocking support in 0.7.0) were then failing with jython 2.5.1.

In jython 2.5.1:

```python
>>> class Foo(object): pass
...
>>> Foo < object
True
```

And in CPython:

```python
>>> class Foo(object): pass
...
>>> Foo < object
False
```

Onto pypy.
Booleans in Python subclass integer, which is sucky but that's another discussion and isn't going to change anyway. So they have integer attributes, for example True.real == 1. In pypy 1.5, True.real == True. This triggered a bug in the new mock code and caused infinite recursion - so it was a useful difference. However, this was a more general bug in pypy. To match CPython behaviour, these int / long / float attributes on subclasses should return straight ints, longs or floats. In pypy 1.5 they return instances of the subclass instead. This was fixed by Armin Rigo within ten minutes of me reporting the issue. (The report was accompanied by Maciej complaining that the real problem is that CPython doesn't have tests for this behaviour, which is why it is different in pypy - which is a fair point.)

And finally the CPython bug. In CPython (2.x only) re pattern objects don't have a __class__ attribute (well, technically they do, but they raise an AttributeError when you try to access it).

```python
>>> import re
>>> re.compile('foo')
<_sre.SRE_Pattern object at 0x1043230>
>>> p = re.compile('foo')
>>> p.__class__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: __class__
```

This is because of a broken tp_getattr slot implementation. This was fixed (for Python 2.7.2) by Benjamin Peterson about fifteen minutes after I reported it. So the Jython guys win by fixing the bug even before I discovered it.

Posted by Fuzzyman on 2011-05-25 11:04:51 | Categories: Hacking, Python, Projects | Tags: mock, jython, pypy

This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
http://www.voidspace.org.uk/python/weblog/arch_d7_2011_05_21.shtml
Short tutorial

Here we provide a short tutorial that guides you through the main features of Snakemake. Note that this is not suited to learn Snakemake from scratch, rather to give a first impression. To really learn Snakemake (starting from something simple, and extending towards advanced features), use the main Snakemake Tutorial.

This document shows all steps performed in the official Snakemake live demo, such that it becomes possible to follow them at your own pace. Solutions to each step can be found at the bottom of this document.

The examples presented in this tutorial come from Bioinformatics. However, Snakemake is a general-purpose workflow management system for any discipline. For an explanation of the steps you will perform here, have a look at Background. More thorough explanations are provided in the full Snakemake Tutorial.

Prerequisites

First, install Snakemake via Conda, as outlined in Installation via Conda/Mamba. The minimal version of Snakemake is sufficient for this demo.

Second, download and unpack the test data needed for this example from here, e.g., via

```
mkdir snakemake-demo
cd snakemake-demo
wget
tar --wildcards -xf v5.4.5.tar.gz --strip 1 "*/data"
```

Step 1

First, create an empty workflow in the current directory with:

```
touch Snakefile
```

Once a Snakefile is present, you can perform a dry run of Snakemake with:

```
snakemake -n
```

Since the Snakefile is empty, it will report that nothing has to be done. In the next steps, we will gradually fill the Snakefile with an example analysis workflow.

Step 2

The data folder in your working directory looks as follows:

```
data
├── genome.fa
├── genome.fa.amb
├── genome.fa.ann
├── genome.fa.bwt
├── genome.fa.fai
├── genome.fa.pac
├── genome.fa.sa
└── samples
    ├── A.fastq
    ├── B.fastq
    └── C.fastq
```

You will create a workflow that maps the sequencing samples in the data/samples folder to the reference genome data/genome.fa. Then, you will call genomic variants over the mapped samples, and create an example plot.
First, create a rule called bwa, with input files

```
data/genome.fa
data/samples/A.fastq
```

and output file

```
mapped/A.bam
```

To generate output from input, use the shell command

```
"bwa mem {input} | samtools view -Sb - > {output}"
```

Providing a shell command is not enough to run your workflow on an unprepared system. For reproducibility, you also have to provide the required software stack and define the desired version. This can be done with the Conda package manager, which is directly integrated with Snakemake: add a directive conda: "envs/mapping.yaml" that points to a Conda environment definition, with the following content

```
channels:
  - bioconda
  - conda-forge
dependencies:
  - bwa =0.7.17
  - samtools =1.9
```

Upon execution, Snakemake will automatically create that environment, and execute the shell command within.

Now, test your workflow by simulating the creation of the file mapped/A.bam via

```
snakemake --use-conda -n mapped/A.bam
```

to perform a dry-run and

```
snakemake --use-conda mapped/A.bam --cores 1
```

to perform the actual execution.

Step 3

Now, generalize the rule bwa by replacing the concrete sample name A with a wildcard {sample} in the input and output files of the rule bwa. This way, Snakemake can apply the rule to map any of the three available samples to the reference genome. Test this by creating the file mapped/B.bam.

Step 4

Next, create a rule sort that sorts the obtained .bam file by genomic coordinate. The rule should have the input file

```
mapped/{sample}.bam
```

and the output file

```
mapped/{sample}.sorted.bam
```

and uses the shell command

```
samtools sort -o {output} {input}
```

to perform the sorting. Moreover, use the same conda: directive as for the previous rule.

Test your workflow with

```
snakemake --use-conda -n mapped/A.sorted.bam
```

and

```
snakemake --use-conda mapped/A.sorted.bam --cores 1
```

Step 5

Now, we aggregate over all samples to perform a joint calling of genomic variants. First, we define a variable

```
samples = ["A", "B", "C"]
```

at the top of the Snakefile.
This serves as a definition of the samples over which we would want to aggregate. In real life, you would want to use an external sample sheet or a config file for things like this. For aggregation over many files, Snakemake provides the helper function expand (see the docs).

Create a rule call with input files

```
fa="data/genome.fa"
bam=expand("mapped/{sample}.sorted.bam", sample=samples)
```

output file

```
"calls/all.vcf"
```

and shell command

```
samtools mpileup -g -f {input.fa} {input.bam} | bcftools call -mv - > {output}
```

Further, define a new conda environment file with the following content:

```
channels:
  - bioconda
  - conda-forge
dependencies:
  - bcftools =1.9
  - samtools =1.9
```

Step 6

Finally, we strive to calculate some exemplary statistics. This time, we don't use a shell command, but rather employ Snakemake's ability to integrate with scripting languages like R and Python.

First, we create a rule stats with input file

```
"calls/all.vcf"
```

and output file

```
"plots/quals.svg"
```

Instead of a shell command, we write

```
script: "scripts/plot-quals.py"
```

and create the corresponding script and its containing folder in our working directory with

```
mkdir scripts
touch scripts/plot-quals.py
```

We open the script in the editor and add the following content

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from pysam import VariantFile

quals = [record.qual for record in VariantFile(snakemake.input[0])]
plt.hist(quals)
plt.savefig(snakemake.output[0])
```

As you can see, instead of writing a command line parser for passing parameters like input and output files, you have direct access to the properties of the rule via a magic snakemake object, that Snakemake automatically inserts into the script before executing the rule.
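As an aside, the behaviour of the expand() helper used above can be approximated in a few lines of plain Python. This is a simplified sketch for intuition, not Snakemake's actual implementation:

```python
from itertools import product

def expand(pattern, **wildcards):
    # Substitute every combination of wildcard values into the pattern
    # (simplified sketch; not Snakemake's real expand()).
    keys = list(wildcards)
    return [pattern.format(**dict(zip(keys, combo)))
            for combo in product(*(wildcards[k] for k in keys))]

files = expand("mapped/{sample}.sorted.bam", sample=["A", "B", "C"])
assert files == ["mapped/A.sorted.bam",
                 "mapped/B.sorted.bam",
                 "mapped/C.sorted.bam"]
```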
Finally, we have to define a conda environment for the rule, say envs/stats.yaml, that provides the required Python packages to execute the script:

```
channels:
  - bioconda
  - conda-forge
dependencies:
  - pysam =0.15
  - matplotlib =3.1
  - python =3.8
```

Make sure to test your workflow with

```
snakemake --use-conda plots/quals.svg --cores 1
```

Step 7

So far, we have always specified a target file at the command line when invoking Snakemake. When no target file is specified, Snakemake tries to execute the first rule in the Snakefile. We can use this property to define default target files.

At the top of your Snakefile define a rule all, with input files

```
"calls/all.vcf"
"plots/quals.svg"
```

and neither a shell command nor output files. This rule simply serves as an indicator of what shall be collected as results.

Step 8

As a last step, we strive to annotate our workflow with some additional information.

Automatic reports

Snakemake can automatically create HTML reports with

```
snakemake --report report.html
```

Such a report contains runtime statistics, a visualization of the workflow topology, used software and data provenance information. In addition, you can mark any output file generated in your workflow for inclusion into the report. It will be encoded directly into the report, such that it can be, e.g., emailed as a self-contained document. The reader (e.g., a collaborator of yours) can at any time download the enclosed results from the report for further use, e.g., in a manuscript you write together.

In this example, please mark the output file "plots/quals.svg" for inclusion by replacing it with report("plots/quals.svg", caption="report/calling.rst") and adding a file report/calling.rst, containing some description of the output file. This description will be presented as caption in the resulting report.

Threads

The first rule bwa can in theory use multiple threads. You can make Snakemake aware of this, such that the information can be used for scheduling.
Add a directive threads: 8 to the rule and alter the shell command to

```
bwa mem -t {threads} {input} | samtools view -Sb - > {output}
```

This passes the threads defined in the rule as a command line argument to the bwa process.

Temporary files

The output of the bwa rule becomes superfluous once the sorted version of the .bam file is generated by the rule sort. Snakemake can automatically delete the superfluous output once it is not needed anymore. For this, mark the output as temporary by replacing "mapped/{sample}.bam" in the rule bwa with temp("mapped/{sample}.bam").

Solutions

Only read this if you have a problem with one of the steps.

Step 2

The rule should look like this:

```
rule bwa:
    input:
        "data/genome.fa",
        "data/samples/A.fastq"
    output:
        "mapped/A.bam"
    conda:
        "envs/mapping.yaml"
    shell:
        "bwa mem {input} | samtools view -Sb - > {output}"
```

Step 3

The rule should look like this:

```
rule bwa:
    input:
        "data/genome.fa",
        "data/samples/{sample}.fastq"
    output:
        "mapped/{sample}.bam"
    conda:
        "envs/mapping.yaml"
    shell:
        "bwa mem {input} | samtools view -Sb - > {output}"
```

Step 4

The rule should look like this:

```
rule sort:
    input:
        "mapped/{sample}.bam"
    output:
        "mapped/{sample}.sorted.bam"
    conda:
        "envs/mapping.yaml"
    shell:
        "samtools sort -o {output} {input}"
```

Step 5

The rule should look like this:

```
samples = ["A", "B", "C"]

rule call:
    input:
        fa="data/genome.fa",
        bam=expand("mapped/{sample}.sorted.bam", sample=samples)
    output:
        "calls/all.vcf"
    conda:
        "envs/calling.yaml"
    shell:
        "samtools mpileup -g -f {input.fa} {input.bam} | "
        "bcftools call -mv - > {output}"
```

Step 6

The rule should look like this:

```
rule stats:
    input:
        "calls/all.vcf"
    output:
        "plots/quals.svg"
    conda:
        "envs/stats.yaml"
    script:
        "scripts/plot-quals.py"
```

Step 7

The rule should look like this:

```
rule all:
    input:
        "calls/all.vcf",
        "plots/quals.svg"
```

It has to appear as first rule in the Snakefile.
Step 8

The complete workflow should look like this:

```
samples = ["A", "B"]

rule all:
    input:
        "calls/all.vcf",
        "plots/quals.svg"

rule bwa:
    input:
        "data/genome.fa",
        "data/samples/{sample}.fastq"
    output:
        temp("mapped/{sample}.bam")
    conda:
        "envs/mapping.yaml"
    threads: 8
    shell:
        "bwa mem -t {threads} {input} | samtools view -Sb - > {output}"

rule sort:
    input:
        "mapped/{sample}.bam"
    output:
        "mapped/{sample}.sorted.bam"
    conda:
        "envs/mapping.yaml"
    shell:
        "samtools sort -o {output} {input}"

rule call:
    input:
        fa="data/genome.fa",
        bam=expand("mapped/{sample}.sorted.bam", sample=samples)
    output:
        "calls/all.vcf"
    conda:
        "envs/calling.yaml"
    shell:
        "samtools mpileup -g -f {input.fa} {input.bam} | "
        "bcftools call -mv - > {output}"

rule stats:
    input:
        "calls/all.vcf"
    output:
        report("plots/quals.svg", caption="report/calling.rst")
    conda:
        "envs/stats.yaml"
    script:
        "scripts/plot-quals.py"
```
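As a closing aside, the wildcard matching from Step 3 can be pictured as regular-expression matching: conceptually, each {sample} acts like a named capture group. This is a simplified illustration, not Snakemake's actual matching code:

```python
import re

# The output pattern "mapped/{sample}.bam" viewed as a regex
pattern = re.compile(r"mapped/(?P<sample>.+)\.bam")

match = pattern.fullmatch("mapped/B.bam")
assert match is not None and match.group("sample") == "B"

# The captured value is what gets substituted into the rule's input files
assert ("data/samples/{sample}.fastq".format(**match.groupdict())
        == "data/samples/B.fastq")
```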
https://snakemake.readthedocs.io/en/stable/tutorial/short.html
The current implementation of XMLStreamReader does not attempt to "fix up" the XML to reflect a well formed document. It is designed to be very fast and require low overhead. You say that "...prefixes and namespaces default to null for all elements." As far as I know, prefixes are preserved for elements, but there is a bug for attributes where prefixes are not preserved. And, elements and prefixes should also have the proper namespace reflected. Could you be more specific about this statement? The prefix tracking thread is a different issue. It is about the lack of prefix support in v1. The feature I think you are looking for is a XMLStreamReader with save semantics (available via an option, perhaps). Getting the saver working with the new store is a high priority for me. I have considered a version of XMLStreamReader which uses it. This should give you what I think you are asking for: namespace attributes will be made available through the XMLStreamReader to reflect more well formed, "correct" XML document. - Eric -----Original Message----- From: David Waite [mailto:mass@akuma.org] Sent: Thursday, February 12, 2004 9:46 PM To: xmlbeans-dev@xml.apache.org Subject: v2 XMLStreamReader namespace issue One issue I have encountered using the XMLStreamReader support in v2 is that namespaces are not created in the process, so prefixes and namespaces default to null for all elements. My current test is rather simple, it creates an XMLEventWriter, creates an XMLStreamReader off of an XMLObject and wraps that up in an XMLEventReader, and calls writer.add , passing in the event reader. The StAX implementation used is the 0.7 reference implementation by BEA. With the change from before it appears that the document now finishes output, but without any namespaces. Is there any workaround for this effect I can do in my code? This seems related to the recent "YAP - Prefix Tracking" thread. 
-David Waite - --------------------------------------------------------------------- To unsubscribe, e-mail: xmlbeans-dev-unsubscribe@xml.apache.org For additional commands, e-mail: xmlbeans-dev-help@xml.apache.org Apache XMLBeans Project -- URL:
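To see the behavior the thread is asking for — a namespace-aware StAX reader reporting prefixes and namespace URIs — here is a small JDK-only sketch. It uses only javax.xml.stream (shipped with the JDK), not XMLBeans; the class name NamespaceProbe is made up for illustration:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class NamespaceProbe {
    // Parse a tiny document and report the prefix and namespace URI of its
    // root element. With a conforming, namespace-aware XMLStreamReader these
    // are populated; the XMLBeans v2 reader discussed above returned null.
    public static String[] probe(String xml) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(new StringReader(xml));
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                return new String[] { reader.getPrefix(), reader.getNamespaceURI() };
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String[] result = probe("<p:doc xmlns:p=\"urn:example\"/>");
        System.out.println(result[0] + " " + result[1]); // prints: p urn:example
    }
}
```

Comparing this output against what the XMLBeans-backed XMLStreamReader produces is a quick way to reproduce the issue David describes.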
http://mail-archives.apache.org/mod_mbox/xml-xmlbeans-dev/200402.mbox/%3C4B2B4C417991364996F035E1EE39E2E10D8E5E@uskiex01.amer.bea.com%3E
CC-MAIN-2017-26
refinedweb
343
58.99
Using a TreeView If, however, you choose to use a TreeView control to display the data you select, you will find that there is no convenient interface that you can use to keep your code from changing. For each node you want to create, you create an instance of the TreeNode class and add it to the root TreeNode collection or to the Nodes collection of another node. In our example version using the TreeView, we'll add the swimmer's name to the root node collection and the swimmer's time as a subsidiary node. Here is the entire TreeAdapter class.

public class TreeAdapter : LstAdapter {
    private TreeView tree;
    //------
    public TreeAdapter(TreeView tr) {
        tree = tr;
    }
    public void Add(Swimmer sw) {
        TreeNode nod;
        //add a root node
        nod = tree.Nodes.Add(sw.getName());
        //add a child node to it
        nod.Nodes.Add(sw.getTime().ToString());
        tree.ExpandAll();
    }
    public int SelectedIndex() {
        return tree.SelectedNode.Index;
    }
    public void Clear() {
        //remove from the end so the remaining indices stay valid
        for (int i = tree.Nodes.Count - 1; i >= 0; i--) {
            tree.Nodes[i].Remove();
        }
    }
    public void clearSelection() {}
}

Shashi Ray
http://www.dotnetspark.com/kb/384-using-treeview.aspx
CC-MAIN-2017-26
refinedweb
184
65.22
django-fab-deploy is a collection of Fabric scripts for deploying and managing django projects on Debian/Ubuntu servers. License is MIT. Please read the docs for more info. In order to upgrade, install fabric >= 1.4 and make sure your custom scripts work. In order to upgrade, please set DB_USER to 'root' explicitly in env.conf if it was omitted. In order to upgrade from previous versions of django-fab-deploy, install sudo on the server if it was not installed: fab install_sudo. In order to upgrade from 0.2, please remove any usages of env.user from the code, e.g. before upgrade: def my_site(): env.hosts = ['example.com'] env.user = 'foo' #... After upgrade: def my_site(): env.hosts = ['foo@example.com'] #... This release is backwards-incompatible with 0.1.x because of apache port handling changes. This is the last release in the 0.0.x branch. Bugs with multiple host support, backports URL and stray 'pyc' files are fixed. A few bugfixes and docs improvements. Initial release. MIT license
https://crate.io/packages/django-fab-deploy/
CC-MAIN-2015-11
refinedweb
174
62.85
.NET Core 3.1 Web API & Entity Framework Jumpstart (13 Part Series) This tutorial series is now also available as an online video course. You can watch the first hour on YouTube or get the complete course on Udemy. Or you just keep on reading. Enjoy! :) Introduction That's where this tutorial series comes in. In a short period, you will learn how to set up a Web API, make calls to this Web API and also save data persistently with Entity Framework Core and the help of Code First Migration. We will get right to the point: you will see every single step of writing the necessary code, and since .NET Core is cross-platform, you can follow this tutorial series on any of the major operating systems. (I know, Microsoft and cross-platform, it still surprises me, too.) The back end application we’re going to build is a small text-based role-playing game where different users can register (we’re going to use JSON web tokens for authentication) and create their own characters like a mage or a knight, update attributes of these characters, set the skills and also let the characters fight against each other to see who’s better. So, I hope you’re ready for your new skills and your new projects. Let's start! Tools The only tools we need for now are Visual Studio Code and Postman. Additionally to that, you have to download and install the .NET Core 3.1 SDK. VS Code can be found on. Postman is available on. And the SDK can be downloaded on. Make sure to download .NET Core 3.1 for your operating system. So please download and install everything and then continue with the next chapter. Create a new Web API As soon as the .NET Core SDK, Visual Studio Code and Postman are installed, we can already create our first .NET Core application, which will be a Web API right away. To start, I created a new folder called "dotnet-rpg" - for "dotnet role-playing game". Open the folder in VS Code. I assume you’re already a bit familiar with Visual Studio Code, if not, feel free to have a look around.
While you're doing that, it might also be a good idea to install certain extensions. First "C# for Visual Studio Code" by Microsoft itself. This extension will also be suggested by VS Code as soon as you create your first C# application. It includes editing support, syntax highlighting, IntelliSense, Go to Definition, Find all references, just have a look, pretty useful stuff. Next is "C# Extensions" by jchannon. As the description says, it might speed up the development workflow by adding some entries to the context menu, like Adding a new C# class or interface. And the last one already is one of my personal favorites, the "Material Icon Theme". This one simply provides lots and lots of cute icons. Alright, but now let’s create our Web API! We open a new terminal window and then let's have a look at what the dotnet command provides. With adding a -h you see all the available commands. The one that’s interesting for us right now is the new command, which creates a new .NET project. But we also got the run command to, well, run our application and also the watch command which can be used together with run to restart the application as soon as we make changes to any file. Quite useful, if you don’t want to stop and start the project by yourself every single time you make any changes. With dotnet new -h we see all the available templates. There are a lot. For instance the plain old console application, and further down we finally got the Web API. So let’s use it! We type dotnet new webapi and hit return. Now we see some files that have been generated for us in the explorer. Let’s go through them real quick. At the bottom, we see the WeatherForecast class. This is just part of the default Web API project. We don’t really need it, but let’s use this example in a minute. In the meantime, we get a little popup telling us that we should add some files. Of course, we want to add them. 
You should see now that we got the .vscode folder with the launch.json and the tasks.json. Both are configuration files used for debugging, source code formatters, bundlers, and so on, but not very interesting for us at this moment. So let’s have a look at the Startup class.

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseHttpsRedirection();

        app.UseRouting();

        app.UseAuthorization();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();
        });
    }
}

Here we find the ConfigureServices and the Configure method. ConfigureServices configures the app’s services, so reusable components that provide app functionality. We will register services in this method in the future, so they can be consumed in our web service via dependency injection, for instance. Please don’t mind all these buzzwords right now... The Configure method creates the app’s request processing pipeline, meaning the method is used to specify how the app responds to HTTP requests. As you can see, we’re using HttpsRedirection, Routing, and so on. With all these Use... extension methods, we’re adding middleware components to the request pipeline. For instance, UseHttpsRedirection adds middleware for redirecting HTTP requests to HTTPS. To make things a bit easier for us in the beginning, let's remove the UseHttpsRedirection line or at least comment it out. The Startup class is specified when the app’s host is built. You see that in the Program class in the CreateHostBuilder() method. Here the Startup class is specified by calling the UseStartup() method.

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

In the .csproj file we see the SDK, the target framework, in our case .NET Core 3.1, and the root namespace. Later on, we will find additional packages like Entity Framework Core in this file.
<Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> <RootNamespace>dotnet_rpg</RootNamespace> </PropertyGroup> </Project> Regarding the appsettings.json files we only need to know that we can add and modify some configurations here. More interesting right now is the launchSettings.json file where the current environment is configured and also the application URL. With this URL, we will find our running web service. "dotnet_rpg_3._1": { "commandName": "Project", "launchBrowser": true, "launchUrl": "weatherforecast", "applicationUrl": ";", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } } The obj and bin folders can be ignored for now. We find temporary object- and final binary files here. Very interesting and often used throughout this tutorial series is the Controllers folder. The first controller you see here is the generated WeatherForecast demo controller. We’ll get to the details of controllers later. For now, it’s only important to know that we can already call the Get() method here.

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}

First API Call In the terminal, we enter dotnet run. You see, here’s already the URL we’ve seen in the launchSettings.json. So let’s open Chrome and go to. Well, the result of this URL doesn’t look very nice. That's because we have to access the WeatherForecast controller. So when we go back to VS Code, we see the name of the controller ( WeatherForecast - without Controller). We also see the routing attribute ( [Route("[controller]")] ) to define how to access this controller - we’ll discuss how routes work in a future chapter. [ApiController] [Route("[controller]")] public class WeatherForecastController : ControllerBase { // ... So we just copy the name - WeatherForecast - go back to Chrome, enter the correct route and finally, we get the results. We can also see the results in the console. Now let's do this with Postman because Postman will be the tool we will use to test our REST calls of the Web API.
If you haven’t already, make yourself a bit familiar with Postman. The essential part is in the middle. We can choose the HTTP Request Method - in this particular case, it is GET - then enter the URL and hit "Send". The styling of the result should look similar to the console. Great! So this works. Now let’s move on and build our own web service. Web API Core So far you learned how to create a Web API project in .NET Core from scratch and how to make your first API call with Postman. In the upcoming chapters, we will create a new controller and models for our RPG (role-playing game) characters. Additionally, we will turn our synchronous calls into asynchronous calls, make use of Data-Transfer-Objects (DTOs) and change the structure of our Web API so that it meets best practices. But first, let’s have a look at the Model-View-Controller (MVC) pattern, which is the foundation of all this. The Model-View-Controller (MVC) Pattern Model-View-Controller or short MVC is a software design pattern that divides the related program logic into three interconnected elements. Let me explain, what every single one of these three elements stands for and how they collaborate. We start with the model. You could also say, the data. A character in our role-playing game is a model, for instance. It can have an Id, a name, hitpoints, attributes, skills and so on. public class Character { public int Id { get; set; } = 0; public string Name { get; set; } = "Frodo"; public int HitPoints { get; set; } = 100; public int Strength { get; set; } = 10; public int Defense { get; set; } = 10; public int Intelligence { get; set; } = 10; } You as the developer know the code of your model. But the user won’t see your code. That’s where the view comes in. The user probably wants to see a representation of the character in HTML, plain text or amazing 3D graphics - depending on your game. In other words, the view is the (graphical) user interface or (G)UI. 
To sum these two up, the model updates the view and the user sees the view. If the model changes, let’s say our character gets another skill or its hitpoints decreased, the view will change, too. That’s why the model always updates the view. Now, what’s up with the controller? The controller does the actual work. There you will find most of your code because it manipulates your data or model. In our case, it’s the Web API that will create, update and delete your data. Since we won’t have a view except the results of our calls in Postman, we’re going to build our application in the following order: First the model and then the controller. And we will always jump back and forth between those two. With the help of the view though, the user can manipulate the data, hence properties of the RPG character with buttons, text fields and so on. In a browser game that might be JavaScript code in essence - maybe with the help of frameworks like Angular, React or VueJS. This JavaScript code, in turn, uses the controller to do the manipulation and save these changes persistently in the database. The manipulated model will update the view, which is then again seen by the user and the circle starts all over again. Well, that sums up the MVC pattern. Now we’re going to build our first model. New Models The first things we need are new models. We need a model for the RPG character itself and also a model for the type of RPG character, i.e. a character class like Barbarian, Monk, Necromancer and so on. First, we create a "Models" folder. For the character model, we will create a new class in this Models folder. If you have the “C# Extensions” installed, you can add a new C# class with a right-click, otherwise, you just create a new file. So right-click the Models folder, then click “New C# Class” and call this class Character. Now let’s add some properties. 
public class Character { public int Id { get; set; } public string Name { get; set; } = "Frodo"; public int HitPoints { get; set; } = 100; public int Strength { get; set; } = 10; public int Defense { get; set; } = 10; public int Intelligence { get; set; } = 10; } We will also add an RpgClass property, i.e. the type of the character. But first, we have to create a new enum for that. So let’s add a new C# class called RpgClass and then replace class with enum. Feel free to add any kind of role-playing class you want to add here. In this example, I use Knight, Mage, and Cleric. The most basic characters you would need I guess. Some melee action, some magic and of course never forget the healer. public enum RpgClass { Knight = 1, Mage = 2, Cleric = 3 } Now when we have the RpgClass enum ready, we can finally add it to the Character model. public class Character { public int Id { get; set; } public string Name { get; set; } = "Frodo"; public int HitPoints { get; set; } = 100; public int Strength { get; set; } = 10; public int Defense { get; set; } = 10; public int Intelligence { get; set; } = 10; public RpgClass Class { get; set; } = RpgClass.Knight; } I set the default to the Knight, but again, that’s totally up to you. Alright, the first models are ready. Let’s add a new controller now and make a GET call to receive our first role-playing game character. New Controller & GET a New Character To add a new controller, we create a new C# class in the Controllers folder. Let’s call this class CharacterController. Before we can start implementing any logic, we have to make this thing a proper controller. To do that, we first derive from ControllerBase. This is a base class for an MVC controller without view support. Since we’re building an API here, we don’t need view support. If, however, we would want to add support for views, we could derive from Controller. But in our case, just make sure to add ControllerBase. 
public class CharacterController : ControllerBase After that, we have to add some attributes. The first one is the ApiController attribute. This attribute indicates that a type (and also all derived types) is used to serve HTTP API responses. Additionally, when we add this attribute to the controller, it enables several API-specific features like attribute routing and automatic HTTP 400 responses if something is wrong with the model. We’ll get to the details when we make use of these features. [ApiController] public class CharacterController : ControllerBase Regarding attribute routing, that’s already the next thing we have to add. Below the ApiController attribute, we add the Route attribute. That’s how we’re able to find this specific controller when we want to make a web service call. The string we add to the Route attribute is [controller]. This means that this controller can be accessed by its name, in our case Character - so that part of the name of the C# class that comes before Controller. [ApiController] [Route("[controller]")] public class CharacterController : ControllerBase Don't forget to also add the reference Microsoft.AspNetCore.Mvc on top of the file. using Microsoft.AspNetCore.Mvc; Alright, let’s get into the body of our C# class. The first thing I’d like to add is a static mock character that we can return to the client. For that, you also have to add the dotnet_rpg.Models reference. using Microsoft.AspNetCore.Mvc; using dotnet_rpg.Models; namespace dotnet_rpg.Controllers { [ApiController] [Route("[controller]")] public class CharacterController : ControllerBase { private static Character knight = new Character(); } } Next, we finally implement the Get() method to receive our game character. 
[ApiController] [Route("[controller]")] public class CharacterController : ControllerBase { private static Character knight = new Character(); public IActionResult Get() { return Ok(knight); } } We return an IActionResult because this enables us to send specific HTTP status codes back to the client together with the actual data that was requested. In this method, with Ok(knight) we send the status code 200 OK and our mock character back. Other options would be a BadRequest 400 status code or a 404 NotFound if a requested character was not found. Alright, the code is implemented. Let’s test this now with Postman. It’s pretty straightforward now. The HTTP method is GET again, the URL is and nothing else has to be configured. Hit "Send", and there is our knight! Now pay attention to the attribute that was added to our Get() method. Exactly, there is none. When we compare this to the WeatherForecastController, we could have added an [HttpGet] attribute. But it’s not necessary for the CharacterController because the Web API supports naming conventions and if the name of the method starts with Get...(), the API assumes that the used HTTP method is also GET. Apart from that, we only have one Get() method in our controller so far, so the web service knows exactly what method is requested. However, in the next chapter, we’ll have a deeper look at these attributes: attribute routing, HTTP methods, adding a new character with POST, asynchronous calls, and more! Posted by: Patrick God Into code as long as I can remember. First games, then web, now both. Located in the sweet Taunus-region in Germany. Always eager to learn, create and teach something new.
Discussion can i get source code ? pls Hey, I think I will create a GitHub repository soon with the complete project. Until then, please refer to the code in the article. It really is the complete thing. Take care, Patrick i get error in response. addcharacterDTO cannot add in characterDTO. i dont know what i can do Hi, I think you're missing something in the AutoMapperProfile class. I uploaded my own source code following this tutorial: github.com/informagico/dotnet-rpg Hope it can help, cheers! Hi, Thanks you brother. Amazing! I have gone through quite a few .Net Core tutorials but none explained it in such way to understand what everything does clearly. Keep it up Patrick! Looking forward to the next one! Thank you! Your comment makes me happy. Thank you! :) I have no idea why the response that I received, is "1", not the JSON string of the knight character as your screenshot. I also reviewed your source code and it is the same with mine... dev-to-uploads.s3.amazonaws.com/i/... That's strange... have you tried to call this URL in the browser? Maybe something's wrong with Postman. Apart from that, you could try to watch the video tutorial on YouTube: youtu.be/H4qg9HJX_SE Maybe this helps. Take care, Patrick Great tutorial. Can't wait for the next one! Thank you very much! Glad you like it! :) The best jumpstart I found so far. :)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/_patrickgod/net-core-3-1-web-api-entity-framework-jumpstart-part-1-4jla
CC-MAIN-2020-34
refinedweb
3,185
75
Exposes record and playback functionality for C compilers. More... #include "rs_types.h" Go to the source code of this file. Definition in file rs_record_playback.h. Definition at line 30 of file rs_record_playback.h. Definition at line 19 of file rs_record_playback.h. Creates a playback device to play the content of the given file. Set the playback to work in real time or non real time. In real-time mode, playback will play the same way the file was recorded; if the application takes too long to handle the callback, frames may be dropped. In non-real-time mode, playback will wait for each callback to finish handling the data before reading the next frame. In this mode no frames will be dropped, and the application controls the frame rate of the playback (according to the callback handler duration). Definition at line 1605 of file rs.cpp.
https://docs.ros.org/en/kinetic/api/librealsense2/html/rs__record__playback_8h.html
CC-MAIN-2021-25
refinedweb
158
65.83
Actions - Struts Action struts 2 Please explain the Action Script in Struts Configuring Actions in Struts application Configuring Actions in Struts Application To Configure an action in struts... the package roseindia extends the the struts default package. <action name...; Now this tag does your action mapping. Here name defines the name Struts2 Actions a client's request matches the action's name, the framework uses the mapping from struts.xml file to process the request. The mapping to an action is usually generated by a Struts Tag. The action tag (within the struts root node of  Actions Threadsafe by Default - Struts Actions Threadsafe by Default Hi Frieds, I am beginner in struts, Are Action classes Threadsafe by default. I heard actions are singleton , is it correct Struts LookupDispatchAction Example ; Struts LookupDispatch Action... the request to one of the methods of the derived Action class. Selection of a method... in the action tag through struts-config.xml file). Then this matching key is mapped Struts 2 Actions request. About Struts Action Interface In Struts 2 all actions may implement... Struts 2 Actions In this section we will learn about Struts 2 Actions, which is a fundamental concept in most of the web Struts dispatch action - Struts Struts dispatch action i am using dispatch action. i send... now it showing error javax.servlet.ServletException: Request[/View/user] does not contain handler parameter named 'parameter' how can i overcome Struts Built-In Actions Struts Built-In Actions  ... actions shipped with Struts APIs. These built-in utility actions provide different...; to combine many similar actions into a single action Implementing Actions in Struts 2 Implementing Actions in Struts 2 Package com.opensymphony.xwork2 contains the many Action classes and interface, if you want to make an action class for you...;roseindia" extends="struts-default"> <action name=" Action in Struts 2 Framework . 
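The DispatchAction/LookupDispatchAction snippets above all revolve around one idea: a request parameter names the method to invoke on the derived Action class. A JDK-only reflection sketch of that idea follows — this is a simplified illustration, not the real Struts class, and the class name MiniDispatcher and the parameter name "function" are made up:

```java
import java.lang.reflect.Method;
import java.util.Map;

// Simplified illustration of how a Struts 1 DispatchAction routes a request:
// the value of the configured request parameter (e.g. "function") selects
// which public method to invoke. NOT the real Struts implementation.
public class MiniDispatcher {
    public String add()    { return "added"; }
    public String delete() { return "deleted"; }

    public String dispatch(Map<String, String> requestParams, String parameterName) throws Exception {
        String methodName = requestParams.get(parameterName);
        if (methodName == null) {
            // This is the situation behind the common error
            // "Request does not contain handler parameter named '...'".
            throw new IllegalStateException(
                "Request does not contain handler parameter named '" + parameterName + "'");
        }
        Method m = getClass().getMethod(methodName);
        return (String) m.invoke(this);
    }

    public static void main(String[] args) throws Exception {
        MiniDispatcher d = new MiniDispatcher();
        System.out.println(d.dispatch(Map.of("function", "add"), "function")); // prints: added
    }
}
```

The sketch also shows why the error above appears: if the request simply lacks the parameter named in the action mapping's parameter attribute, no method can be selected.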
Actions are mostly associated with a HTTP request of User. The action class... { return "success"; } } action class does not extend another class and nor...Actions Actions are the core basic unit of work in Struts2 framework. Each Struts2 Actions When a client's request matches the action's name, the framework uses the mapping from struts.xml file to process the request. The mapping to an action is usually generated by a Struts Tag. Struts 2 Redirect Action Tutorial In this section we will discuss about Struts. This tutorial will contain..., Architecture of Struts, download and install struts, struts actions, Struts Logic Tags... key components : Request handler (provided by the application developer Struts Forward Action Example Struts Forward Action Example  ...). The ForwardAction is one of the Built-in Actions that is shipped with struts framework... and specify the location where the action will forward the request Dispatch Action - Struts Dispatch Action While I am working with Structs Dispatch Action . I am getting the following error. Request does not contain handler parameter named 'function'. This may be caused by whitespace in the label text Struts MappingDispatchAction Example of the Built-in Actions provided along with the struts framework... that it uses a unique action corresponding to a new request , to dispatch... the request to one of the methods of the derived Action class. Selection Struts MappingDispatchAction Example ; Struts MappingDispatch Action... class except that it uses a unique action corresponding to a new request... to delegate the request to one of the methods of the derived Action Sending large data to Action Class error. Struts code - Struts Sending large data to Action Class error. Struts code I have a jsp...; **** "; window.open("/message-actions?actionType=previewTemplate&val="+val... with this environment on your PC and inform MSIT the same. 
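As the snippet above notes, a Struts 2 action class need not extend any framework base class: any POJO with an execute() method returning a result name will do. A minimal hedged sketch (the class name, message text, and result mapping are illustrative):

```java
// A minimal Struts 2-style action: a plain class with an execute() method
// that returns a result name. No framework base class is required.
public class HelloWorldAction {
    private String message;

    public String execute() {
        message = "Hello World";
        return "success";   // would be mapped to a result page in struts.xml
    }

    public String getMessage() {
        return message;
    }

    public static void main(String[] args) {
        HelloWorldAction action = new HelloWorldAction();
        System.out.println(action.execute() + ": " + action.getMessage()); // prints: success: Hello World
    }
}
```

Because the action is a plain object, it can be instantiated and exercised in a unit test without any container, which is exactly the testability advantage the surrounding snippets mention.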
Yours sincerelyArat Kumar struts - Struts the struts.config.xml file to determine which module to be called upon an action request...-config.xml Action Entry: Difference between Struts-config.xml...struts hi, what is meant by struts-config.xml and wht are the tags password action requires user name and passwords same as you had entered during... The password forgot Action is invoked different kinds of actions in Struts different kinds of actions in Struts What are the different kinds of actions in Struts Test Actions Test Actions An example of Testing a struts Action is given below using...; <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts...;default" namespace="/" extends="struts-default"> <default | Struts Built-In Actions | Struts Dispatch Action | Struts Forward... | AGGREGATING ACTIONS IN STRUTS | Aggregating Actions In Struts Revisited... configuration file | Struts 2 Actions | Struts 2 Redirect Action no action mapped for action - Struts no action mapped for action Hi, I am new to struts. I followed...: There is no Action mapped for action name HelloWorld Struts Articles . A Struts Action does the job, may be calling some other tiers of the Application.... It lets you do white-box testing on your Struts actions by setting up request... receives a request. 2. Struts identifies the action mapping which Struts Action Chaining Struts Action Chaining Struts Action Chaining Single thread model in Struts - Struts for that Action. The singleton strategy restricts to Struts 1 Actions and requires... me. Hi Struts 1 Actions are singletons therefore they must... as Action objects are instantiated for each request. A servlet container generates Calling Action on form load - Struts this attribute and set it to false . Hi friends,Yes. If your Action does not need any data.... Even if you want to use the tag with a simple Action that does not require input...Calling Action on form load Hi all, is it possible to call Struts Books Request Dispatcher. 
In fact, some Struts aficionados feel that I... application Struts Action Invocation Framework (SAIF) - Adds features like Action interceptors and Inversion of Control (IoC) to Struts.  Struts 1.x Vs Struts 2.x . However in case of Struts 2, Action objects are instantiated for each request, so... but in case of Struts 2, Actions are not container dependent because they are made simple... allows actions to be tested in isolation. Struts 2 Actions can access Still have the same problem--First Example of Struts2 - Struts tried my own example. Its not displaying the "Struts Welcome" message and also... Files ....... WebInf .......... LibFolder---> contain Struts Library...Still have the same problem--First Example of Struts2 Hi I tried Struts 2 Interceptors of the Action. Message Store Interceptor... Interceptor sets the request parameters onto the Action...Struts 2 Interceptors Struts 2 framework relies upon Interceptors to do most Action Configuration - Struts Action Configuration I need a code for struts action configuration in XML Struts Tutorials to the internal Map of the DynaForm bean does not tell the Struts that each of those... application development using Struts. I will address issues with designing Action... issues with Struts Action classes. Ok, let?s get started. StrutsTestCase Struts + HTML:Button not workin - Struts in same JSP page. As a start, i want to display a message when my actionclass...Struts + HTML:Button not workin Hi, I am new to struts. So pls... check accordingly in action class. But it displays null. Please let me know what Struts - Struts *; public class UserRegisterForm extends ActionForm{ private String action="add...,HttpServletRequest request){ this.id = null; this.userid=null; this.password=null... 
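The Struts 1.x vs 2.x point above — singleton actions versus one action instance per request — can be made concrete with plain Java. The sketch below uses no Struts classes; it only illustrates the state-sharing consequence of reusing one instance:

```java
// Why singleton actions (Struts 1 style) must avoid instance state,
// while per-request actions (Struts 2 style) may use it freely.
public class ActionLifecycleDemo {
    public static class Action {
        public String input;                       // per-request data
        public String run(String in) { input = in; return "processed:" + in; }
    }

    public static void main(String[] args) {
        // Singleton style: one shared instance handles both "requests";
        // the second caller overwrites the first caller's field.
        Action singleton = new Action();
        singleton.run("first");
        singleton.run("second");
        System.out.println(singleton.input);       // prints: second

        // Per-request style: each request gets a fresh instance,
        // so state cannot leak between requests.
        Action requestA = new Action();
        Action requestB = new Action();
        requestA.run("first");
        requestB.run("second");
        System.out.println(requestA.input + " " + requestB.input); // prints: first second
    }
}
```

Under concurrent requests the singleton version is a thread-safety hazard, which is why Struts 1 action classes must be stateless.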
mapping, HttpServletRequest request ) { ActionErrors errors = new ActionErrors Struts Action Class Struts Action Class What happens if we do not write execute() in Action class Struts - Struts UserRegisterForm extends ActionForm{ private String action="add"; private...; public void reset(ActionMapping mapping,HttpServletRequest request..., HttpServletRequest request ) { ActionErrors errors = new ActionErrors java - Struts friend, Check your code having error : struts-config.xml In Action Mapping In login jsp action and path not same plz correct...: Submit struts Why Struts 2 core interfaces are HTTP independent. Struts 2 Action classes... the request processor per module, Struts 2 lets to customize the request handling per action, if desired. Easy Spring integration - Struts 2 Understanding Struts Action Class Understanding Struts Action Class In this lesson I will show you how to use Struts Action... HTTP request and the business logic that corresponds to it. Then the struts STRUTS Request context in struts? SendRedirect () and forward how to configure in struts-config.xml Action Classes Struts Action Classes 1) Is necessary to create an ActionForm to LookupDispatchAction. If not the program will not executed. 2) What is the beauty of Mapping Dispatch Action Struts 2 Tutorial ; Struts 2 Actions Introduction When a client request matches the action's... the request. The mapping to an action is usually generated by a Struts Tag... 2 The new version Struts 2.0 is a combination of the Sturts action framework action tag - Struts action tag Is possible to add parameters to a struts 2 action tag? And how can I get them in an Action Class. I mean: xx.jsp Thank you java - Struts config /WEB-INF/struts-config.xml 1 action *.do Thanks...java I am getting same error. Failed to find resource /Bookmark...! struts-config.xml Struts Struts What is called properties file in struts? 
How you call the properties message to the View (Front End) JSP Pages Login Action Class - Struts Login Action Class Hi Any one can you please give me example of Struts How Login Action Class Communicate with i-batis Interview Questions - Struts Interview Questions to also have them handled by the same Struts Action. A very simple way to do... is Struts actions and action mappings? Answer: A Struts action is an instance... with the requested action. In the Struts framework this helper class Struts 2 Hello World Annotation Example the action. The Convention Plugin was first added in the Struts version 2.1... naming conventions for the action class location. The Struts 2 Convention Plugin.... The Struts 2 Convention Plugin will configure the action. This section forward error message in struts forward error message in struts how to forward the error message got in struts from one jsp to other jsp? Hello Friend, Use <%@ page.... The code errorPage="errorPage.jsp" forwards the exception message hi i would like to have a ready example of struts using"action class,DAO,and services" so please help me Struts 2.1.8 Hello World Example . The Struts framework executes the default (or specified) method in the action class... in the same package. In this application two message resource files...; <action name="HelloWorld" Chain Action Result Example Chain Action Example Struts2.2.1 provides the feature to chain many actions... to forward a request to an another action. While propagating the state. An Example...; <action Struts ;Basically in Struts we have only Two types of Action classes. 1.BaseActions...Struts why in Struts ActionServlet made as a singleton what... only i.e ForwardAction,IncludeAction.But all these action classes extends Action How Struts Works to determine which module to be called upon an action request. Struts only reads... to design the dynamic web pages. In struts, servlets helps to route request which... what struts action Servlets exist. 
The container is responsible for mapping all struts struts how to make one jsp page with two actions..ie i need to provide two buttons in one jsp page with two different actions without redirecting to any other page Struts Struts What is Struts? Hi hriends, Struts is a web page... web applications quickly and easily. Struts combines Java Servlets, Java Server Pages, custom tags, and message resources into a unified framework validation message - Struts validation message sir, i took help of that example but in that we change only color of the message i want to shift the place of the error message.means all messages are put together at top of the form Struts Interview Questions ://struts.apache.org/1.2.7/api/org/apache/struts/actions/LookupDispatchAction.html... Struts Interview Questions Question: Can I setup Apache Struts to use multiple Struts Projects classes and Struts actions and forms. Furthermore it includes a JDBC, a JMS...; Mockrunner does not read any configuration file like web.xml or struts... send message in struts on mobile - Struts How to send message in struts on mobile Hello Experts, How can i send messages on mobile i am working on strus application and i want to send message from jsp Struts 2 issue field methodName and when we submit request we get value in action class and again i am redirecting request to same jsp page and on click of submit submiting values to same action now problem is when we submit request first time e.g value Struts Book - Popular Struts Books . In much the same way that Java has overtaken C++, Struts is well poised to become... Software Foundation. Struts in Action is a comprehensive introduction to the Struts... in a "how to use them" approach. You'll also see the Struts Tag Library in action JSP Actions JSP Action Tags in the JSP application. What is JSP Actions? Servlet container... sections we will learn how to use these JSP Actions (JSP Action tags... 
JSP Actions   struts - Struts Struts dispatchaction vs lookupdispatchaction What is struts...; Hi,Please check easy to follow example at struts - Struts struts when the action servlet is invoked in struts? Hi Friend, Please visit the following link: Thanks java struts error - Struts java struts error my jsp page is post the problem...*; import javax.servlet.http.*; public class loginaction extends Action{ public...; } public void reset(ActionMapping mapping,HttpServletRequest. Struts 2 Login Application let's develop the action class to handle the login request. In Struts 2...Struts 2 Login Application Developing Struts 2 Login Application In this section we are going to develop struts validation struts validation Sir i am getting stucked how to validate struts using action forms, the field vechicleNo should not be null and it should...(ActionMapping mapping, HttpServletRequest request Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/37485
CC-MAIN-2015-18
refinedweb
2,298
58.69
Double-click on the tAdvancedFileOutputXML component to open the dedicated interface, or click on the three-dot button on the Basic settings vertical tab of the Component Settings tab. To the left of the mapping interface, under Schema List, all of the columns retrieved from the incoming data flow are listed (only if an input flow is connected to the tAdvancedFileOutputXML component). To the right of the interface, define the XML structure you want to obtain as output. You can easily import the XML structure or create it manually, then map the input schema columns onto each corresponding element of the XML tree. The easiest and most common way to fill out the XML tree panel is to import a well-formed XML file. Rename the root tag that displays by default on the XML tree panel by clicking on it once. Right-click on the root tag to display the contextual menu. On the menu, select Import XML tree. Browse to the file to import and click OK. Note You can import an XML tree from files in XML, XSD and DTD formats. When importing an XML tree structure from an XSD file, you can choose an element as the root of your XML tree. The XML Tree column is then automatically filled out with the correct elements. You can remove elements from the tree and insert new elements or sub-elements into it: Select the relevant element of the tree. Right-click to display the contextual menu. Select Delete to remove the selection from the tree, or select the relevant option among Add sub-element, Add attribute and Add namespace to enrich the tree. If you don't have any XML structure defined yet, you can create it manually. Rename the root tag that displays by default on the XML tree panel by clicking on it once. Right-click on the root tag to display the contextual menu. On the menu, select Add sub-element to create the first element of the structure. You can also add an attribute or a child element to any element of the tree, or remove any element from the tree.
Select the relevant element on the tree you just created. Right-click to the left of the element name to display the contextual menu. On the menu, select the relevant option among: Add sub-element, Add attribute, Add namespace or Delete. Once your XML tree is ready, you can map each input column to the relevant XML tree element or sub-element to fill out the Related Column: Click on one of the Schema column names. Drag it onto the relevant sub-element to the right. Release to implement the actual mapping. A light blue link appears to illustrate this mapping. If available, use the Auto-Map button, located to the bottom left of the interface, to carry out this operation automatically. You can disconnect any mapping on any element of the XML tree: Select the element of the XML tree that should be disconnected from its respective schema column. Right-click to the left of the element name to display the contextual menu. Select Disconnect linker. The light blue link disappears. Defining the XML tree and mapping the data is not sufficient. You also need to define the loop element and, if required, the group element. The loop element allows you to define the iterating object. Generally, the loop element is also the row generator. To define an element as a loop element: Select the relevant element on the XML tree. Right-click to the left of the element name to display the contextual menu. Select Set as Loop Element. The Node Status column shows the newly added status. Note There can only be one loop element at a time. The group element is optional; it represents a constant element on which the group-by operation can be performed. A group element can be defined only if a loop element has been defined first. When using a group element, the rows should be sorted, in order to be able to group by the selected node. To define an element as a group element: Select the relevant element on the XML tree. Right-click to the left of the element name to display the contextual menu.
Select Set as Group Element. The Node Status column shows the newly added status, and any required group statuses are automatically defined, if needed. Click OK once the mapping is complete to validate the definition and continue the job configuration where needed. The following scenario describes the creation of an XML file from a sorted flat file gathering a video collection. Configuring the source file Drop a tFileInputDelimited and a tAdvancedFileOutputXML from the Palette onto the design workspace. Alternatively, if you configured a description for the input delimited file in the Metadata area of the Repository, then you can directly drag & drop the metadata entry onto the editor, to set up the input flow automatically. Right-click on the input component and drag a Row Main link towards the tAdvancedFileOutputXML component to implement a connection. Select the tFileInputDelimited component and display the Component settings tab located in the tab system at the bottom of the Studio. Select the Property type, according to whether you stored the file description in the Repository or not. If you dragged & dropped the component directly from the Metadata, no changes to the setting should be needed. If you didn't set up the file description in the Repository, then select Built-in and manually fill out the fields displayed on the Basic settings vertical tab. The input file contains the following columns separated by semicolons: id, name, category, year, language, director and cast. In this simple use case, the Cast field gathers different values and the id increments when the movie changes. If needed, define the tFileDelimitedInput schema according to the file structure. Once you have checked that the schema of the input file meets your expectations, click OK to validate.
Note that a double-click on the component will directly open the mapping interface. In the File Name field, browse to the file to be written if it exists, or type in the path and file name to be created for the output. By default, the schema (file description) is automatically propagated from the input flow, but you can edit it if needed. Then click on the three-dot button or double-click on the tAdvancedFileOutputXML component on the design workspace to open the dedicated mapping editor. To the left of the interface, the columns from the input file description are listed. To the right of the interface, set the XML tree panel to reflect the expected XML structure output. You can create the structure node by node. For more information about the manual creation of an XML tree, see Defining the XML tree. In this example, an XML template is used to populate the XML tree automatically. Right-click on the root tag displaying by default and select Import XML tree at the end of the contextual menu options. Browse to the XML file to be imported and click OK to validate the import operation. Note You can import an XML tree from files in XML, XSD and DTD formats. Then drag & drop each column name from the Schema List to the matching (or relevant) XML tree elements as described in Mapping XML data. The mapping is shown as blue links between the left and right panels. Finally, define the node status where the loop should take place. In this use case, the Cast field is the changing element over which the iteration should operate, so this element will be the loop element. Right-click on the Cast element on the XML tree, and select Set as loop element. To group by movie, this use case also needs a group element to be defined. Right-click on the Movie parent node of the XML tree, and select Set as group element. The newly defined node statuses show on the corresponding element lines. Click OK to validate the configuration. Press F6 to execute the Job.
The output XML file shows the structure as defined.
https://help.talend.com/reader/hm5FaPiiOP31nUYHph0JwQ/i_YvOl2oUaVpW1UTnJao_g?section=Raa35713
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

Boost.Function has two syntactical forms: the preferred form and the portable form. The preferred form fits more closely with the C++ language, but it is not supported by all compilers; the portable form works on all compilers supported by Boost.Function. Consult the table below to determine which syntactic form to use for your compiler. If your compiler does not appear in this list, please try the preferred syntax and report your results to the Boost list so that we can keep this table up-to-date.

A function wrapper is defined simply by instantiating the function class template with the desired return type and argument types, formulated as a C++ function type. Any number of arguments may be supplied, up to some implementation-defined limit (10 is the default maximum). The following declares a function object wrapper f that takes two int parameters and returns a float:

boost::function<float (int x, int y)> f;

By default, function object wrappers are empty, so we can create a function object to assign to f:

struct int_div {
  float operator()(int x, int y) const { return ((float)x)/y; }
};

f = int_div();

Now we can use f to execute the underlying function object int_div:

std::cout << f(5, 3) << std::endl;

We are free to assign any compatible function object to f. If int_div had been declared to take two long operands, the implicit conversions would have been applied to the arguments without any user interference. The only limit on the types of arguments is that they be CopyConstructible, so we can even use references and arrays:

boost::function<void (int values[], int n, int& sum, float& avg)> sum_avg;

void do_sum_avg(int values[], int n, int& sum, float& avg)
{
  sum = 0;
  for (int i = 0; i < n; i++)
    sum += values[i];
  avg = (float)sum / n;
}

sum_avg = &do_sum_avg;

Invoking a function object wrapper that does not actually contain a function object is a precondition violation, much like trying to call through a null function pointer, and will throw a bad_function_call exception.
We can check for an empty function object wrapper by using it in a boolean context (it evaluates true if the wrapper is not empty) or compare it against 0. For instance:

if (f)
  std::cout << f(5, 3) << std::endl;
else
  std::cout << "f has no target, so it is unsafe to call" << std::endl;

Alternatively, the empty() method will return whether or not the wrapper is empty. Finally, we can clear out a function target by assigning it to 0 or by calling the clear() member function, e.g.,

f = 0;

Free function pointers can be considered singleton function objects with const function call operators, and can therefore be directly used with the function object wrappers:

float mul_ints(int x, int y) { return ((float)x) * y; }

f = &mul_ints;

Note that the & isn't really necessary unless you happen to be using Microsoft Visual C++ version 6. In many systems, callbacks often call to member functions of a particular object. This is often referred to as "argument binding", and is beyond the scope of Boost.Function. The use of member functions directly, however, is supported, so the following code is valid:

struct X {
  int foo(int);
};

boost::function<int (X*, int)> f;
f = &X::foo;

Several libraries exist that support argument binding. Three such libraries are summarized below:

Bind. This library allows binding of arguments for any function object. It is lightweight and very portable.

The C++ Standard library. Using std::bind1st and std::mem_fun together, one can bind the object of a pointer-to-member function for use with Boost.Function.

The Lambda library. This library provides a powerful composition mechanism to construct function objects that uses very natural C++ syntax. Lambda requires a compiler that is reasonably conformant to the C++ standard.

In some cases it is expensive (or semantically incorrect) to have Boost.Function clone a function object. In such cases, it is possible to request that Boost.Function keep only a reference to the actual function object.
This is done using the ref and cref functions to wrap a reference to a function object:

stateful_type a_function_object;
boost::function<int (int)> f;
f = boost::ref(a_function_object);

boost::function<int (int)> f2(f);

Here, f will not make a copy of a_function_object, nor will f2 when it is targeted to f's reference to a_function_object. Additionally, when using references to function objects, Boost.Function will not throw exceptions during assignment or construction.

Function object wrappers can be compared via == or != against any function object that can be stored within the wrapper. If the function object wrapper contains a function object of that type, it will be compared against the given function object (which must either be EqualityComparable or have an overloaded boost::function_equal). For instance:

int compute_with_X(X*, int);

f = &X::foo;
assert(f == &X::foo);
assert(&compute_with_X != f);

When comparing against an instance of reference_wrapper, the address of the object in the reference_wrapper is compared against the address of the object stored by the function object wrapper:

a_stateful_object so1, so2;
f = boost::ref(so1);
assert(f == boost::ref(so1));
assert(f == so1); // Only if a_stateful_object is EqualityComparable
assert(f != boost::ref(so2));
http://www.boost.org/doc/libs/1_52_0/doc/html/function/tutorial.html
I call this a reverse number guessing game because usually the computer would generate a number for you to guess, but I want the computer to guess my number by guessing randomly in decreasing ranges depending on if I say too high or too low. The biggest problem I'm having is making the range of the random number be correct. It likes to guess numbers out of the range. The computer starts off by giving me a number from 1 - 100. Then I type in "th", "tl", or "co" (too high, too low, correct). If I say too low the lowest number in the range should change to the previous guess and same for if I say too high (but it gets set to the previous highest number instead of previous lowest). This should decrease the range of the random generator until it gets to the point where the range is so small that the computer can only guess my number or it might just get lucky before that (just like a human). Here's the code:

Code :

import java.util.*;

public class computer_guess
{
    public static void main(String[] args)
    {
        Scanner console;
        console = new Scanner(System.in);
        Random aRandom = new Random();
        int prev_low = 1, prev_high = 100,
            guess = aRandom.nextInt(prev_high) + prev_low, guesses = 0;
        // long time = 0;
        boolean win = false;
        String feedback;
        while(!win)
        {
            System.out.println("My guess is: " + guess);
            feedback = console.nextLine();
            // if(feedback.equals("tl") || feedback.equals("th") || feedback.equals("co"))
            {
                if(feedback.equals("tl"))//too low
                {
                    prev_low = guess;
                    System.out.println(feedback);
                }
                else if(feedback.equals("th"))//too high
                {
                    prev_high = guess;
                    System.out.println(feedback);
                }
                else if(feedback.equals("co"))//correct
                {
                    System.out.print("Yay, I WIN!");
                    win = true;
                    System.out.println(feedback);
                }
                guess = aRandom.nextInt(prev_high) + prev_low;
                //++guesses;
                //feedback = " ";
                //System.out.print(feedback);
            }
        }
    }
}
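The out-of-range guesses come from the expression aRandom.nextInt(prev_high) + prev_low: nextInt(n) returns a value in [0, n), so the sum can reach prev_high + prev_low - 1. The usual fix, in any language, is low + rand(high - low + 1). A quick Python sketch of that formula (the function name is mine, not from the post):

```python
import random

def bounded_guess(low, high):
    # Equivalent of the corrected Java expression:
    #   prev_low + aRandom.nextInt(prev_high - prev_low + 1)
    # random.randrange(n) returns an int in [0, n), just like Java's nextInt(n),
    # so adding low shifts the result into the inclusive range [low, high].
    return low + random.randrange(high - low + 1)

# Every guess stays inside the current search range.
guesses = [bounded_guess(10, 20) for _ in range(1000)]
print(min(guesses), max(guesses))
```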
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/8678-reverse-number-guessing-game-printingthethread.html
Introduction

K-Nearest Neighbor (KNN) is a supervised algorithm in machine learning that is used for classification and regression analysis. This algorithm assigns a new data point to a class based on how close, or how similar, that point is to the points in the training data. Here, 'K' represents the number of neighbors that are considered to classify the new data point. KNN is called a lazy learning algorithm because it uses all the data during training for the classification of a new point. In other words, it doesn't learn from the training data; rather, it stores the data and, when a new point is introduced, classifies that point in the course of training.

KNN algorithm

- Choose a suitable value of k (the number of neighbors)
- Calculate the distance between the new point and the k closest points
- Count the number of neighbors in each category
- Assign the new data to the category that has the maximum number of neighbors
- The model is ready

How does KNN work?

- Suppose that we have two categories in the input dataset. The diagram shown below shows the input data having two categories, one with red color and another with green color. We will classify the data in white color using KNN.
- The next step is to choose the number of neighbors, i.e., the value of k. Let's take the value of k to be 5.
- Now, the third step is to calculate the distance between the new point and the other points. Here are some of the methods that are used for the calculation of distance:
  - Euclidean distance: the straight-line distance between two points. For two points (x1, y1) and (x2, y2), the distance is d = sqrt((x2 - x1)^2 + (y2 - y1)^2).
  - Manhattan distance: the distance between two points measured along axes at right angles. For two points (x1, y1) and (x2, y2), the distance is d = |x2 - x1| + |y2 - y1|.
- After calculating the distance between the new point and the other points, we've got the nearest neighbors, i.e., the 5 nearest points with reference to the new point.
- The next step is to count the number of neighbors in each category. As we can see, there are three (3) points in the red category and two (2) points in the green category. So, the new data point belongs to the red category.

Python code for implementation of KNN

For demonstration, we'll be using Jupyter Notebook and the Iris flower classification dataset for the implementation of KNN.

# importing required modules
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, accuracy_score

# read dataframe
df = pd.read_csv("IRIS.csv")

# target and input variable selection
X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values

# do train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# standardizing the data
sc = StandardScaler()
X_train_scaled = sc.fit_transform(X_train)
X_test_scaled = sc.transform(X_test)

# creating model with random value of K
knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
model = knn.fit(X_train_scaled, y_train)

# making prediction
pred = model.predict(X_test_scaled)

# checking accuracy of model
print(classification_report(y_test, pred))

Output

                 precision    recall  f1-score   support
Iris-setosa           1.00      1.00      1.00         9
Iris-versicolor       0.90      0.82      0.86        11
Iris-virginica        0.82      0.90      0.86        10
accuracy                                  0.90        30
macro avg             0.91      0.91      0.90        30
weighted avg          0.90      0.90      0.90        30

The KNN model performance is pretty good: the accuracy is 90%.

How to select the best value for K

We can look at the error produced by different values of k and pick the value at which the error is minimum.
import matplotlib.pyplot as plt

error = []
for k in range(1, 50):
    knn = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
    model = knn.fit(X_train_scaled, y_train)
    pred = model.predict(X_test_scaled)
    error.append(np.mean(pred != y_test))

# plotting graph of error vs value of k
plt.plot(range(1, 50), error)
plt.xlabel("K")
plt.ylabel("Error")
plt.title("Best estimation of k")
plt.show()

Output

As we can see, the error is minimum when k = 25. So, the best value of k is 25. Using the value of k = 25, we can rebuild the model as:

knn = KNeighborsClassifier(n_neighbors=25)
model = knn.fit(X_train_scaled, y_train)

# making prediction
pred = model.predict(X_test_scaled)

# checking accuracy of model
print(classification_report(y_test, pred))

Output

                 precision    recall  f1-score   support
Iris-setosa           1.00      1.00      1.00        12
Iris-versicolor       1.00      1.00      1.00         9
Iris-virginica        1.00      1.00      1.00         9
accuracy                                  1.00        30
macro avg             1.00      1.00      1.00        30
weighted avg          1.00      1.00      1.00        30

After substituting the best value of k (25), the model accuracy has increased to 100%. So, KNN is useful for classification problems in machine learning.

Conclusion

The K-Nearest Neighbors (KNN) algorithm is a supervised machine learning algorithm. It is applicable for both classification and regression purposes. The algorithm assigns new data to a category by analyzing how similar the data is to that category. To determine the best k for KNN, it is important to calculate the error associated with different values of k; after doing so, we choose the value with the lowest error. Data must be standardized before training the model, as KNN is a distance-based algorithm for classification.

Happy Learning 🙂
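The two distance formulas described earlier can be written as small helper functions. This is a sketch for illustration only; in the code above, scikit-learn computes the distances internally via the metric and p parameters of KNeighborsClassifier.

```python
import math

def euclidean(p, q):
    # Straight-line distance: sqrt((x2 - x1)^2 + (y2 - y1)^2)
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def manhattan(p, q):
    # Right-angle (taxicab) distance: |x2 - x1| + |y2 - y1|
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

print(euclidean((0, 0), (3, 4)))  # 5.0
print(manhattan((0, 0), (3, 4)))  # 7
```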
https://pythonsansar.com/k-nearest-neighbor-knn-in-machine-learning/
Euler problems/181 to 190

From HaskellWiki — latest revision as of 13:39, 25 November 2011

Problem 181: Investigating in how many ways objects of two different colours can be grouped.

Problem 184: Triangles containing the origin.

Problem 185: Number Mind

Solution: This approach does NOT solve the problem in under a minute, unless of course you are extremely lucky. The best time I've seen so far has been about 76 seconds. Before I came up with this code, I tried to search for the solution by generating a list of all possible solutions based on the information given in the guesses. This was feasible with the 5-digit problem, but completely intractable with the 16-digit problem. The approach here, simple yet effective, is to make a random guess, and then vary each digit in the guess from [0..9], generating a score of how well the guess matched the given Number Mind clues. You then improve the guess by selecting those digits that had a unique low score. It turns out this approach converges rather quickly, but can often be stuck in cycles, so we test for this and try a different random first guess if a cycle is detected. Once you run the program, you might have time for a cup of coffee, or maybe even a dinner.
HenryLaxen 2008-03-12 import Data.List import Control.Monad import Data.Char import System.Random type Mind = [([Char],Int)] values :: [Char] values = "0123456789" score :: [Char] -> [Char] -> Int score guess answer = foldr (\(a,b) y -> if a == b then y+1 else y) 0 (zip guess answer) scores :: Mind -> [Char] -> [Int] scores m g = map (\x -> abs ((snd x) - score (fst x) g)) m scoreMind :: Mind -> [Char] -> Int scoreMind m g = sum $ scores m g ex1 :: Mind ex1 = [("90342",2), ("39458",2), ("51545",2), ("34109",1), ("12531",1), ("70794",0)] ex2 :: Mind ex2 = [ ("5616185650518293",2), ("3847439647293047",1), ("5855462940810587",3), ("9742855507068353",3), ("4296849643607543",3), ("3174248439465858",1), ("4513559094146117",2), ("7890971548908067",3), ("8157356344118483",1), ("2615250744386899",2), ("8690095851526254",3), ("6375711915077050",1), ("6913859173121360",1), ("6442889055042768",2), ("2321386104303845",0), ("2326509471271448",2), ("5251583379644322",2), ("1748270476758276",3), ("4895722652190306",1), ("3041631117224635",3), ("1841236454324589",3), ("2659862637316867",2)] guesses :: [Char] -> Int -> [[Char]] guesses str pos = [ left ++ n:(tail right) | n<-values] where (left,right) = splitAt pos str bestGuess :: Mind -> [[Char]] -> [Int] bestGuess mind guesses = main Here's another solution, and this one squeaks by in just under a minute on my machine. The basic idea is to just do a back-tracking search, but with some semi-smart pruning and guess ordering. The code is in pretty much the order I wrote it, so most prefixes of this code should also compile. This also means you should be able to figure out what each function does one at a time. 
import Control.Monad
import Data.List
import qualified Data.Set as S

ensure p x = guard (p x) >> return x

selectDistinct 0 _ = [[]]
selectDistinct n [] = []
selectDistinct n (x:xs) =
    map (x:) (selectDistinct (n - 1) xs) ++ selectDistinct n xs

data Environment a = Environment
    { guesses :: [(Int, [a])]
    , restrictions :: [S.Set a]
    , assignmentsLeft :: Int
    } deriving (Eq, Ord, Show)

reorder e = e { guesses = sort . guesses $ e }

domain = S.fromList "0123456789"

initial = Environment gs (replicate a S.empty) a
  where
    a = length . snd . head $ gs
    gs = [(2, "5616185650518293"), (1, "3847439647293047"),
          (3, "5855462940810587"), (3, "9742855507068353"),
          (3, "4296849643607543"), (1, "3174248439465858"),
          (2, "4513559094146117"), (3, "7890971548908067"),
          (1, "8157356344118483"), (2, "2615250744386899"),
          (3, "8690095851526254"), (1, "6375711915077050"),
          (1, "6913859173121360"), (2, "6442889055042768"),
          (0, "2321386104303845"), (2, "2326509471271448"),
          (2, "5251583379644322"), (3, "1748270476758276"),
          (1, "4895722652190306"), (3, "3041631117224635"),
          (3, "1841236454324589"), (2, "2659862637316867")]

acceptableCounts e = small >= 0 && big <= assignmentsLeft e
  where
    ns = (0:) . map fst . guesses $ e
    small = minimum ns
    big = maximum ns

positions s = map fst . filter (not . snd) . zip [0..] . zipWith S.member s

acceptableRestriction r (n, s) = length (positions s r) >= n

acceptableRestrictions e =
    all (acceptableRestriction (restrictions e)) (guesses e)

firstGuess = head . guesses

sanityCheck e = acceptableRestrictions e && acceptableCounts e

solve e@(Environment _ _ 0) = [[]]
solve e@(Environment [] r _) = mapM (S.toList . (domain S.\\)) r
solve e' = do
    is <- m
    newE <- f is
    rest <- solve newE
    return $ interleaveAscIndices is (l is) rest
  where
    f = ensure sanityCheck . update e
    m = selectDistinct n (positions g (restrictions e))
    e = reorder e'
    l = fst . flip splitAscIndices g
    (n, g) = firstGuess e

splitAscIndices = indices 0
  where
    indices _ [] xs = ([], xs)
    indices n (i:is) (x:xs)
        | i == n = let (b, e) = indices (n + 1) is xs in (x:b, e)
        | True   = let (b, e) = indices (n + 1) (i:is) xs in (b, x:e)

interleaveAscIndices = indices 0
  where
    indices _ [] [] ys = ys
    indices n (i:is) (x:xs) ys
        | i == n = x : indices (n + 1) is xs ys
        | True   = head ys : indices (n + 1) (i:is) (x:xs) (tail ys)

update (Environment ((_, a):gs) r l) is =
    Environment newGs restriction (l - length is)
  where
    (assignment, newRestriction) = splitAscIndices is a
    (_, oldRestriction) = splitAscIndices is r
    restriction = zipWith S.insert newRestriction oldRestriction
    newGs = map updateEntry gs
    updateEntry (n', a') = (newN, newA)
      where
        (dropped, newA) = splitAscIndices is a'
        newN = n' - length (filter id $ zipWith (==) assignment dropped)

problem_185 = head . solve $ initial

Problem 187 Semiprimes

Solution: The solution to this problem isn't terribly difficult, once you know that the numbers the problem is referring to are called semiprimes. In fact there is an excellent write-up at: which provides an explicit formula for the number of semiprimes less than a given number. The problem with this formula is the use of pi(n), where pi is the number of primes less than n. For Mathematica users this is no problem since the function is built in, but we poor Haskellers have to build it ourselves. This is what took the most time for me: to come up with a pi(n) function that can compute pi(50000000) in less than the lifetime of the universe. Thus I embarked on a long and circuitous journey that eventually led me to the PI module below, which does little more than read a 26MB file of prime numbers into memory and compute a map from which we can calculate pi(n). I am sure there must be a better way of doing this, and I look forward to this entry being amended (or replaced) with a more reasonable solution.

HenryLaxen Mar 24, 2008

module PI (prime,primes,pI) where

import Control.Monad.ST
import System.IO.Unsafe
import qualified Data.Map as M
import Data.ByteString.Char8 (readFile,lines,readInt)
import Data.Maybe
import Data.Array.ST
import Data.Array.IArray

r = runSTUArray
    (do a <- newArray (0,3001134) 0 :: ST s (STUArray s Int Int)
        writeArray a 0 1
        zipWithM_ (writeArray a) [0..] primes
        return a)

{-# NOINLINE s #-}
s = M.fromDistinctAscList $ map (\(x,y) -> (y,x)) (assocs r)

prime n = r!n

pI :: Int -> Int
pI n = case M.splitLookup n s of
    (_,Just x,_) -> (x+1)
    (_,_,b)      -> snd (head (M.assocs b))

{-# NOINLINE primes #-}
primes :: [Int]
primes = unsafePerformIO $ do
    l <- Data.ByteString.Char8.readFile "primes_to_5M.txt"
    let p = Data.ByteString.Char8.lines l
        r = map (fst . fromJust . readInt) p :: [Int]
    return r

import PI
import Data.List

s n k = (n `div` (prime (k-1))) - k + 1

semiPrimeCount n =
    let last = pI $ floor $ sqrt (fromIntegral n)
        s k = pI (n `div` (prime (k-1))) - k + 1
        pI2 = foldl' (\x y -> s y + x) 0 [1..last]
    in pI2

main = print (semiPrimeCount 100000000)

The file primes_to_5M.txt is:

2
3
5
..
49999991
50000017

Problem 188 hyperexponentiation

The idea here is actually very simple. Euler's theorem tells us that a^phi(n) mod n = 1 when a is coprime to n, so the exponent never needs to grow above 40000000 in this case, which is phi(10^8). Henry Laxen April 6, 2008

fastPower x 1 modulus = x `mod` modulus
fastPower x n modulus
    | even n = fastPower ((x*x) `mod` modulus) (n `div` 2) modulus
    | otherwise = (x * (fastPower x (n-1) modulus)) `mod` modulus

modulus = 10^8
phi = 40000000 -- eulerTotient of 10^8

a :: Int -> Int -> Int
a n 1 = n
-- a n k = n^(a n (k-1))
a n k = fastPower n (a n (k-1) `mod` phi) modulus

problem_188 = a 1777 1855
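The same exponent reduction is easy to state outside Haskell. A sketch in Python (not from the wiki page): the built-in three-argument pow plays the role of fastPower, and each level of the tower is reduced mod phi(10^8) = 40000000, which is valid here because the base 1777 is coprime to 10^8.

```python
MOD = 10**8
PHI = 40_000_000  # Euler's totient of 10**8

def hyper_exp(n, k):
    """Last 8 digits of the tower n^(n^(...^n)) of height k.

    Relies on gcd(n, 10**8) == 1, so a^e == a^(e mod PHI) (mod MOD).
    """
    if k == 1:
        return n
    # pow(base, exp, mod) is fast modular exponentiation.
    return pow(n, hyper_exp(n, k - 1) % PHI, MOD)
```

As a quick check, hyper_exp(2, 3) gives 16 (since 2^(2^2) = 16) and hyper_exp(3, 2) gives 27.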
https://wiki.haskell.org/index.php?title=Euler_problems/181_to_190&diff=43147&oldid=32439
Adaptive time-step integration of ODEs in Python

Overview

The ability to solve a system of ordinary differential equations accurately is fundamental to any work involving dynamic systems. In this post we look at how to solve such systems using the routines available in scipy.integrate.

Adaptive time-step integration allows us to capture the different time scales in our system. If the system is changing rapidly the time step will reduce, whereas when the system is evolving slowly a larger time step will be used, keeping the overall simulation time to a minimum. In scipy the 4/5th order Runge-Kutta method of Dormand and Prince has been implemented under the moniker dopri5. This is the method used in Matlab's ode45 routine.

Spring-mass-damper system

As a motivating example, let's solve the classic spring-mass-damper system. This system can be described by the second-order differential equation My″ + Cy′ + ky = 0, where a prime denotes the derivative with respect to time. Our task is to solve the equation for the position of the body y.

Solving systems of ODEs

In general, ODE solvers solve systems of the form y′(t) = f(t, y), meaning we need to create a function which has arguments t, y and returns y′. The solver then integrates y′ to give us the desired y. We therefore need to write our second-order ODE as a system of first-order ODEs: to do this, start by introducing a new variable u = y′, so that from our system above u′ = (−Cy′ − ky)/M. In order to solve the system we are going to use the scipy.integrate.ode() class, which takes as its only (required) argument the function that we want to integrate.
We can define this function as follows:

import scipy.integrate
import matplotlib.pyplot as plt
import numpy as np

def smd_ode(t, Y, M, C, k):
    '''Spring-mass-damper system as a set of 1st order ODEs

    Inputs:
        t - time
        Y - state vector of position and velocity [y, y']
        M - mass of body
        C - damping coefficient
        k - spring constant
    '''
    # Unpack the state vector
    y = Y[0]
    yp = Y[1]

    # Our ODE system
    u = yp
    up = (-C*yp - k*y) / M

    # Return our system
    return [u, up]

Example solution

As an example, take M = 1000 kg, C = 150 Ns/m and k = 250 N/m for a system with an initial displacement of y0 = 5 m and zero initial velocity. The solution has the following steps:

- Define the problem parameters
- Create the integration object using the function we want to integrate
- Set the type of integrator we want to use
- Set the initial values of the problem
- Set the additional parameters we want to pass to the function we want to integrate

In code, this sequence takes the form

# Simulation parameters
M = 1000   # kg
C = 150    # Ns/m
k = 250    # N/m
y0 = 5     # m
yp0 = 0    # m/s
tsim = 50  # s

# Set up the ode-solver object
intobj = scipy.integrate.ode(smd_ode)
intobj.set_integrator('dopri5')
intobj.set_initial_value([y0, yp0], 0.0)
intobj.set_f_params(M, C, k)

# Function to call at each timestep
# (solout is defined next; in a script it must be defined before this call)
intobj.set_solout(solout)

In order to retrieve our solution we make use of the set_solout method, which takes a function to be called at the end of each time step. We can therefore save the time and solution at each step using:

sol = []
def solout(t, y):
    sol.append([t, *y])

The "*" in front of the y is critical to unpack y from a sequence.

Having set up the solution, we can finally call the integrate method with the simulation time and plot the results.

# Perform the integration
intobj.integrate(tsim)

# Convert our solution into a numpy array for convenience
asol = np.array(sol)

# Plot everything
plt.figure()
plt.plot(asol[:,0], asol[:,1], 'b.-', markerfacecolor='b')
plt.xlabel('t (s)')
plt.ylabel('y (m)')
plt.grid()

plt.figure()
plt.plot(asol[:,0], asol[:,2], 'b.-', markerfacecolor='b')
plt.xlabel('t (s)')
plt.ylabel('y\' (m/s)')
plt.grid()

plt.show()

Results

The figures below show the results for this particular problem. We can see that the position starts at 5 m as we specified, then decays over time due to the damping in the system.

Please let us know if there are other topics related to scientific computing in python that you would like us to cover – just send us an email or leave a comment.
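For this linear system, the numerical result can be cross-checked against the closed-form solution of an underdamped oscillator. A sketch (not part of the original post) using the same parameters, where zeta is the damping ratio and wd the damped natural frequency:

```python
import math

# Same parameters as in the post
M, C, k = 1000.0, 150.0, 250.0
y0, v0 = 5.0, 0.0

wn = math.sqrt(k / M)                 # natural frequency, rad/s
zeta = C / (2.0 * math.sqrt(k * M))   # damping ratio: 0.15 -> underdamped
wd = wn * math.sqrt(1.0 - zeta**2)    # damped natural frequency

def y_analytic(t):
    """Closed-form response of the underdamped spring-mass-damper."""
    decay = math.exp(-zeta * wn * t)
    A = y0
    B = (v0 + zeta * wn * y0) / wd
    return decay * (A * math.cos(wd * t) + B * math.sin(wd * t))
```

y_analytic(0.0) returns the initial displacement of 5 m, and the envelope decays like exp(−ζωₙt), matching the plotted behaviour.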
https://www.evergreeninnovations.co/tech-python-ode/
Hello Dave Griffith, Bas Leijdekkers and all,

I would like to enhance the functionality of ReplaceFullyQualifiedNameWithImportIntention. Currently this intention:

- is replacing the FullyQualifiedName only for the current element; if I have more elements, like the sample below, it will replace only the selected one. I would like to replace them all.
- is adding an import even if the class that we refactor is in the same package.
- on expressions like this: org.intellij.idea.plugin.removeFQN.Test.foo() the intention is not working;
- if we try to apply the intention to an element that is in the same class as the element, like the sample below, it will not work :(

Ex:

package org.intellij.idea.plugin.removeFQN;

import java.lang.*;

public class Test {
    static org.intellij.idea.plugin.removeFQN.Test test;

    public static void main(String[] args) {
        String a = "org.intellij.idea.plugin.removeFQN.Test";
        org.intellij.idea.plugin.removeFQN.Test t;
        org.intellij.idea.plugin.removeFQN.Test.foo();
        org.intellij.idea.plugin.removeFQN.Test.test.foo();
        org.intellij.idea.plugin.removeFQN.Test.test.test1.foo();
        org.intellij.idea.plugin.removeFQN.Test.test.test1.test2.foo();
    }

    public static org.intellij.idea.plugin.removeFQN.Test foo() {
        return null;
    }
}

- If we try to refactor any line from this sample, it will not work.

I have a plugin that handles the above issues. I didn't know about this intention :( and I was planning to win the contest :) with this plugin, or at least to get a t-shirt :), but now that I have found that this is partially implemented in IDEA, I would like to integrate my work with this intention, and maybe I will get a t-shirt :)

So I would like Dave Griffith, Bas Leijdekkers or somebody else that is working on this intention to comment on this, and if you think that my work should be integrated in this intention, please tell me how I can submit it.

Thanks,
Dan

Ps.
For somebody from JetBrains: please read this and answer this question: will I qualify for a t-shirt at least :)) or do I need to start a new plugin :)?

Hello Dave Griffith, Bas Leijdekkers and all,

Hello again. Can somebody help me resolve this?

Thanks,
Dan

What's the problem? If you have a plugin, just upload it and cross your fingers. Here are the judging criteria. You think you pass them, go on, be a tiger. Another option is to write an email to Alex Tkachman with the title: "Give me please that bloody t-shirt" :)

Thank you for your suggestion. Maybe I will submit this. I know that this is already in IDEA, and I come only with some improvements, and my question was something like this: should I submit a plugin for this, and then we will have two inspections for the same thing, one better than the other :), or do I need to give the code to someone from JetBrains or to this inspection's author to include this also in his plugin?

Thanks,
Dan
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206114909-ReplaceFullyQualifiedNameWithImportIntention
NAME
       raw, SOCK_RAW - Linux IPv4 raw sockets

SYNOPSIS
       #include <sys/socket.h>
       #include <netinet/in.h>

       raw_socket = socket(PF.

NOTES
       Raw sockets fragment a packet when its total length exceeds the
       interface MTU (but see BUGS). A more network friendly and faster
       alternative is to implement path MTU discovery as described in the
       IP_MTU_DISCOVER section of ip(7). A workaround is to use IP_HDRINCL.

ERRORS
       EMSGSIZE
              Packet too big. Either Path MTU Discovery is enabled (the IP_M

       EPERM  The user doesn't have permission to open raw sockets. Only
              processes with an effective user ID of 0 or the CAP_NET_RAW
              attribute may do that.

       flag was set - that has been removed in 2.2.

BUGS
       Transparent proxy extensions are not described.

       When the IP_HDRINCL option is set datagrams will not be fragmented
       and are limited to the interface MTU. This is a limitation in
       Linux 2.2.

       Setting the IP protocol for sending in sin_port got lost in Linux
       2.2. The protocol that the socket was bound to or that was specified
       in the initial socket(2) call is always used.

AUTHORS
       This man page was written by Andi Kleen.

SEE ALSO
       recvmsg(2), sendmsg(2), capabilities(7), ip(7), socket(7)

       RFC 1191 for path MTU discovery.

       RFC 791 and the <linux/ip.h> include file for the IP protocol.
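With IP_HDRINCL the application supplies the IPv4 header itself; on Linux the kernel fills in the header checksum for you, but it is instructive to compute it anyway since that is what goes on the wire. A sketch of the RFC 791 ones'-complement header checksum in Python (not part of the man page):

```python
import struct

def ip_checksum(header: bytes) -> int:
    """RFC 791 header checksum: ones'-complement sum of 16-bit words."""
    if len(header) % 2:              # pad odd-length input with a zero byte
        header += b"\x00"
    words = struct.unpack("!%dH" % (len(header) // 2), header)
    total = sum(words)
    while total >> 16:               # fold the carries back in (end-around carry)
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A well-known sample IPv4 header, checksum field zeroed out:
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(ip_checksum(hdr)))         # 0xb861
```

Verifying a header that already carries a correct checksum yields 0, which is how receivers validate it.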
http://manpages.ubuntu.com/manpages/dapper/man7/raw.7.html
Gravatar

Having used ASP.NET MVC, the view that was displaying the comments had something like the following in it. This displayed the name of the commenter as a hyperlink, pointing to a mailto: link with the email address. The address had some transformations applied to prevent spammers from picking it from the page automatically:

<%= Html.Encode ( SomeBlogNamespace.SpamFreeEmail ( Model.Comment.CommentedEmail ) )%>

Now I did not want to change the model at all, but still wanted to add support for Gravatars. For this, I needed a method that could calculate the MD5 hash and display it in the format Gravatar wants it. As I have my own base classes and model classes, I could have added the code there, but decided to extend the MVC HtmlHelper instead. Using extension methods, that is really simple. I just added a new class to hold my extension method:

public static class GravatarHelper
{
    public static string GravatarHash ( this HtmlHelper html, string value )
    {
        var byteArray = MD5CryptoServiceProvider.Create ().ComputeHash (
            System.Text.ASCIIEncoding.ASCII.GetBytes ( value.ToLower () ) );

        string result = string.Empty;
        foreach ( var item in byteArray )
        {
            result += item.ToString ( "x2" );
        }
        return result;
    }
}

I also needed to add a namespace import into the .ASCX file that contained my comments view:

<%@ Import Namespace="namespace" %>

After that I could change the view to display the picture from Gravatar by calculating the hash from within the view code:

<%= Html.GravatarHash ( Model.Comment.CommentedEmail ) %>

If you make a comment into this blog now, you can take advantage of Gravatars by giving your Gravatar registered email address. If you have a registered avatar, the blog will display the image beside your comment.

This same method can be used to extend HtmlHelper in many different ways, adding small (or big) utilities that you can code and use from the View in ASP.NET MVC.
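For comparison, Gravatar's documented scheme is simply: trim the address, lowercase it, then take the MD5 hex digest. A sketch of the same hash in Python (not part of the original post):

```python
import hashlib

def gravatar_hash(email: str) -> str:
    """MD5 hex digest of the trimmed, lowercased address (Gravatar's scheme)."""
    return hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()

# The hash is then embedded in the avatar URL:
url = "https://www.gravatar.com/avatar/" + gravatar_hash(" John.Doe@Example.com ")
```

Because of the trim-and-lowercase step, differently cased or padded spellings of the same address map to the same avatar.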
http://blog.rebuildall.net/2009/09/24/Extending_ASP_NET_MVC_HtmlHelper_and_Gravatars
# Date Processing Attracts Bugs or 77 Defects in Qt 6

![PVS-Studio & Qt 6](https://habrastorage.org/r/w1560/webt/rf/oj/f_/rfojf_keq9czr_egygfakiowdby.png)

The recent Qt 6 release compelled us to recheck the framework with PVS-Studio. In this article, we reviewed various interesting errors we found, for example, those related to processing dates. The errors we discovered prove that developers can greatly benefit from regularly checking their projects with tools like PVS-Studio.

This is a standard article that reports the results of an open-source project check. This article will add to our "[evidence base](https://www.viva64.com/en/inspections/)" that demonstrates how useful and effective PVS-Studio is in code quality control. Though we have already checked the Qt project in the past ([in 2011](https://www.viva64.com/en/a/0075/), [2014](https://www.viva64.com/en/b/0251/), and [2018](https://www.viva64.com/en/b/0584/)), rechecking the framework was worth it. The new check's result supported a simple, but very important idea: static analysis should be used regularly!

Our articles show that the PVS-Studio analyzer can find a wide variety of errors. Project authors often quickly fix the errors we describe. However, all this has nothing to do with the benefits of regular static code analysis. When static code analysis is built into the development process, developers quickly find and fix errors in new or recently edited code. Fixing code at this stage is the cheapest.

Alright, enough theory! Let's take a look at what the Qt 6 code has in store for us. And while you are reading this article, why don't you [download PVS-Studio](https://www.viva64.com/en/pvs-studio-download/) and request a trial key. See for yourself what the static analyzer can find in your projects :).

Dates
-----

Lately we've been noticing one more code pattern that tends to attract an increasing number of bugs.
Of course, these code fragments are not as significant as [comparison functions](https://www.viva64.com/en/b/0509/) or the [last line](https://www.viva64.com/en/b/0260/) in similar code blocks. We are talking about code that works with dates. Such code can be difficult to test. So it comes as no surprise that these untested functions may process some arguments inadequately and return an incorrect result. We've already described a couple of similar cases in the following article: "[Why PVS-Studio Doesn't Offer Automatic Fixes](https://www.viva64.com/en/b/0776/)".

Qt also fell prey to that trend and has occasional problems with code that processes dates. So here is where we start.

**Fragment #1: Error Status Misinterpreted**

First, let's see how the developer wrote the function that accepts a month's abbreviated name and returns its number.

```
static const char qt_shortMonthNames[][4] = {
    "Jan", "Feb", "Mar", "Apr", "May", "Jun",
    "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
};

static int fromShortMonthName(QStringView monthName)
{
    for (unsigned int i = 0;
         i < sizeof(qt_shortMonthNames) / sizeof(qt_shortMonthNames[0]); ++i) {
        if (monthName == QLatin1String(qt_shortMonthNames[i], 3))
            return i + 1;
    }
    return -1;
}
```

If successful, the function returns the month number (a value from 1 to 12). If the month's name is incorrect, the function returns a negative value (-1). Note that the function cannot return 0.

However, the function above is used where the developer expects it to return null in case of an error. Here is the code fragment that uses the *fromShortMonthName* function incorrectly:

```
QDateTime QDateTime::fromString(QStringView string, Qt::DateFormat format)
{
    ....
    month = fromShortMonthName(parts.at(1));
    if (month)
        day = parts.at(2).toInt(&ok);

    // If failed, try day then month
    if (!ok || !month || !day) {
        month = fromShortMonthName(parts.at(2));
        if (month) {
            QStringView dayPart = parts.at(1);
            if (dayPart.endsWith(u'.'))
                day = dayPart.chopped(1).toInt(&ok);
        }
    }
    ....
}
```

The program never reaches the code that checks the month number for null, and continues to run with an incorrect negative month number. The PVS-Studio analyzer sees a whole bunch of inconsistencies here and reports them with four warnings at once:

* V547 [CWE-571] Expression 'month' is always true. qdatetime.cpp 4907
* V560 [CWE-570] A part of conditional expression is always false: !month. qdatetime.cpp 4911
* V547 [CWE-571] Expression 'month' is always true. qdatetime.cpp 4913
* V560 [CWE-570] A part of conditional expression is always false: !month. qdatetime.cpp 4921

**Fragment #2: Error in Date Processing Logic**

Let's take a look at the function that returns a number of seconds.

```
enum {
    ....
    MSECS_PER_DAY = 86400000,
    ....
    SECS_PER_MIN = 60,
};

int QTime::second() const
{
    if (!isValid())
        return -1;

    return (ds() / 1000)%SECS_PER_MIN;
}
```

The function above can return a value in the range of [0..59] or an error status of -1. Here is one location where the use of this function is very strange:

```
static qint64 qt_mktime(QDate *date, QTime *time, ....)
{
    ....
    } else if (yy == 1969 && mm == 12 && dd == 31
               && time->second() == MSECS_PER_DAY - 1) {
        // There was, of course, a last second in 1969, at time_t(-1); we won't
        // rescue it if it's not in normalised form, and we don't know its DST
        // status (unless we did already), but let's not wantonly declare it
        // invalid.
    } else {
        ....
    }
    ....
}
```

PVS-Studio warns: V560 [CWE-570] A part of conditional expression is always false: time->second() == MSECS_PER_DAY - 1. qdatetime.cpp 2488

The comment in the code tells us that if something goes wrong, it's better to do nothing.
However, the condition always evaluates to false and the else branch is always executed. Here is the comparison that's incorrect:

```
time->second() == MSECS_PER_DAY - 1
```

"MSECS_PER_DAY - 1" equals 86399999. As we already know, the *second* function cannot return this value. This means the code has some logical error and requires refactoring.

Static analyzers are powerful in the sense that they check all scenarios, no matter how infrequent they are. Thus, static analysis is a good addition to unit tests and other code quality control tools.

Typos
-----

**Fragment #3: Suddenly, Let's Talk About… HTML!**

```
QString QPixelTool::aboutText() const
{
    const QList<QScreen *> screens = QGuiApplication::screens();
    const QScreen *windowScreen = windowHandle()->screen();
    QString result;
    QTextStream str(&result);
    str << "<html><body><h2>Qt Pixeltool</h2>Qt " << QT_VERSION_STR
        << "<br>Copyright (C) 2017 The Qt Company Ltd.<h3>Screens</h3><ul>";
    for (const QScreen *screen : screens)
        str << "<li>" << (screen == windowScreen ? "<i>" : "") << screen << "</li>";
    str << "<ul></body>";
    return result;
}
```

PVS-Studio warns: V735 Possibly an incorrect HTML. The "</body>" closing tag was encountered, while the "</ul>" tag was expected. qpixeltool.cpp 707

PVS-Studio contains diagnostics that don't just check code — they also look for abnormalities in string constants. The code above triggered one of these diagnostics. Such cases are quite rare, and that's what makes this one so intriguing.

Someone intended to create one list, but added two tags that open this list instead of one. This is clearly a typo. The first tag must open the list, and the second one needs to close it. Here is the correct code:

```
str << "</ul></body>";
```

**Fragment #4: A Double Check Within One Condition**

```
class Node
{
    ....
    bool isGroup() const { return m_nodeType == Group; }
    ....
};

void DocBookGenerator::generateDocBookSynopsis(const Node *node)
{
    ....
    if (node->isGroup() || node->isGroup()
            || node->isSharedCommentNode() || node->isModule()
            || node->isJsModule() || node->isQmlModule() || node->isPageNode())
        return;
    ....
}
```

PVS-Studio warns: V501 [CWE-570] There are identical sub-expressions to the left and to the right of the '||' operator: node->isGroup() || node->isGroup() docbookgenerator.cpp 2599

This is a common typo. The fix depends on what this code is expected to achieve. If the check is duplicated by accident, one can delete it. A different scenario is also possible: some other necessary condition has been left out.

**Fragment #5: One Too Many Local Variables**

```
void MainWindow::addToPhraseBook()
{
    ....
    QString selectedPhraseBook;
    if (phraseBookList.size() == 1) {
        selectedPhraseBook = phraseBookList.at(0);
        if (QMessageBox::information(this, tr("Add to phrase book"),
                tr("Adding entry to phrasebook %1").arg(selectedPhraseBook),
                QMessageBox::Ok | QMessageBox::Cancel,
                QMessageBox::Ok) != QMessageBox::Ok)
            return;
    } else {
        bool okPressed = false;
        QString selectedPhraseBook = QInputDialog::getItem(this,
            tr("Add to phrase book"),
            tr("Select phrase book to add to"),
            phraseBookList, 0, false, &okPressed);
        if (!okPressed)
            return;
    }
    MessageItem *currentMessage = m_dataModel->messageItem(m_currentIndex);
    Phrase *phrase = new Phrase(currentMessage->text(),
                                currentMessage->translation(),
                                QString(), nullptr);
    phraseBookHash.value(selectedPhraseBook)->append(phrase);
}
```

If you want, you can test your attention to detail and look for the error yourself. I'll even move the text down for you so that you don't see the spoiler right away. Here is a beautiful unicorn from our old collection. Maybe you haven't even seen it before :).

![Old unicorn](https://habrastorage.org/r/w1560/webt/w1/ft/p1/w1ftp1jvnp8cbzjeabn92p28h3u.png)

PVS-Studio warns: V561 [CWE-563] It's probably better to assign value to 'selectedPhraseBook' variable than to declare it anew. Previous declaration: mainwindow.cpp, line 1303. mainwindow.cpp 1313

The text that originates from either of the conditional operator's branches needs to be recorded to the *selectedPhraseBook* variable. The developer felt the variable's name was too long to write it out again and copied it from the line that declares the variable. It looks like the developer hurried a little and copied the type of the variable as well:

```
QString selectedPhraseBook =
```

As a result, the else block contains an excessive local string variable that is initialized, but never used. Meanwhile, the original variable that should have been assigned a value remains empty.

**Fragment #6: Operation Priority**

This is a classic error pattern that we encounter quite [frequently](https://www.viva64.com/en/examples/v593/).

```
bool QQmlImportInstance::resolveType(....)
{
    ....
    if (int icID = containingType.lookupInlineComponentIdByName(typeStr) != -1) {
        *type_return = containingType.lookupInlineComponentById(icID);
    } else {
        auto icType = createICType();
        ....
    }
    ....
}
```

PVS-Studio warns: V593 [CWE-783] Consider reviewing the expression of the 'A = B != C' kind. The expression is calculated as following: 'A = (B != C)'. qqmlimport.cpp 754

The *icID* variable always has a value of 0 or 1. This is clearly not what the developer intended to do. Here's the reason: the comparison to -1 comes first, and then the *icID* variable is initialized. You can use modern C++ syntax to phrase the condition correctly — as shown below:

```
if (int icID = containingType.lookupInlineComponentIdByName(typeStr); icID != -1)
```

By the way, I have already [seen](https://www.viva64.com/en/b/0251/) a similar error in Qt before:

```
char ch;
while (i < dataLen && ((ch = data.at(i) != '\n') && ch != '\r'))
    ++i;
```

This demonstrates that developers will keep making the same mistakes over and over again until they integrate an analyzer like PVS-Studio into the development process. No one is perfect. Yes, this is a subtle hint that you should start using PVS-Studio :).
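The bind-and-compare trap from Fragment #6 is not unique to C++. A small Python sketch of the same class of mistake (the names here are made up for illustration): when the comparison binds before the assignment, the variable ends up holding a boolean instead of the index.

```python
def lookup_inline_component(name, table):
    """Return the index of name in table, or -1 on failure (hypothetical stand-in)."""
    return table.index(name) if name in table else -1

table = ["a", "b", "c"]

# Buggy phrasing: the comparison happens first, so ic_id is True/False,
# not the index 2 (the analogue of `if (int icID = f() != -1)` in C++).
ic_id = lookup_inline_component("c", table) != -1
assert ic_id is True

# Correct: bind first, then compare (Python 3.8 walrus operator),
# the analogue of `if (int icID = f(); icID != -1)` in C++17.
if (ic_id := lookup_inline_component("c", table)) != -1:
    found = ic_id
assert found == 2
```

The fix in both languages is the same idea: give the binding and the comparison separate, explicitly ordered steps.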
**Fragment #7: The Evil Modulus Division**

Often, you may need to determine whether a number is divisible by 2 without remainder. The correct way to do this is to do a modulo division by two and check the result:

```
if (A % 2 == 1)
```

However, the developers may write something like this instead:

```
if (A % 1 == 1)
```

This is wrong because the remainder of the modulo division by one is always zero. Qt also has this error:

```
bool loadQM(Translator &translator, QIODevice &dev, ConversionData &cd)
{
    ....
    case Tag_Translation: {
        int len = read32(m);
        if (len % 1) {                            // <=
            cd.appendError(QLatin1String("QM-Format error"));
            return false;
        }
        m += 4;
        QString str = QString((const QChar *)m, len/2);
    ....
}
```

PVS-Studio warns: V1063 The modulo by 1 operation is meaningless. The result will always be zero. qm.cpp 549

**Fragment #8: Overwriting a Value**

```
QString Node::qualifyQmlName()
{
    QString qualifiedName = m_name;
    if (m_name.startsWith(QLatin1String("QML:")))
        qualifiedName = m_name.mid(4);
    qualifiedName = logicalModuleName() + "::" + m_name;
    return qualifiedName;
}
```

PVS-Studio warns: V519 [CWE-563] The 'qualifiedName' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 1227, 1228. node.cpp 1228

As far as I understand, the developer accidentally used a wrong variable name. I assume the code should read as follows:

```
QString qualifiedName = m_name;
if (m_name.startsWith(QLatin1String("QML:")))
    qualifiedName = m_name.mid(4);
qualifiedName = logicalModuleName() + "::" + qualifiedName;
return qualifiedName;
```

**Fragment #9: Copy and Paste**

```
class Q_CORE_EXPORT QJsonObject
{
    ....
    bool operator<(const iterator& other) const
    { Q_ASSERT(item.o == other.item.o); return item.index < other.item.index; }

    bool operator<=(const iterator& other) const
    { Q_ASSERT(item.o == other.item.o); return item.index < other.item.index; }
    ....
};
```

PVS-Studio warns: V524 It is odd that the body of '<=' function is fully equivalent to the body of '<' function. qjsonobject.h 155

No one checks boring functions like comparison operators. No one writes tests for them. Developers may take a quick look at them during code review — or skip them altogether. But that's a [bad idea](https://www.viva64.com/en/b/0509/). And that's where static code analysis comes in handy. The analyzer never gets tired and is happy to check even boring code snippets.

Here the < and <= operators are implemented in the same way. This is certainly wrong. The developer may have found the code elsewhere, copy-pasted it, and then forgot to customize it. Here is the correct code:

```
bool operator<(const iterator& other) const
{ Q_ASSERT(item.o == other.item.o); return item.index < other.item.index; }

bool operator<=(const iterator& other) const
{ Q_ASSERT(item.o == other.item.o); return item.index <= other.item.index; }
```

**Fragment #10: static_cast / dynamic_cast**

```
void QSGSoftwareRenderThread::syncAndRender()
{
    ....
    bool canRender = wd->renderer != nullptr;

    if (canRender) {
        auto softwareRenderer = static_cast<QSGSoftwareRenderer *>(wd->renderer);
        if (softwareRenderer)
            softwareRenderer->setBackingStore(backingStore);
    ....
}
```

PVS-Studio warns: V547 [CWE-571] Expression 'softwareRenderer' is always true. qsgsoftwarethreadedrenderloop.cpp 510

First, let's take a look at this check:

```
bool canRender = wd->renderer != nullptr;
if (canRender) {
```

The code makes sure that the *wd->renderer* pointer is never null inside the conditional operator. So why add one more check? What does it do exactly?

```
auto softwareRenderer = static_cast<QSGSoftwareRenderer *>(wd->renderer);
if (softwareRenderer)
```

If the *wd->renderer* pointer is not null, the *softwareRenderer* pointer cannot be null. I suspect there is a typo here and the developer intended to use *dynamic_cast*. In this case, the code starts to make sense.
If type conversion is not possible, the *dynamic_cast* operator returns *nullptr*. This returned value should be checked. However, I may have misinterpreted the situation and the code needs to be corrected in a different way.

**Fragment #11: Copied, but Not Altered**

```
void *QQuickPath::qt_metacast(const char *_clname)
{
    if (!_clname) return nullptr;
    if (!strcmp(_clname, qt_meta_stringdata_QQuickPath.stringdata0))
        return static_cast<void *>(this);
    if (!strcmp(_clname, "QQmlParserStatus"))
        return static_cast< QQmlParserStatus*>(this);
    if (!strcmp(_clname, "org.qt-project.Qt.QQmlParserStatus"))   // <=
        return static_cast< QQmlParserStatus*>(this);
    if (!strcmp(_clname, "org.qt-project.Qt.QQmlParserStatus"))   // <=
        return static_cast< QQmlParserStatus*>(this);
    return QObject::qt_metacast(_clname);
}
```

PVS-Studio warns: V581 [CWE-670] The conditional expressions of the 'if' statements situated alongside each other are identical. Check lines: 2719, 2721. moc_qquickpath_p.cpp 2721

Take a look at these two lines:

```
if (!strcmp(_clname, "org.qt-project.Qt.QQmlParserStatus"))
    return static_cast< QQmlParserStatus*>(this);
```

Someone copied and pasted them multiple times — and forgot to modify them. The way they are now, they do not make sense.

**Fragment #12: Overflow Due to the Wrong Parenthesis Placement**

```
int m_offsetFromUtc;
....
void QDateTime::setMSecsSinceEpoch(qint64 msecs)
{
    ....
    if (!add_overflow(msecs, qint64(d->m_offsetFromUtc * 1000), &msecs))
        status |= QDateTimePrivate::ValidWhenMask;
    ....
}
```

PVS-Studio warns: V1028 [CWE-190] Possible overflow. Consider casting operands of the 'd->m_offsetFromUtc * 1000' operator to the 'qint64' type, not the result. qdatetime.cpp 3922

The developer foresees a case when the *int* type variable is multiplied by *1000* and causes overflow. To avoid this, the developer plans to use the *qint64* 64-bit type variable. And uses explicit type casting.

However, the casting does not help at all, because the overflow happens before the casting. The correct code:

```
add_overflow(msecs, qint64(d->m_offsetFromUtc) * 1000, &msecs)
```

**Fragment #13: A Partly Initialized Array**

```
class QPathEdge
{
    ....
private:
    int m_next[2][2];
    ....
};

inline QPathEdge::QPathEdge(int a, int b)
    : flag(0)
    , windingA(0)
    , windingB(0)
    , first(a)
    , second(b)
    , angle(0)
    , invAngle(0)
{
    m_next[0][0] = -1;
    m_next[1][0] = -1;
    m_next[0][0] = -1;
    m_next[1][0] = -1;
}
```

PVS-Studio warns:

* V1048 [CWE-1164] The 'm_next[0][0]' variable was assigned the same value. qpathclipper_p.h 301
* V1048 [CWE-1164] The 'm_next[1][0]' variable was assigned the same value. qpathclipper_p.h 302

Above is a failed attempt to initialize a 2x2 array. Two elements are initialized twice, while the other two got overlooked. The correct code:

```
m_next[0][0] = -1;
m_next[0][1] = -1;
m_next[1][0] = -1;
m_next[1][1] = -1;
```

And let me say, I just love it when I see how professional developers make such silly mistakes. Don't get me wrong, but such cases demonstrate that everyone is human and can make a mistake or a typo. So, static analysis is your best friend. I think it's been about 10 years since I've started fighting sceptical — albeit professional — developers over one simple subject: such errors happen in their own code as well — students are not the only ones to breed typos in their code :). 10 years ago I wrote a note: "[The second myth — expert developers do not make silly mistakes](https://www.viva64.com/en/b/0116/)". Nothing changed since then. People keep making mistakes and pretending they don't :).
![Be like Bill](https://habrastorage.org/r/w1560/webt/_7/se/se/_7seseccfbphq54sdxbuaihaywy.png)

Errors in Logic
---------------

**Fragment #14: Unreachable Code**

```
void QmlProfilerApplication::tryToConnect()
{
    Q_ASSERT(!m_connection->isConnected());
    ++ m_connectionAttempts;

    if (!m_verbose && !(m_connectionAttempts % 5)) { // print every 5 seconds
        if (m_verbose) {
            if (m_socketFile.isEmpty())
                logError(
                    QString::fromLatin1("Could not connect to %1:%2 for %3 seconds ...")
                    .arg(m_hostName).arg(m_port).arg(m_connectionAttempts));
            else
                logError(
                    QString::fromLatin1("No connection received on %1 for %2 seconds ...")
                    .arg(m_socketFile).arg(m_connectionAttempts));
        }
    }
    ....
}
```

PVS-Studio warns: V547 [CWE-570] Expression 'm_verbose' is always false. qmlprofilerapplication.cpp 495

This code will never log anything because of the conflicting conditions.

```
if (!m_verbose && ....) {
    if (m_verbose) {
```

**Fragment #15: Overwriting a Variable's Value**

```
void QRollEffect::scroll()
{
    ....
    if (currentHeight != totalHeight) {
        currentHeight = totalHeight * (elapsed/duration)
            + (2 * totalHeight * (elapsed%duration) + duration)
            / (2 * duration);
        // equiv. to int((totalHeight*elapsed) / duration + 0.5)
        done = (currentHeight >= totalHeight);
    }
    done = (currentHeight >= totalHeight) && (currentWidth >= totalWidth);
    ....
}
```

PVS-Studio warns: V519 [CWE-563] The 'done' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 509, 511. qeffects.cpp 511

The entire conditional operator makes no sense, because the *done* variable is overwritten right after it is assigned. The code could be lacking the *else* keyword.

**Fragment #16-#20: Overwriting Variables' Values**

Here is another example of a variable's value that is overwritten:

```
bool QXmlStreamWriterPrivate::finishStartElement(bool contents)
{
    ....
    if (inEmptyElement) {
        ....
        lastNamespaceDeclaration = tag.namespaceDeclarationsSize;   // <=
        lastWasStartElement = false;
    } else {
        write(">");
    }
    inStartElement = inEmptyElement = false;
    lastNamespaceDeclaration = namespaceDeclarations.size();        // <=
    return hadSomethingWritten;
}
```

PVS-Studio warns: V519 [CWE-563] The 'lastNamespaceDeclaration' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 3030, 3036. qxmlstream.cpp 3036

The *lastNamespaceDeclaration* variable's first assignment may have happened by accident. It is probably okay to delete this line. However, we could be facing a serious logical error.

Four more warnings indicate the same error patterns in the Qt 6 code:

* V519 [CWE-563] The 'last' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 609, 637. qtextengine.cpp 637
* V519 [CWE-563] The 'm_dirty' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 1014, 1017. qquickshadereffect.cpp 1017
* V519 [CWE-563] The 'changed' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 122, 128. qsgdefaultspritenode.cpp 128
* V519 [CWE-563] The 'eaten' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 299, 301. qdesigner.cpp 301

**Fragment #21: Confusion Between Null Pointer and Empty String**

```
// this could become a list of all languages used for each writing
// system, instead of using the single most common language.
static const char languageForWritingSystem[][6] = {
    "",     // Any
    "en",   // Latin
    "el",   // Greek
    "ru",   // Cyrillic
    ......  // No null pointers. Empty string literals are used.
    "",     // Symbol
    "sga",  // Ogham
    "non",  // Runic
    "man"   // N'Ko
};

static void populateFromPattern(....)
{
    ....
    for (int j = 1; j < QFontDatabase::WritingSystemsCount; ++j) {
        const FcChar8 *lang = (const FcChar8 *) languageForWritingSystem[j];
        if (lang) {
    ....
}
```

PVS-Studio warns: V547 [CWE-571] Expression 'lang' is always true.
qfontconfigdatabase.cpp 462 The *languageForWritingSystem* array contains no null pointers, which is why the *if(lang)* check makes no sense. However, the array does contain empty strings. Did the developer mean to check for an empty string? If so, the correct code goes like this: ``` if (strlen(lang) != 0) { ``` Or you can simplify it even further: ``` if (lang[0] != '\0') { ``` **Fragment #22: A Bizarre Check** ``` bool QNativeSocketEnginePrivate::createNewSocket(....) { .... int socket = qt_safe_socket(domain, type, protocol, O_NONBLOCK); .... if (socket < 0) { .... return false; } socketDescriptor = socket; if (socket != -1) { this->socketProtocol = socketProtocol; this->socketType = socketType; } return true; } ``` PVS-Studio warns: V547 [CWE-571] Expression 'socket != -1' is always true. qnativesocketengine\_unix.cpp 315 The *socket != -1* condition always evaluates to true, because the function has already returned when the *socket* value is negative. **Fragment #23: What Exactly Should the Function Return?** ``` bool QSqlTableModel::removeRows(int row, int count, const QModelIndex &parent) { Q_D(QSqlTableModel); if (parent.isValid() || row < 0 || count <= 0) return false; else if (row + count > rowCount()) return false; else if (!count) return true; .... } ``` PVS-Studio warns: V547 [CWE-570] Expression '!count' is always false. qsqltablemodel.cpp 1110 To simplify, I'll point out the most important lines: ``` if (.... || count <= 0) return false; .... else if (!count) return true; ``` The first check indicates that if the *count* value is less than or equal to 0, the state is incorrect and the function must return *false*. However, further on we see this variable compared to zero, and that case is interpreted differently: the function must return *true*. There's clearly something wrong here. I suspect that the developer intended to use the < operator instead of <=. 
Then the code starts to make sense: ``` bool QSqlTableModel::removeRows(int row, int count, const QModelIndex &parent) { Q_D(QSqlTableModel); if (parent.isValid() || row < 0 || count < 0) return false; else if (row + count > rowCount()) return false; else if (!count) return true; .... } ``` **Fragment #24: An Unnecessary Status?** The code below contains the *identifierWithEscapeChars* variable that looks like a redundant entity. Or is it a logical error? Or is the code unfinished? By the second check this variable is *true* in all scenarios ``` int Lexer::scanToken() { .... bool identifierWithEscapeChars = false; .... if (!identifierWithEscapeChars) { identifierWithEscapeChars = true; .... } .... if (identifierWithEscapeChars) { // <= .... } .... } ``` PVS-Studio warns: V547 [CWE-571] Expression 'identifierWithEscapeChars' is always true. qqmljslexer.cpp 817 **Fragment #25: What Do I Do With Nine Objects?** ``` bool QFont::fromString(const QString &descrip) { .... const int count = l.count(); if (!count || (count > 2 && count < 9) || count == 9 || count > 17 || l.first().isEmpty()) { qWarning("QFont::fromString: Invalid description '%s'", descrip.isEmpty() ? "(empty)" : descrip.toLatin1().data()); return false; } setFamily(l[0].toString()); if (count > 1 && l[1].toDouble() > 0.0) setPointSizeF(l[1].toDouble()); if (count == 9) { // <= setStyleHint((StyleHint) l[2].toInt()); setWeight(QFont::Weight(l[3].toInt())); setItalic(l[4].toInt()); setUnderline(l[5].toInt()); setStrikeOut(l[6].toInt()); setFixedPitch(l[7].toInt()); } else if (count >= 10) { .... } ``` PVS-Studio warns: V547 [CWE-570] Expression 'count == 9' is always false. qfont.cpp 2142 What should the function do if the *count* variable equals 9? On the one hand, the function should issue a warning and exit. Just as the code says: ``` if (.... || count == 9 || ....) 
{ qWarning(....); return false; } ``` On the other hand, someone added special code to be executed for 9 objects: ``` if (count == 9) { setStyleHint((StyleHint) l[2].toInt()); setWeight(QFont::Weight(l[3].toInt())); setItalic(l[4].toInt()); .... } ``` The function, of course, never reaches this code. The code is waiting for someone to come and fix it :). Null Pointers ------------- **Fragments #26-#42: Using a Pointer Before Checking It** ``` class __attribute__((visibility("default"))) QMetaType { .... const QtPrivate::QMetaTypeInterface *d_ptr = nullptr; }; QPartialOrdering QMetaType::compare(const void *lhs, const void *rhs) const { if (!lhs || !rhs) return QPartialOrdering::Unordered; if (d_ptr->flags & QMetaType::IsPointer) return threeWayCompare(*reinterpret_cast<const void * const *>(lhs), *reinterpret_cast<const void * const *>(rhs)); if (d_ptr && d_ptr->lessThan) { if (d_ptr->equals && d_ptr->equals(d_ptr, lhs, rhs)) return QPartialOrdering::Equivalent; if (d_ptr->lessThan(d_ptr, lhs, rhs)) return QPartialOrdering::Less; if (d_ptr->lessThan(d_ptr, rhs, lhs)) return QPartialOrdering::Greater; if (!d_ptr->equals) return QPartialOrdering::Equivalent; } return QPartialOrdering::Unordered; } ``` PVS-Studio warns: V595 [CWE-476] The 'd\_ptr' pointer was utilized before it was verified against nullptr. Check lines: 710, 713. qmetatype.cpp 710 The error is easy to overlook, but everything is straightforward here. Let's see how the code uses the *d\_ptr* pointer: ``` if (d_ptr->flags & ....) if (d_ptr && ....) ``` In the first if-block the pointer is dereferenced. Then the next check suggests this pointer can be null. This is one of the most common error patterns in C and C++. [Proofs](https://www.viva64.com/en/examples/v595/). We saw quite a few errors of this kind in the Qt source code. * V595 [CWE-476] The 'self' pointer was utilized before it was verified against nullptr. Check lines: 1346, 1351. 
qcoreapplication.cpp 1346 * V595 [CWE-476] The 'currentTimerInfo' pointer was utilized before it was verified against nullptr. Check lines: 636, 641. qtimerinfo\_unix.cpp 636 * V595 [CWE-476] The 'lib' pointer was utilized before it was verified against nullptr. Check lines: 325, 333. qlibrary.cpp 325 * V595 [CWE-476] The 'fragment.d' pointer was utilized before it was verified against nullptr. Check lines: 2262, 2266. qtextcursor.cpp 2262 * V595 [CWE-476] The 'window' pointer was utilized before it was verified against nullptr. Check lines: 1581, 1583. qapplication.cpp 1581 * V595 [CWE-476] The 'window' pointer was utilized before it was verified against nullptr. Check lines: 1593, 1595. qapplication.cpp 1593 * V595 [CWE-476] The 'newHandle' pointer was utilized before it was verified against nullptr. Check lines: 873, 879. qsplitter.cpp 873 * V595 [CWE-476] The 'targetModel' pointer was utilized before it was verified against nullptr. Check lines: 454, 455. qqmllistmodel.cpp 454 * V595 [CWE-476] The 'childIface' pointer was utilized before it was verified against nullptr. Check lines: 102, 104. qaccessiblequickitem.cpp 102 * V595 [CWE-476] The 'e' pointer was utilized before it was verified against nullptr. Check lines: 94, 98. qquickwindowmodule.cpp 94 * V595 [CWE-476] The 'm\_texture' pointer was utilized before it was verified against nullptr. Check lines: 235, 239. qsgplaintexture.cpp 235 * V595 [CWE-476] The 'm\_unreferencedPixmaps' pointer was utilized before it was verified against nullptr. Check lines: 1140, 1148. qquickpixmapcache.cpp 1140 * V595 [CWE-476] The 'camera' pointer was utilized before it was verified against nullptr. Check lines: 263, 264. assimpimporter.cpp 263 * V595 [CWE-476] The 'light' pointer was utilized before it was verified against nullptr. Check lines: 273, 274. assimpimporter.cpp 273 * V595 [CWE-476] The 'channel' pointer was utilized before it was verified against nullptr. Check lines: 337, 338. 
assimpimporter.cpp 337 * V595 [CWE-476] The 'm\_fwb' pointer was utilized before it was verified against nullptr. Check lines: 2492, 2500. designerpropertymanager.cpp 2492 **Fragment #43: A Pointer Dereferenced and Checked Within One Expression** This case is almost the same as the one above, except that this time the pointer is dereferenced and checked inside a single expression. This is a classic slip of attention: someone was not careful enough when writing and reviewing the code. ``` void QFormLayoutPrivate::updateSizes() { .... QFormLayoutItem *field = m_matrix(i, 1); .... if (userHSpacing < 0 && !wrapAllRows && (label || !field->fullRow) && field) .... } ``` PVS-Studio warns: V713 [CWE-476] The pointer 'field' was utilized in the logical expression before it was verified against nullptr in the same logical expression. qformlayout.cpp 405 **Now let's take a one-minute break.** I am getting tired from all this writing, and I think the readers are tired as well. This article can wear you out even if you are just skimming through the text :). So it's about time I get my second cup of coffee. I finished my first one at around Fragment #12. Why don't you, my readers, join me for a cup of coffee, or pick your favorite drink. And while we are all taking a break, I'll wander off topic for a bit. I am inviting the team that develops the Qt project to consider purchasing a license for the PVS-Studio code analyzer. You can request our price list [here](https://www.viva64.com/en/order/). We will provide support and help you set up the analyzer. Yes, alright, today I am more insistent. This is something new that I'm trying :). 
![PVS-Studio & Coffee](https://habrastorage.org/r/w1560/webt/3d/ax/oi/3daxoirjj4hbr1uiluck146qd-y.png) **Fragments #44-#72: No Check for the malloc Function's Result** ``` void assignData(const QQmlProfilerEvent &other) { if (m_dataType & External) { uint length = m_dataLength * (other.m_dataType / 8); m_data.external = malloc(length); // <= memcpy(m_data.external, other.m_data.external, length); // <= } else { memcpy(&m_data, &other.m_data, sizeof(m_data)); } } ``` PVS-Studio warns: V575 [CWE-628] The potential null pointer is passed into 'memcpy' function. Inspect the first argument. Check lines: 277, 276. qqmlprofilerevent\_p.h 277 You cannot simply take the pointer the *malloc* function returns and use it. It is imperative to check this pointer for null, even if you are very lazy. We described 4 possible reasons why in our article "[Why it is important to check what the malloc function returned](https://www.viva64.com/en/b/0558/)". There are more warnings of this kind, but I do not want to include them all in this list; there are too many. Just in case, I gathered 28 warnings in the following file for you: [qt6-malloc.txt](http://cppfiles.com/qt6-malloc.txt). I do, however, recommend that the developers recheck the project and study the warnings themselves. Finding as many errors as possible was not my goal. Interestingly, alongside all the important missing checks, I found completely unnecessary ones. I am talking about calls to the new operator, which, in case of an error, generates the *std::bad\_alloc* exception. Here is one example of such a redundant check: ``` static QImageScaleInfo* QImageScale::qimageCalcScaleInfo(....) { .... QImageScaleInfo *isi; .... isi = new QImageScaleInfo; if (!isi) return nullptr; .... } ``` PVS-Studio warns: V668 [CWE-570] There is no sense in testing the 'isi' pointer against null, as the memory was allocated using the 'new' operator. 
The exception will be generated in the case of a memory allocation error. qimagescale.cpp 245 P.S. Here the readers always ask whether the analyzer knows about placement new or "new (std::nothrow) T". Yes, it does, and no, it does not issue false positives for them. Redundant Code ("Code Smells") ------------------------------ In some scenarios, the analyzer issues warnings for code that is correct, but excessive. It may happen, for example, when the same variable is checked twice. Sometimes it's not clear whether a warning is a false positive or not. Technically, the analyzer is correct, but it did not find a real error. You could call it a "code smell". Since the analyzer does not like this code, other developers may not like it either and may find it difficult to maintain. You have to spend more time to understand what is happening. Usually I don't even discuss such warnings in my articles; it's boring. However, the Qt project surprised me with how many such "code smells" I was able to find. Definitely more than in most projects. That is why I decided to draw your attention to "code smells" and investigate a few such cases. I think it will be useful to refactor these and many other similar patterns. To do this, you will need the complete report; the fragments I added to this article are insufficient. So let's inspect a few scenarios that illustrate the problem. **Fragment #73: "Code Smell" — Reverse Check** ``` void QQuick3DSceneManager::setWindow(QQuickWindow *window) { if (window == m_window) return; if (window != m_window) { if (m_window) disconnect(....); m_window = window; connect(....); emit windowChanged(); } } ``` PVS-Studio warns: V547 [CWE-571] Expression 'window != m\_window' is always true. qquick3dscenemanager.cpp 60 If *window==m\_window*, the function exits. The subsequent inverse check makes no sense and just clutters the code. 
**Fragment #74: "Code Smell" — Weird Initialization** ``` QModelIndex QTreeView::moveCursor(....) { .... int vi = -1; if (vi < 0) vi = qMax(0, d->viewIndex(current)); .... } ``` PVS-Studio warns: V547 [CWE-571] Expression 'vi < 0' is always true. qtreeview.cpp 2219 ![What is this?](https://habrastorage.org/r/w1560/webt/aa/w_/gf/aaw_gfpc6lmox5luucvqsozmlho.png) What is this? Why write something like this? The developer can simplify the code down to one line: ``` int vi = qMax(0, d->viewIndex(current)); ``` **Fragment #75: "Code Smell" — Unreachable Code** ``` bool copyQtFiles(Options *options) { .... if (unmetDependencies.isEmpty()) { if (options->verbose) { fprintf(stdout, " -- Skipping %s, architecture mismatch.\n", qPrintable(sourceFileName)); } } else { if (unmetDependencies.isEmpty()) { if (options->verbose) { fprintf(stdout, " -- Skipping %s, architecture mismatch.\n", qPrintable(sourceFileName)); } } else { fprintf(stdout, " -- Skipping %s. It has unmet dependencies: %s.\n", qPrintable(sourceFileName), qPrintable(unmetDependencies.join(QLatin1Char(',')))); } } .... } ``` PVS-Studio warns: V571 [CWE-571] Recurring check. The 'if (unmetDependencies.isEmpty())' condition was already verified in line 2203. main.cpp 2209 At first this code seems absolutely adequate. Just normal code that creates hints. But let's take a closer look. If the *unmetDependencies.isEmpty()* condition was met and executed once, it is not going to be executed for the second time. This is not a big deal, because the author was planning to display the same message. There is no real error, but the code is overly complicated. One can simplify it like this: ``` bool copyQtFiles(Options *options) { .... if (unmetDependencies.isEmpty()) { if (options->verbose) { fprintf(stdout, " -- Skipping %s, architecture mismatch.\n", qPrintable(sourceFileName)); } } else { fprintf(stdout, " -- Skipping %s. 
It has unmet dependencies: %s.\n", qPrintable(sourceFileName), qPrintable(unmetDependencies.join(QLatin1Char(',')))); } .... } ``` **Fragment #76: "Code Smell" — A Complex Ternary Operator** ``` bool QDockAreaLayoutInfo::insertGap(....) { .... QDockAreaLayoutItem new_item = widgetItem == nullptr ? QDockAreaLayoutItem(subinfo) : widgetItem ? QDockAreaLayoutItem(widgetItem) : QDockAreaLayoutItem(placeHolderItem); .... } ``` PVS-Studio warns: V547 [CWE-571] Expression 'widgetItem' is always true. qdockarealayout.cpp 1167 We could be dealing with a real bug here. But I am more inclined to believe the developers reworked this code several times and got an unexpectedly and unnecessarily complicated code block with redundant statements. You can reduce it down to the following: ``` QDockAreaLayoutItem new_item = widgetItem == nullptr ? QDockAreaLayoutItem(subinfo) : QDockAreaLayoutItem(widgetItem); ``` **Fragment #77: "Code Smell" – Excessive Protection** ``` typedef unsigned int uint; ReturnedValue TypedArrayCtor::virtualCallAsConstructor(....) { .... qint64 l = argc ? argv[0].toIndex() : 0; if (scope.engine->hasException) return Encode::undefined(); // ### lift UINT_MAX restriction if (l < 0 || l > UINT_MAX) return scope.engine->throwRangeError(QLatin1String("Index out of range.")); uint len = (uint)l; if (l != len) scope.engine->throwRangeError( QStringLiteral("Non integer length for typed array.")); .... } ``` PVS-Studio warns: V547 [CWE-570] Expression 'l != len' is always false. qv4typedarray.cpp 306 Someone worried too much that a value from a 64-bit variable might not fit into the *unsigned* 32-bit variable. And used two checks at once. The second check is redundant. The following code is more than enough: ``` if (l < 0 || l > UINT_MAX) ``` Then you can safely delete the snippet below. This will not endanger your code's reliability in any way. 
``` uint len = (uint)l; if (l != len) scope.engine->throwRangeError( QStringLiteral("Non integer length for typed array.")); ``` I could keep going, but I'll stop. I think you get the idea. One can draw a nice conclusion here: the use of PVS-Studio will benefit your code in several ways — you can remove errors and simplify your code. Other Errors ------------ I stopped after describing 77 defects. This is a beautiful number, and I wrote more than enough to shape an article. However, this does not mean there are no more mistakes PVS-Studio can find. While studying the log, I moved quickly and skipped everything that required more than 2 minutes of my time to figure out whether it was a mistake :). This is why I always urge you not to rely on our articles that explore your errors, but to run PVS-Studio on your projects yourself. Conclusion ---------- Static analysis is awesome! After you introduce PVS-Studio into your development process, it will save your time and brain cells by finding many mistakes right after you write new code. It is much more fun to gather with your team for code review and discuss high-level errors and the efficiency of the implemented algorithms instead of typos. Moreover, as my experience shows, these nasty typos always find a way to hide, even when you check your code with your own eyes. So let the software look for them instead. If you have any more questions or objections, I invite you to read the following article: "[Why You Should Choose the PVS-Studio Static Analyzer to Integrate into Your Development Process](https://www.viva64.com/en/b/0687/)". I give this article a 90% chance of answering your questions :). If you are in the 10%, [message us](https://www.viva64.com/en/about-feedback/) and let's talk :).
https://habr.com/ru/post/542758/
The following JCK vm tests failed on this sun4m machine when checking proper format conversion of the returned double/float value: javasoft.sqe.tests.vm.fp.fpm025.fpm02501m2.fpm02501m1 javasoft.sqe.tests.vm.fp.fpm025.fpm02501m2.fpm02501m2 jtg-s210:[133]% uname -a SunOS jtg-s210 5.8 Generic_109291-02 sun4m sparc SUNW,SPARCstation-5 jtg-s210:[134]% psrinfo -v Status of processor 0 as of: 05/30/00 18:11:45 Processor has been on-line since 05/24/00 17:34:13. The sparc processor operates at 170 MHz, and has a sparc floating point processor. To Reproduce: ============= 1. Extract fpm02501m1.ksh and fpm02501m2.ksh from attached files: fpm02501m1.jar/fpm02501m2.jar 2. Run fpm02501m1.ksh: jtg-s210:[138]% fp02501m1.ksh D.checkDefRetDefault(i) fails; value==NaN; lap # 5 D.checkDefRetDefault(i) fails; value==-Infinity; lap # 6 97 2. Run fpm02501m2.ksh: jtg-s210:[168]% fpm02501m2.ksh D.checkDefRetDefault(i) fails; value==NaN; lap # 5 D.checkDefRetDefault(i) fails; value==-Infinity; lap # 6 97 This problem is limited to sun4m machines ONLY. xxxxx@xxxxx 2000-05-30 xxxxx@xxxxx 2002-01-21 I can reproduce it with JDK1.3.1_02 on Win NT 4.0 SP 5 on PC powered by Cyrix PR233 processor. -Xint exclude java/lang/Double isNaN exclude java/lang/Float isNaN and the test passes. Also, -XX:-CanonocalizeNodes also passes the test. canonicalized version of Test2::isNaN() __bci__use__tid____instr____________________________________ original code: . -1 0 10 if i8 == i9 then B0 else B1 canonicalized to: . 
-1 0 11 if d4 != d4 then B1 else B0 Bytecode and generated assembly for "Method boolean isNaN(float)": javac bytecode: 0 fload_0 1 fload_0 2 fcmpl 3 ifeq 10 6 iconst_1 7 goto 11 10 iconst_0 11 ireturn c1 generates for v9: 0xfa403410: sethi %hi(0xffffe000), %g3 0xfa403414: clr [ %sp + %g3 ] 0xfa403418: save %sp, -112, %sp 0xfa40341c: ld [ %fp + 0x5c ], %f0 0xfa403420: ld [ %fp + 0x60 ], %f1 0xfa403424: fcmpd %f0, %f0 0xfa403428: fbe,a,pn %fcc0, 0xfa403440 0xfa40342c: nop 0xfa403430: mov 1, %l0 0xfa403434: mov %l0, %o0 0xfa403438: b %icc, 0xfa403448 0xfa40343c: nop 0xfa403440: clr %l0 0xfa403444: mov %l0, %o0 0xfa403448: mov %o0, %i0 0xfa40344c: restore 0xfa403450: retl ; {return} 0xfa403454: nop 0xfa403458: nop 0xfa40345c: nop c1 generates for v8: 0xea0033d0: mov -4096 + %g3 0xea0033d4: clr [ %sp + %g3 ] 0xea0033d8: save %sp, -112, %sp 0xea0033dc: ld [ %fp + 0x5c ], %f0 0xea0033e0: ld [ %fp + 0x60 ], %f1 0xea0033e4: fcmpd %f0, %f0 0xea0033e8: fbe,a 0xea003400 0xea0033ec: nop 0xea0033f0: mov 1, %l0 0xea0033f4: mov %l0, %o0 0xea0033f8: b 0xea003408 0xea0033fc: nop 0xea003400: clr %l0 0xea003404: mov %l0, %o0 0xea003408: mov %o0, %i0 0xea00340c: restore 0xea003410: retl ; {return} /* * Below is the java source that reproduces the same bug. * javac Test2.java * java_g -Xcomp -Xcomp -XX:CompileOnly=Test2.isNaN Test2 */ public class Test2 { static boolean foo(double i) { return isNaN(i) || !isNaN(Double.NaN); } public static void main (String[] args) { /** * The first invocation of isNaN in this program generates the wrong result. * However the second invocation generates the correct result. */ System.out.println("isNaN(Double.Nan) = " + isNaN(Double.NaN)); System.out.println("isNaN(Double.Nan) = " + isNaN(Double.NaN)); } private static boolean isNaN (double v) { return (v != v); } } xxxxx@xxxxx 2000-06-02 for V8 must add a nop instruction between fcmp and fb xxxxx@xxxxx 2000-06-06 Committing to 1.4.1. 
xxxxx@xxxxx 2002-01-23 I was curious about the sudden appearance of this bug so I did some investigation. I checked the Merlin C1 source code that handles floating-point branches and determined that this bug should still be fixed. I looked at this bug's change log and found that indeed it had been fixed and integrated way back in June, 2000. However, on 1/21/02 the bug's state was changed to "dispatched". I didn't believe the bug was somehow reintroduced so I ran the JCK tests both directly and under "runthese" using build rc-b91 on a SPARC V8 (sun4m) machine, jtg-s212. They both pass in several modes: java; java -Xcomp; and java_g -Xcomp -XX:CompileOnly=javasoft. This bug is still fixed and integrated. xxxxx@xxxxx 2002-01-31 I do not understand the comments for this bug. I have an application that uses java.util.HashSet; many users complain on their Windows systems that they get the error message "Illegal Load factor: 0.75 at java.util.HashMap.<init>". I would like to tell these users, "install java x.y.z and the problem will be gone". All bugs I could find that matched mine were marked as duplicates of this one. Tell me, what version of java exactly will fix this bug for Windows users? Please use version numbers, not these "Merlin"/"Kestrel" code names if possible, because I cannot find the web page that tells me which code names correspond to which version numbers!!! I have discovered a similar problem on a Dell Quad Xeon Server running JDK1.3.1_01. The error does not happen every time I run the sample program above, but does happen approx. 25 times per 1000 runs. It is really causing havoc with my J2EE server. Does anyone have a fix for this version of the JDK or do I have to upgrade? I got "java.lang.IllegalArgumentException: Illegal Load: 0.75" error using Oracle Universal Installer for 9.2 which includes JRE 1.3.1. I run Win2k with SP3 on a Cyrix processor. Any workaround?
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4342148
Hi! I am just working on a test-program for my path-finding algorithm and I found out that it doesn't always work correctly. Here are the files: path.cpp[code path.cpp]#include "path.h" void load(const string& filename){ ifstream file(filename.c_str()); map.resize(MAPSIZE); for(int i = 0;i < MAPSIZE;i++) map[i].resize(MAPSIZE); if(file.is_open()) { string line; // read the map and store it for(int i = 0;i < MAPSIZE;i++) { if (getline(file,line,'\n')) { for(int j = 0;j < MAPSIZE;j++) { map[j][i] = line[j]; if (line[j] == 'A') { ac = j; ar = i; } if (line[j] == 'B') { bc = j; br = i; } } } else { cout << "Error while parsing the map!" << endl; exit(-1); } } // read the start getline(file,line,'\n'); if (sscanf(line.c_str(), "%d %d", &ac, &ar) < 2) { cout << "Error while parsing the map!" << endl; exit(-1); } // read the target getline(file,line,'\n'); if (sscanf(line.c_str(), "%d %d %d %d", &bc, &br, &bw, &bh) < 4) { cout << "Error while parsing the map!" << endl; exit(-1); } } else { cout << "Couldn't open mapfile!" 
<< endl; exit(-1); } return;} int get_h(int ac, int ar, int bc, int br){ int delta_x = abs(ac - bc); int delta_y = abs(ar - br); //return (static_cast<int>(fabs(sqrt(pow(delta_x, 2) + pow(delta_y, 2)) * 10.0f))); return ((abs(ac - bc) + abs(ar - br)) * 10);} bool node_valid(int c, int r){ if (c > MAPSIZE - 1 || c < 0 || r > MAPSIZE - 1 || c < 0) return false; return true;} bool node_in_list(int c, int r, list<CNode*>& list_){ list<CNode*>::iterator it = list_.begin(); while(it != list_.end()) { if ((*it)->get_c() == c && (*it)->get_r() == r) return true; *it++; } return false;} void delete_list(list<CNode*>& list_){ list<CNode*>::iterator it = list_.begin(); while(it != list_.end()) delete *it++;} list<CNode*>::iterator get_best_f(list<CNode*>& list_){ list<CNode*>::iterator best = list_.begin(); list<CNode*>::iterator it = list_.begin(); while(it != list_.end()) { if ((*it)->get_f() < (*best)->get_f()) best = it; it++; } return best;} bool create_path(void){ // see, if we can return immediately if (ac == bc && ar == br) return true; // add the current node to the closedlist closedlist.push_front(new CNode(ac, ar, 0, get_h(ac, ar, bc, br))); // create a target-node ( not in one of the lists ), so we can use target_reached() more easily //const CNode* tar = new CNode(target->get_c(), target->get_r(), 0, 0); // set the current node, which will be altered during the walk through the map const CNode* cur = closedlist.front(); while(cur->get_c() != bc || cur->get_r() != br) { int c, r; int i; // add the surrounding nodes to the openlist (in a random order) vector<int> tmp(8); for(i = 0;i < 8;i++) tmp[i] = i; random_shuffle(tmp.begin(), tmp.end()); for(i = 0;i < 8;i++) { int g_modifier = 10; c = cur->get_c(); r = cur->get_r(); if (tmp[i] == 0) c++; else if (tmp[i] == 1) c--; else if (tmp[i] == 2) r++; else if (tmp[i] == 3) r--; else if (tmp[i] == 4) { c++; r++; g_modifier += 4; } else if (tmp[i] == 5) { c--; r++; g_modifier += 4; } else if (tmp[i] == 6) { r--; c++; 
g_modifier += 4; } else if (tmp[i] == 7) { r--; c--; g_modifier += 4; } if (node_valid(c, r) && map[c][r] != '1' && !node_in_list(c, r, openlist) && !node_in_list(c, r, closedlist)) { openlist.push_front(new CNode(c, r, cur->get_g() + g_modifier, get_h(c, r, bc, br), cur)); } } // check, if the openlist is empty or the closedlist too full if (!openlist.size() || int(closedlist.size()) > MAX_NODE_CHECKS) return false; // put the node with the best f-value in the closedlist list<CNode*>::iterator best_f = get_best_f(openlist); closedlist.push_front(*best_f); openlist.erase(best_f); // change the cur-variable cur = *best_f; } // coming here, it means that there is a valid path // now track back the way to the start by accessing the parent and save it in the path-variable while(cur != closedlist.back()) { // while start not reached path.push_back(new CNode(*cur)); cur = cur->get_parent(); } return true;} void save_screenshot(const char *path){ BITMAP *bmp = create_bitmap(SCREEN_W,SCREEN_H); if(!bmp) return; unsigned char *buff = new unsigned char[4*SCREEN_W*SCREEN_H]; if(!buff) { destroy_bitmap(bmp); return; } glReadPixels(0,0,SCREEN_W,SCREEN_H,GL_RGBA,GL_UNSIGNED_BYTE,buff); int y; for(y=0;y<SCREEN_H;y++) memcpy(bmp->line[y],buff+4*SCREEN_W*(SCREEN_H-1-y),4*SCREEN_W); save_bitmap(path,bmp,NULL); destroy_bitmap(bmp); delete buff;} void draw(void){ GfxRend::FillScreen(Rgba::WHITE); // draw the grid for(int i = 0;i < MAPSIZE;i++) { GfxRend::Line(i * TILESIZE, 0.0f, i * TILESIZE, MAPSIZE * TILESIZE, Rgba::BLACK); for(int j = 0;j < MAPSIZE;j++) GfxRend::Line(0.0f, j * TILESIZE, MAPSIZE * TILESIZE, j * TILESIZE, Rgba::BLACK); } // draw the obstacles for(int i = 0;i < MAPSIZE;i++) for(int j = 0;j < MAPSIZE;j++) if (map[i][j] == '1') GfxRend::Rect(i * TILESIZE, j * TILESIZE, TILESIZE, TILESIZE, Rgba::BLACK); // draw the openlist for(list<CNode*>::const_iterator it = openlist.begin();it != openlist.end();it++) { int x = (*it)->get_c() * TILESIZE; int y = (*it)->get_r() * 
TILESIZE; GfxRend::Rect(x, y, TILESIZE, TILESIZE, Rgba::GREEN); CText::o().write((*it)->get_g(), x, y, "G"); CText::o().write((*it)->get_h(), x, y + 8, "H"); CText::o().write((*it)->get_f(), x, y + 16, "F"); } // draw the closedlist for(list<CNode*>::const_iterator it = closedlist.begin();it != closedlist.end();it++) { int x = (*it)->get_c() * TILESIZE; int y = (*it)->get_r() * TILESIZE; GfxRend::Rect(x, y, TILESIZE, TILESIZE, Rgba::RED); CText::o().write((*it)->get_g(), x, y, "G"); CText::o().write((*it)->get_h(), x, y + 8, "H"); CText::o().write((*it)->get_f(), x, y + 16, "F"); } // draw the path for(list<CNode*>::const_iterator it = path.begin();it != path.end();it++) { int x = (*it)->get_c() * TILESIZE; int y = (*it)->get_r() * TILESIZE; GfxRend::Rect(x, y, TILESIZE, TILESIZE, Rgba::BLUE); CText::o().write((*it)->get_g(), x, y, "G"); CText::o().write((*it)->get_h(), x, y + 8, "H"); CText::o().write((*it)->get_f(), x, y + 16, "F"); } GfxRend::RefreshScreen(); save_screenshot("screenshot.png");} void unload(void){ for(int i = 0;i < MAPSIZE;i++) map[i].resize(0); map.resize(0);} int main(void){ Setup::SetupProgram(); Settings::StoreMemoryBitmaps(true); Setup::SetupScreen(SCREENW, SCREENH, WINDOWED); set_color_depth(32); Settings::SetAntialiasing(true); CText::o().init(); // load the map load("map/map.map"); // create the path if (!create_path()) { cout << "Can't create path!" 
<< endl; return 0; } // draw the path draw(); // loop while(!key[KEY_ESC]) ; // destroy the path and the lists delete_list(path); delete_list(openlist); delete_list(closedlist); // unload the map unload();}END_OF_MAIN()</code> path.h[code path.h]#ifndef __PATH_H#define __PATH_H #include <OpenLayer.hpp>#include <loadpng.h>#include <vector>#include <string>#include <fstream>#include <iostream>#include <list> #include "node.h"#include "text.h" using namespace ol;using std::vector;using std::string;using std::ifstream;using std::cout;using std::endl;using std::list;using std::pair; vector< vector<char> > map;list<CNode*> openlist;list<CNode*> closedlist;list<CNode*> path;int ac;int ar;int bc;int br;int bw;int bh; const static int SCREENW = 800;const static int SCREENH = 600;const static int MAPSIZE = 20;const static int TILESIZE = 30;const static int MAX_NODE_CHECKS = 10000; #endif</code> node.cpp[code node.cpp]#include "node.h" CNode::CNode(int c, int r, int g, int h, const CNode* parent){ c_ = c; r_ = r; g_ = g; h_ = h; f_ = g + h; parent_ = parent;} CNode::~CNode(){} CNode::CNode(const CNode& src){ c_ = src.c_; r_ = src.r_; g_ = src.g_; h_ = src.h_; f_ = src.f_; parent_ = src.parent_;}</code> node.h[code node.h]#ifndef __NODE_H#define __NODE_H class CNode{ private: protected: int c_; int r_; int g_; int h_; int f_; const CNode* parent_; public: CNode(int c, int r, int g, int h, const CNode* parent = 0); virtual ~CNode(); CNode(const CNode& src); inline int get_c(void) const { return (c_); } inline int get_r(void) const { return (r_); } inline int get_g(void) const { return (g_); } inline int get_h(void) const { return (h_); } inline int get_f(void) const { return (f_); } inline const CNode* get_parent(void) const { return ((!parent_) ? this : parent_); } virtual bool operator < (const CNode& node) const { return (f_ > node.f_); }}; (I will omit text.cpp and text.h because they aren't necessary) I attached a picture of the problem. 
The yellow circle shows where the algo makes the mistake. IMO, the path should go straight down instead of doing a zigzag. Finally, here is the map-file for this example:

map.map

[code]
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000111111111111110000000000000000000000000000000000000000000000011111111111111111000000000000000000000000000000000000000000000000000000000000000000111111110111110000001000000001000000000010111111110000000000100000000000
3 3 16 17 1 1
[/code]

The first two numbers after the grid are the starting position in columns and rows, and the next four are the position of the target (the two 1s don't have any meaning yet). Thanks for your help!

Is this the minimum amount of code? Perhaps you could narrow it down to, say, a function that has the problem? Is it create_path? Explain what c, r, g, h, and f are.

-- Tomasu: Every time you read this: hugging! Ryan Patterson

Is this the minimum amount of code?

Yes, I guess. If I took something out, I might have taken out the error too.

Perhaps you could narrow it down to say a function that has the problem? Is it create_path?

Most likely! Or one of the functions it calls, like get_best_f() or get_h(). I think the error is somewhere within the heuristic, but I dunno.

Explain what c, r, g, h, and f are.

Well, c and r are the column and the row of the node. g, h and f are the values used by every normal A* algorithm:
g - cost to move from the start to this node
h - cost to move from this node to the end (heuristic)
f - g + h (determines which node to take next)

I know it's not very easy. I needed to post all the code, because in most cases it works correctly, but only in very rare cases (as this one) it doesn't.

Looks to me like you can only create unidirectional paths with your method. Why aren't nodes stored as a graph where the links contain the g, h, and f?
[edit] Added a negative.

Looks to me like you can only create unidirectional paths with your method.

But there is a horizontal path in there as well! I changed the g-value to 14 instead of ten for all diagonal movements, exactly like this tutorial suggested.

Why aren't nodes stored as a graph where the links contain the g, h, and f?

Could you explain that a bit more in detail? I don't know what you are talking about right now.

No, unidirectional as in point A to B, and not point B to C or even point B to A (though the latter could be accomplished by traversing the list backwards). What I mean is having each node store a list of Link structures. Each Link structure holds a Node that can be directly reached from its node, as well as the g, f, and h. This way you simply coat the level with nodes and then move between them. They don't have to be on every tile. For instance:

00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
00000000000000000000
02000000000000000000
02111111111111112000
00200000000000020000
00020000000000000000
00211111111111111111
00000000000000000000
00000000000000000000
00000020000002000000
00000011111111211111
00000010200002010000
00000012111111110000
00000010200000020000

The 2s are nodes, and the character simply moves in a straight line between them.

Oh sorry, I misunderstood you! Your approach is not A*, but it seems to be more like raycasting. I want the A*-way. The units shouldn't be able to move between nodes that are not adjacent to each other in one step. I hope you understand me now.

No, my method is A*, you just don't use a grid. A* is simply using weighted steps to compute a path along a graph. A graph is just a bunch of locations that are connected. The way you have it now, every square is a node, and every node is connected to its 8 adjacent nodes.
Every connection has a weight of 1, because it takes the same amount of time to move to any square. The way I suggest, you only place nodes at important places (corners), and the weight of each connection equals the distance. Perhaps a better way to demonstrate what I mean is: use your code to find a route from A to B, then from B to C, then from C to B.

It looks as if g was not taken into account, making diagonal movement as fast as straight horizontal/vertical movement. Could you try to make it run a straight north-south line? If it zigzags, you've got your problem narrowed down. Then try putting an absurdly high value of g for diagonal movement, to "forbid" it, and run your original problem: if the algorithm uses ANY diagonal, your code definitely doesn't take g into account.

edit: (heuristic from above):

//return (static_cast<int>(fabs(sqrt(pow(delta_x, 2) + pow(delta_y, 2)) * 10.0f)));
return ((abs(ac - bc) + abs(ar - br)) * 10);

Your heuristic expects diagonal movement to be as fast as horizontal movement. So it takes the x+1 step, getting closer (from a large perspective) to the final goal, as a future step will be able to take it (at no cost) off this dead-end. Since your actual step costs are 10 and 14, you need a heuristic which weights 14 * min(abs(delta_x), abs(delta_y)) + 10 * abs(abs(delta_x) - abs(delta_y)).

Well, in his model diagonal movement is as fast as normal up/down movement. [edit] Now you're saying it isn't... Is that so? [edit] In the case that you don't want diagonal movement to be weighted more than straight, simply bias it towards moving in a straight line. Say moves which are in the same direction as the previous move come at a -5 cost; adjust to suit. (see my edit above)

I changed the g-value to 14 instead of ten for all diagonal movements

When I first saw the post, there was indeed already a bunch of g_modifier += 4 in 4 of the 8 direction cases (an edit probably).

I have an update on my problem.
It seems like my algorithm has problems with concave obstacles (see attached). The path leads slightly into the concave structure, then out again, but it should really go around it. As you see in the screenshot, it doesn't matter whether it is horizontal, vertical or diagonal, so this shouldn't be the error. I am pretty sure the error lies in the create_path function, so here is the shortened path-finding algo, wrapped in a class:

ac - start-column
ar - start-row
bc - end-column
br - end-row

path.h
path.cpp

I hope this makes it easier for you to help me with my problem. Thanks!

Well, it seems the diagonal moves are still not considered significantly more costly than straight moves. Any change if g = 15 for them (instead of g = 14)? Or g = 20? Or g = 100? Even 14 is only an approximation of sqrt(2) * 10, after all.

I attached the map with g = 20! The path is now correct, but it takes so much more time to calculate, which can't be right.

edit: and g = 15 is approximately the same as 14.

Lots of source code to go through. Create an animation much as you have done your screenshots. Show a frame for each iteration in the loop you have in create_path, so that you can visually see how the algorithm works. Perhaps the most likely problem is that you are checking nodes which wouldn't need to be checked? In that case you will see it immediately if you do the above for debug purposes.

I think I'm stuck there, sorry. The g, h and f values on the graph should be enough for someone experienced with A* to troubleshoot further, but I'm not one.

edit: (heuristic)... "slightly overestimates the remaining distance (...) On the rare occasion when the resulting path is not the shortest possible, it will be nearly as short" made you reinvent the wheel. Today's lesson: when in doubt, ask Google.

I will look into your suggestions, thanks!

Have you tried using another metric (weight function, or whatever the name is in graph theory) to weigh the edges?
Right now, you're using ; try using instead and see if that helps. I'm not sure if it will; from the description of the problem it sounds like nodes aren't properly recalculated or updated when you backtrack through the closed list, but I haven't looked at your code in detail. I also haven't touched my own in several years, so I'm probably a bit rusty on the details.

I don't know A*, but I've implemented a path finder like yours, only I never implemented diagonal movements. Do you have trouble with your code or with your algorithm? Or do you want us to find that out? My algo checks every single possible cell recursively, which takes time, but the result is the shortest.

I tried A-star a long time ago (2003) in Aether. We used the Manhattan distance formula and it worked great. I tried other methods, but it seems that this one was optimal for our game, which uses 8 directions. You could try visiting this site for more information. Hope this helps.
https://www.allegro.cc/forums/thread/562059
Jmulator is a helper library created to facilitate the usage of keyboard emulators (also called keyboard encoders) in ActionScript. Even though a developer could easily use a keyboard encoder in the Flash and AIR runtimes without this library, the advantages offered by it are as follows:

- All keyboard codes dispatched by the board have already been researched for you. To use the library, a developer simply creates an object matching the board and listens to an event matching the label printed on the board itself;
- It's possible to easily enable or disable the OS typematic feature. Typematic mode is when the OS repeatedly sends key down events while a key is being held down;
- It's possible to easily listen to only on events, only off events, or both on and off events;
- Listen to one specific input by using the input's name, or to all inputs at once.

Three board models can be used with the library: Ultimarc I-PAC VE, Ultimarc I-PAC (untested) and Ultimarc I-PAC 4 (untested).

Sample:

import cc.cote.jmulator.emulators.IpacVe;
import cc.cote.jmulator.events.IpacVeEvent;

var emu:IpacVe = new IpacVe(stage);
emu.addEventListener(IpacVeEvent.INPUT_2COIN, test);

function test(e:IpacVeEvent):void {
    trace(e);
}
https://www.as3gamegears.com/debug/jmulator/
See also: IRC log

Tobie: Hi. Editor of the Generic Sensor API spec, for Intel.

Tobie: Lots of sensors being used nowadays.
... In mobile devices, now IoT. Lots of different sensors to expose. Goal of the project is to have a common base to expose sensors to developers.
... Sensor specs are often highly underspecified, which leads to interoperability issues.
... Also, exposing new sensors to the Web should not mean having to re-think the problem space again and again. Topics such as privacy and security should be common to all.
... These are the reasons why we're doing that.
... The spec itself does not specify any concrete interface.
... I'm looking at already published APIs to instantiate the spec. Also looked at other APIs such as the Automotive ones.
... Two main kinds of interaction with sensors:
... 1. continuous monitoring
... 2. change based on a threshold.

[showing example 3 from spec]

Tobie: The example shows the two use cases.
... Third case would be device orientation, where you want to continuously know how the device is in space.

One of the things that is missing from existing sensor specifications is better integration with hardware in terms of latency: low-level vs. high-level. The low-level concept is pretty useful in some use cases.

Louay: Support for multiple sensors of the same type?

Tobie: Totally. I forgot to mention that.

Louay: Can the browser do some discovery?

Tobie: Discovery is a very interesting problem. It very much depends on what kind of sensors you have. GPS chips would be local to the device, but as soon as you move to vehicles or WoT or Bluetooth networks, that's tough.
... Discovery is not in this version of the spec. We decided to tackle this in a future revision of the spec.
... There is ongoing work such as the Bluetooth API.

Anssi: The spec provides an extension point.

Louay: The WoT IG discusses this, with the Thing API for instance. I have demos around that, come to the hall to see it.
ahaller: One of our works will be the Semantic Sensor ontology.
... How will you know what type of sensor you will be dealing with?

Anssi: It's about feature detection.

Tobie: I have a section about that, but it's really about local sensors.
... There will be some information, not written in the spec. I want to make sure that concrete specs can extend that easily. I'm very much looking at the Generic Sensor API as a low-level building block that you can build on top of.

Nick_Doty: Thinking about privacy issues. Discovering sensors individually is useful, but I'm curious how we're going to do that for generic sensors.

Tobie: A lot of sensors have privacy problems. Many of them are common to most of them.
... So it seems useful to tackle these common problems in the spec.
... All of the boilerplate privacy/security concerns would be in the spec, while specific ones will be defined in concrete specs.
... Things such as naming sensors, exposing high-level or low-level sensors.
... When I started looking into it, I noticed that people kind of copy-pasted bits from other specs. I'm trying to rationalize this approach.

Kenneth: What is the timeline?

Riju: On the Chromium side, I'll start implementing next week. By the end of this year, I will release the Ambient Light implementation.

Tobie: Right, that spec will be written on top of the Generic Sensor API.

Nick_Doty: Other browser vendors' interest?

Anssi: Work started based on Mozilla feedback.
... We want to fix issues that arose in the first device APIs now.
... I cannot speak on behalf of other browser vendors. Microsoft has it on its roadmap.
... So there's some commitment. Apple is the only missing block here, I think.

Shijun_Sun: I'm from Microsoft. Do you think there will be some impact from the Device APIs WG to other working groups?

Tobie: Yes, I can't speak for other groups, but that's the goal.
... Geolocation would be a good candidate.
Shijun_Sun: It would be good to bring the topic to Media Capture, joint work between DAP and WebRTC.

<npdoty> Geolocation discussed device orientation this week
<npdoty> So I'm not sure we are all on the same page about where that api is going

Shijun_Sun: We've been looking at device rotations, tied to the camera.

Tobie: If you have use cases for exposing that to developers, it would make a lot of sense to use the same API. Naming might be used to split ambient light sensors that are not linked to the camera from those that are in the device, for instance.

<anssik> please feel free to open issues for topics raised at this session at

Tobie: You certainly don't think of getUserMedia in terms of a sensor thing, but the rest fits well, I think.

Anssi: Please open an issue on the GitHub repo, people!

jyasskin: Do you have a polyfill?

Tobie: Yes, we played with a few specs, including the Geolocation API.

<npdoty> Whoa, are we going to hang off of window rather than navigator?

Tobie: There are polyfills. I should have a working demo, but broke it on my way here, sorry about that.
... I don't think we're going to be able to polyfill device orientation in a way that solves performance issues, obviously!
... I think I have a polyfill for the Battery API.

<npdoty> I think there is interest in an absoluteorientationevent in DeviceOrientation that might fit that use case
<npdoty> currently supported by Safari and interest from Chrome

Dave_Raggett: Wondering about the relation with the Streams API.
... An electrocardiogram would be an example of a streaming sensor.

<jyasskin> npdoty: IDL interfaces produce a window.InterfaceName. One can suppress that, but not reparent it.

Tobie: If you look at the async primitive space, you have Promises for one-off events, then you have Observables for a bunch of events. The sensor space we're dealing with sits somewhere in the middle.
... I think it would be a mistake to not make the distinction between streams and discrete events.
... Our mental model is different.

Dave_Raggett: Heartbeat is just an example. You could have regular different measurements and other things.

<Mek> jyasskin: that doesn't mean it has to be window.sensors as opposed to navigator.sensors (but yeah, all the types would still pollute global scope)

Tobie: Two main kinds of sensors: precise interval measurement, and those where you're really interested in the notion of change.

<jyasskin> Mek: Yep. I think Tobie said he's dropped the foo.sensors namespace entirely.

Tobie: Most sensors pull the data out, and then pretend to push it when a threshold is met.
... The heartbeat example is interesting because it's a high-level sensor that could be built on top of lower-level sensors.
... The data you get out is a computation of other types of sensor data.

Mikko_Terho: Confused about high-level / low-level.

Tobie: For the purpose of the spec, those that are tied to an implementation, we call them low-level.

Dave_Raggett: There are different families of sensors in terms of patterns.
... In the IoT space, lots of people think in terms of reading a value. Not really thinking in terms of streams. There is perhaps a taxonomy of different sensor types independent of the implementation.

<npdoty> For Geolocation, we consider it a feature, rather than a bug, of having a datatype agnostic to the particular mechanism to determine it
<npdoty> Is it useful to define a lot of low level sensors if we don't expect them to be used or for users to understand them?

Mikko_Terho: For high-level sensors, you'll get a callback, basically. At the low level, you do the monitoring. My recommendation is that the high-level API output should be changes.
... Then if you want to monitor a complete stream, you need to change the model.

Riju: We're not considering cameras as sensors per se.

Dave_Raggett: Then there's some sort of assumption of what sensor types are.
... Have you considered the notion of timestamp?
Tobie: Every reading is timestamped with a high-res timestamp.
... Based on the local clock.

Claes: Have you considered using the Streams API in the end, for a future version perhaps?

Tobie: If you need a stream, I think you do not need something like the Sensor API.
... I'm thinking about 1 kHz sensors, roughly.

Claes: The Streams API is a good starting point. Lots of functionality from it could be reused. I think it should be considered.

Dave_Raggett: Maybe the assumptions should be made explicit at the beginning of the spec.

Tobie: OK, I'm happy to look into it, e.g. for the device orientation API.

Anssi: I think Claes is looking at it from the perspective of raw sockets.

Tobie: Could you put an issue on the GitHub?

Claes: Yes.

Adam_Alfar: [missed comment]

Mikko_Terho: Also need a way to agree on what constitutes a change between browser and sensor.

Tobie: Links back to the discussion on thresholds we had yesterday.

Mikko_Terho: It goes beyond that. I might only be interested in 5 degree changes in temperature, for instance.

Tobie: Right, the API addresses some use cases but not all of them.

Mikko_Terho: I would differentiate these 2 cases.

<npdoty> Yeah, there seems like an efficiency and privacy/minimization advantage to letting the site request the level of detail it needs

Riju: This API gives something to developers that is not exposed by other concrete APIs: the ability to set the frequency with which you get readings.

Dave_Raggett: Continuously-changing variables can be approximated with discrete values. Have you discussed this?

Tobie: No.

Anssi: I think we don't have concrete sensor products in mind for this.

Dave_Raggett: You could measure a sound, for instance.

Tobie: That's where I would say that this is out of scope.
... Would you be willing to open an issue?

<anssik> please feel free to open issues for topics raised at this session at

Dave_Raggett: Yes.
[some discussion on the topic]

<riju> Dave for scope, also look at and comment there

Shijun_Sun: Do you think it would be useful to give developers the accuracy of the measurement? For some sensors, the accuracy may depend on the value, fluctuating.

Tobie: Yes, accuracy is in.
... Specific to each measurement.
... Already in the draft.

Shijun_Sun: I noticed you're using Promises already. And you also talk about low latency. What do you consider low latency?

Tobie: The API is an EventTarget, so you're listening to callbacks. Promises are used as a convenience method for one-off readings.
... One of the things I want to do for performance is to see how this can be tied to requestAnimationFrame.
... The most critical requirements I've seen are linked to virtual reality, not to throw up ;)
... If it's combined with animation frame rates and WebGL, it should be doable to have the right latency.
... I'm trying to figure out how to do that in the spec. I don't even know if it's doable from an implementation perspective.

Virginie_Galindo: I'm from Gemalto. There are some concerns about sensor data connections.
... There were some new models starting to be discussed: you may want to read data from sensors without anyone listening to that.
... Have you thought about transferring sensor data in a cyphered way?
... Has it been discussed?

DKA: That's what I was going to ask.

Tobie: I'm not sure I understand the question properly.

Virginie_Galindo: The question is about indicating whether the reading was transferred securely.
... You need to have secret keys. You may want to convey data that is conveyed in clear and data that is not in clear.

Tobie: Would that be available to the Web application?

Virginie_Galindo: I'm really thinking about the future. Please think about that.

Anssi: I think we ran the spec by the security group for review already.

Tobie: I want to understand the use cases more precisely.
Nick_Doty: You want encrypted values coming from the sensors that the browser would not be able to decrypt?

Virginie_Galindo: One use case is that of a sensor providing cyphered values, not readable by the Web app (for privacy reasons). The second use case is aggregation of several data readings by the Web app and encryption by the Web app afterwards.

Kenneth: You may want to send the data to a server without reading it.

Tobie: A kind of opaque reading is what you need.

DKA: Related to permissions and fingerprinting and stuff like that. Regarding permissions, the Powerful Features spec talks about knowledge of external devices as being a powerful feature.

Francois: I think that's already in the spec.

Tobie: Yes.
... Permission is an implementation detail. You just scope that out in the spec.

DKA: The spec needs to be clear about that though.

Tobie: Right, the spec will have a section that is larger than the title it has right now.
... There are HTTPS concerns, so that's pulled in as a requirement in that spec.

DKA: I would also encourage you to take a look at fingerprinting.

Tobie: I started to read that on my way here. Great stuff (although this broke my laptop).

ahaller2: Fingerprinting may already be a lost cause.

DKA: Discussed on the TAG.

<dka> Unsanctioned tracking:

Dave_Raggett: Detecting when a Web site looks suspicious. Can this be detected? Could the spec say something about that? Third-party apps could perhaps monitor sensors somehow.

Nick_Doty: Researchers have done that in a way that instruments the browser.

Tobie: You really want to handle that somewhere else.

Dave_Raggett: Yes, but that could be called out in a Privacy section.

Johannes: Can the value be opaque to this API? Does the API really have to understand what it's dealing with?

Tobie: Does the operating system or user agent need to understand the value?
... This is the usual security vs. performance question.
<npdoty> I think there is a privacy advantage to the fusion approach as well, potentially

Tobie: You can build a pedometer using data from a gyroscope. You have to read it at something like 60 Hz. You're draining the battery. Now if you let the underlying system do that, it may be able to do sensor fusion and expose a pedometer, thus improving performance and saving battery.
... The more you move to the sensor, the better the results in terms of performance.
... I don't really have good use cases in mind for opaque readings. For the use cases that we've looked at that are around non-opaque readings, it's better to move closer to the sensor.

Louay: About connectivity issues with remote sensors. Do we need a feature in the Generic Sensor API?

Tobie: Yes, probably some error events. Part of handling the error could be to create a new sensor to talk to the sensor again. I don't expect the sensor to survive disconnection issues for the time being.

[breakout session adjourned]
https://www.w3.org/2015/10/28-dap-minutes.html
Hi, I am using matplotlib with Python to generate a bunch of charts. I use IDLE, have numpy version 1.0, and pylab version 2.5 with TkAgg, if that is relevant. My code works fine for a single iteration, which creates and saves 4 different charts. The trouble is that when I try to run it for the entire set (about 200 items), it can run for 12 items at a time, generating 48 charts. On the 13th, I get an error from matplotlib that says it can't access data. However, if I start the program at the point it just failed, it works fine and will create the charts for the next 12 before failing. I assume that I am not closing the files properly somehow or otherwise misallocating memory. I tried just reimporting pylab each iteration, but that didn't help. This is the function that creates a chart:

# create and save the figure
# (assumes pylab's functions are in scope, e.g. via "from pylab import *")
def CreateFigure(state, facility, unit, SO2, increment, year, P99):
    size = len(SO2)

    # Create plot
    figure(1, figsize=(10, 8))
    bar(range(1, size + 2), SO2, width=0.1, color='k')
    grid(True)
    xlim(0, size)
    ylim(0, 1.1 * SO2[-1])
    ylabel('SO2 [lb/hr]')
    heading = ConstructFigName(state, facility, unit, increment, year)
    title(heading)

    # set handles
    xticklines = getp(gca(), 'xticklines')
    xgridlines = getp(gca(), 'xgridlines')
    xticklabels = getp(gca(), 'xticklabels')
    yticklines = getp(gca(), 'yticklines')

    # set properties
    setp(xticklines, visible=False)
    setp(xgridlines, visible=False)
    setp(xticklabels, visible=False)
    setp(yticklines, visible=False)

    axhspan(P99, P99, lw=3, ec='r', fc='r')
    ax = gca()
    P99 = '%0.1f' % P99
    text(0.01, 0.95, '99th Percentile: ' + P99 + ' lb/hr', transform=ax.transAxes)

    figpath = ConstructFigPath(state, facility, unit, increment, year)
    savefig(figpath)
    close()

Can you see the problem?
If there is nothing obviously wrong… I tried to run the memory leak algorithm from the website, but the line:

a2 = os.popen('ps -p %d -o rss,sz' % pid).readlines()

generates an empty list rather than information about memory, and I am not clear on how to modify it to do something useful.

thanks,
-Lisa
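For what it's worth, one common pattern that avoids figure state accumulating across many chart-generation iterations is to use a non-interactive backend and close each figure by its explicit object reference rather than relying on the implicit "current figure". The sketch below is illustrative, not the original poster's code; the function and file names are made up:

```python
import matplotlib
matplotlib.use('Agg')  # non-GUI backend: figures are not retained for display
import matplotlib.pyplot as plt

def save_bar_chart(values, path):
    """Create one chart, save it, and free its figure before the next iteration."""
    fig = plt.figure(figsize=(10, 8))
    ax = fig.add_subplot(111)
    ax.bar(range(len(values)), values, width=0.1, color='k')
    ax.set_ylabel('SO2 [lb/hr]')
    fig.savefig(path)
    plt.close(fig)  # close this exact figure, not just whichever one is "current"

for i in range(3):
    save_bar_chart([1.0, 2.5, 3.2], 'chart_%d.png' % i)

print(plt.get_fignums())  # -> [] : no figures left open
```

The key points are creating the `Figure` explicitly, passing it to `plt.close()`, and staying off the GUI backend (TkAgg under IDLE keeps window state alive), so each iteration releases what it allocated.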
https://discourse.matplotlib.org/t/memory-problem/6407
- Defining Functions and Variables
- Pointers
- Different ways of passing arguments
- An understanding of inline functions (for the last example)
- OOP (helps a lot, especially in the last example)

const int ci = 5;

In the above single line of code, I defined an int, but what is that const keyword? It changes the normal int definition into a constant int definition, which means the value of ci cannot be edited at all. If you try to change the value of ci later in the program, you will get a compile error. Isn't that nice? You cannot change the value of const variables by mistake.

Const can be used in many situations and places. Look at the following uses...

Const Pointer

The address held by a constant pointer cannot be changed, but the value it points to can be changed.

int * const p = &i;

Read it from right to left: p is a constant pointer to an int, which points to the address of i. Totally simple.

Pointer to Const

What if we want to define a pointer to a constant int? Pay close attention:

const int * p = &i;

Again, read from right to left: p is a pointer to an int which is const.

int const * p = &i;

By reading from right to left we get: p is a pointer to a const int. It's the same as the one before functionally, so we can use both equally; use the one you are more comfortable with. The value p points to cannot be changed, but the address p points to can be changed.

Const Pointer to Const

This is a mix of the previous two versions mentioned, have a look:

int const * const p = &i;

p is a constant pointer to a constant int; neither the value nor the address of p can be changed.

Pay extra attention: const variables and pointers need definition and assignment at the same time, otherwise you get a compilation error.
Const As Return

There are times when you have to return a const reference/value from a function; sometimes it plays an important role in avoiding bugs:

char* func() {
    return "hey";
}

This function returns a pointer to a char array, actually returning a constant memory location. What happens if we write code like this:

char * p = func();
p[0] = 'a';

This tries to change the value of a character in the array, but that memory location was constant and not meant for changing, which causes the program to crash with a nasty "Segmentation Fault".

To solve this mistake and make it safe, use const like this:

const char* func() {
    return "hey";
}

Now you cannot mistakenly change the value like before, as the pointer returned is a pointer to constant char.

Const As Argument

Constant objects come in very handy as function arguments; it's like a promise to the compiler that you will not change the value of that argument inside the scope of the function. The syntax is simple and similar to what we had before:

void print(const int a) {
    cout << a;
}

Here we didn't want to change a, just do a simple print. For a by-value parameter like this, const only prevents changes inside the function; the real payoff comes with references, since a const reference parameter can bind to a literal, so a call such as print(4) compiles, which it wouldn't with a plain non-const reference.

OK, where is the perfect use of this feature?... When passing by reference! When you pass an object by reference to a function, you are allowed to directly change its value; if we want to restrict the function and avoid any changes being made (while still benefiting from passing by reference), we have to make it const:

void print(const LargeType& a) {
    //do some stuff
}

Now you are using the benefit of passing by reference and avoiding any changes made to the variable at the same time. I am going to show you a class example with lots of const usage at the end... now we have one more section...

Const In Classes

By defining a member function as constant, you are banning that member function from making any changes to member variables.
The syntax is as follows:

class foo {
    int val;
public:
    void const_member_func() const {
        cout << val << endl;
    }
};

Note the place of this const: this member function is banned from changing any member variables (here "val"). There are times when you have to make a member function constant. Look at the following example:

class foo {
    int val;
public:
    const int& ret() {
        return val;
    }
    foo(const int& a) : val(a) {}
};

int main() {
    const foo f1(2); //the const makes a problem with the current class definition
    cout << f1.ret() << endl;
    return 0;
}

This code gives a compilation error. The reason is that f1 is a constant foo, so the compiler has to make sure all members of the object remain unchanged. Just change the class definition like this to correct the error:

class foo {
    int val;
public:
    const int& ret() const {
        return val;
    }
    foo(const int& a) : val(a) {}
};

int main() {
    const foo f1(2); //Now it doesn't have any problems
    cout << f1.ret() << endl;
    return 0;
}

OK... now I am going to write a sample class containing some useful usages of const:

#include <iostream>
using namespace std;

class foo {
    int val;
public:
    foo() {} //default constructor
    foo(const int& value) : val(value) {} //constructor taking an initial value
    void operator=(const foo& right) { //operator= to assign data
        val = right.val;
    }
    const int& ret() const { //a function to return the private member, just for fun :D
        return val;
    }
    void set(const int& a) { //function for setting the member variable, operator= does this too
        val = a;
    }
    friend ostream& operator<<(ostream& o, const foo& right);
};

ostream& operator<<(ostream& o, const foo& right) {
    o << right.val;
    return o;
}

int main() {
    foo f1(3);
    foo f2 = foo(2);
    cout << f1 << " " << f2 << endl;
    f1.set(57);
    f2 = f1;
    cout << f1 << " " << f2 << endl;
    return 0;
}

See? I used const a lot here... it's quite useful in most programs: you are just restricting yourself and avoiding mistakes you might make by changing things when you don't want to.
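One more small sketch of my own (names are mine, not from the tutorial) that combines a const member function with pass-by-const-reference from the previous section — only const member functions can be called through a const reference:

```cpp
class Counter {
    int n;
public:
    explicit Counter(int start) : n(start) {}
    int value() const { return n; }  // const: callable on const objects
    void increment() { ++n; }        // non-const: mutates the object
};

// c is a const reference, so only const member functions may be called on it.
int read_value(const Counter& c) {
    // c.increment();  // compile error: increment() is not const
    return c.value();
}
```

If value() were not declared const, read_value would not compile at all — the same situation as the foo::ret() example above.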
Hope this tutorial could help you understand the usage of constants, as they are very useful! Refer to [ Here ] for reference on const.
http://www.dreamincode.net/forums/topic/152314-using-constant/
Suppose you want to convert the contents of an element into HTML, making each line of text a separate paragraph. To do this you need a way of splitting the element text into a series of strings using newline ('\n') as the delimiter. This post demonstrates how to do this using both XSLT1 and XSLT2 processors.

A frequent requirement when transforming XML documents is to remove some or all of a document's namespaces. This post demonstrates how to remove all namespaces in a document or retain certain namespaces.

This post demonstrates three methods of handling multiple default namespaces when transforming XML documents using an XSL stylesheet.

This post demonstrates ways of converting the attributes of elements of an XML document into either CSV or into their own elements.
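For the newline-splitting case above, a minimal XSLT 2.0 sketch (the element name `description` is illustrative, not taken from the original posts) can use the built-in tokenize() function:

```xml
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Turn each newline-separated line of <description> into its own <p>. -->
  <xsl:template match="description">
    <xsl:for-each select="tokenize(., '\n')">
      <p><xsl:value-of select="."/></p>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

XSLT 1.0 has no tokenize(), so the same effect there needs a recursive template built around substring-before() and substring-after().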
http://blog.fpmurphy.com/tag/xml/page/2
CodePlex Project Hosting for Open Source Software

:-) You're like the audience plant to ask just the right questions :-). They are the same. LINQ Expr Trees v1 evolved into Expr Trees v2, which are exactly the DLR trees. All the sources you see in our CodePlex project are the sources we're shipping in CLR 4.0 for all of the Expr Trees code. What might be confusing is that until we RTM CLR 4.0, we change the namespaces on CodePlex. We need to do that so that if you have a .NET 3.5 C# app that both uses LINQ and hosts, say, IronPython, then LINQ works as well as the DLR. You just can't hand those trees back and forth, which would be a very corner-case scenario if one at all. When we hit RTM, the CodePlex sources will have the same namespace, but we won't build the Microsoft.Scripting.Core.dll in our .sln file.

Bill

From: esforbes [mailto:notifications@codeplex.com]
Sent: Thursday, March 26, 2009 5:00 PM
To: Bill Chiles
Subject: Expression Trees + DLR Trees - why the duplication? [dlr:51467]

Not sure if this is the right place to ask this, but here goes: Why is there so much duplication between these two incredibly similar technologies? Why couldn't Expression Trees have been implemented as DLR Trees (or vice versa)?

First, we're backward compatible. Second, neither C# nor VB extended their LINQ support or lambda expressions to use any of the new ET stuff, so no LINQ provider will ever see anything they haven't seen before in CLR 4.0.

bill

From: esforbes [mailto:notifications@codeplex.com]
Sent: Thursday, March 26, 2009 5:11 PM
To: Bill Chiles
Subject: Re: Expression Trees + DLR Trees - why the duplication? [dlr:51467]

Oooh - hehe, okay that makes sense, especially considering all the talk I've been hearing about expression trees being augmented in .NET 4. I guess that 'augmenting' in this case meant replacing them with DLR trees. =) That's very good news - but what ramifications will this have on existing LINQ providers, since the expression trees they consume are getting augmented in ways the original designers probably didn't anticipate?

Yes, if you manually do that, the provider will choke :-). Our design goal was to introduce things so that if current binaries were coded defensively, then we should always fall into their 'else' or 'default' clause for unexpected node or property settings. We made sure that any existing nodes have no new semantics based on property values, or new properties we added that current binaries would not have known to detect and change their behavior on.

Cheers,

From: esforbes [mailto:notifications@codeplex.com]
Sent: Thursday, March 26, 2009 11:21 PM
To: Bill Chiles
Subject: Re: Expression Trees + DLR Trees - why the duplication? [dlr:51467]

What if someone constructs an expression tree using these more advanced nodes, and tries to send it to a LINQ provider?

Cool! It might be worth adding that even today you can create a subtype of Expression, ball it up into a tree, and hand that to a provider who won't know what to do with it. That's why they should have good defensive 'else' and 'default' branches for failing gracefully already :-).

From: esforbes [mailto:notifications@codeplex.com]
Sent: Friday, March 27, 2009 10:11 AM
To: Bill Chiles
Subject: Re: Expression Trees + DLR Trees - why the duplication? [dlr:51467]

Ahh, I understand - excellent. =) I'm very much looking forward to this in .NET 4. =) Thanks for the information!
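The "defensive default branch" advice in this thread is language-agnostic. Here is a sketch in TypeScript (all names hypothetical — this is not the .NET Expression API) of a tree walker that fails gracefully on node kinds it doesn't recognize instead of silently misinterpreting the tree:

```typescript
// Hypothetical expression-node shapes, including a catch-all for
// node kinds added by a newer runtime version.
type Expr =
  | { kind: 'const'; value: number }
  | { kind: 'add'; left: Expr; right: Expr }
  | { kind: string; [k: string]: unknown };

function evaluate(e: Expr): number {
  switch (e.kind) {
    case 'const':
      return (e as { kind: 'const'; value: number }).value;
    case 'add': {
      const a = e as { kind: 'add'; left: Expr; right: Expr };
      return evaluate(a.left) + evaluate(a.right);
    }
    default:
      // Defensive branch: an unknown node kind produces a clear
      // failure rather than a wrong answer.
      throw new Error('Unsupported node kind: ' + e.kind);
  }
}
```

A provider written this way keeps working against old trees and reports — rather than mangles — any node kinds it was never taught about.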
http://dlr.codeplex.com/discussions/51467
Introduction

Lift has several different ways of interacting with forms and AJAX, and you can quite easily configure Lift to create an AJAX form using SHtml. SHtml contains many useful methods and should be your main port of call for all of the out-of-the-box AJAX functionality. One of the other interesting facilities that this object provides is the ability to create forms that are serialized and sent to the server using JSON. Listing 1 details an example of using a JSON form.

Listing 1 Implementing a JSON form

import scala.xml.NodeSeq
import net.liftweb.util.JsonCmd
import net.liftweb.util.Helpers._
import net.liftweb.http.{SHtml, JsonHandler}
import net.liftweb.http.js.{JsCmd}
import net.liftweb.http.js.JsCmds.{SetHtml, Script}

class JsonForm {
  def head = Script(json.jsCmd)                                 #1

  def show = {
    "#form" #> ((ns: NodeSeq) => SHtml.jsonForm(json, ns))      #2
  }

  object json extends JsonHandler {                             #3
    def apply(in: Any): JsCmd = SetHtml("json_result", in match {
      case JsonCmd("processForm", _, params: Map[String, Any], _) =>   #4
        <p>Publisher: {params("publisher")},                           #4
           Title: {params("title")}</p>                                #4
      case x => <span class="error">Unknown issue handling JSON: {x}</span>
    })
  }
}

#1 Registers JSON serialize function
#2 JSON form wrapper
#3 Custom JSON handler
#4 Parameter handling

Here, you can see that this is a basic class with a few snippet methods. #1 details a snippet that simply adds a JavaScript element to the page. Typically, you would call this method from the <head> tag within your template so that the head is merged into the main page template. The point of this is that the JsonHandler implementation — in this instance the json object — contains JavaScript that registers a function to serialize the passed object to JSON. #2 is a snippet method setup that simply binds to the jsonForm method, passing the JsonHandler instance and the passed NodeSeq. This generates a <form> element that will wrap the fields in your template.

A point of note here is that the fields in your template won't have randomized names because they are not generated with SHtml helpers. They are "raw" from the template so, from a security perspective, this is important to keep in mind. Listing 2 shows the full contents of the template. Section #3 is the implementation of the JsonHandler, but specifically the point of interest is #4. This defines the handling of parameters passed from the browser. As params is a Map, you must be careful to request only the keys that actually exist, because requesting a missing key could throw a runtime exception.

Listing 2 Template implementation for JSON form

<div class="lift:surround?with=default;at=content">
  <head>
    <script type="text/javascript" src="/classpath/jlift.js" />   #1
    <lift:json_form.head />                                       #1
  </head>
  <h2>JSON Form</h2>
  <div id="form" class="lift:JsonForm.show">                      #2
    <p>Book Name: <br /><input type="text" name="title" /></p>    #3
    <p>Publisher: <br />
      <select name="publisher">                                   #3
        <option value="manning">Manning</option>                  #3
        <option value="penguin">Penguin</option>                  #3
        <option value="bbc">BBC</option>                          #3
      </select>                                                   #3
    </p>
    <p><input type="submit" /></p>                                #3
  </div>
  <hr />
  <h2>JSON Result</h2>
  <div id="json_result"></div>                                    #4
</div>

#1 Includes jlift.js
#2 Calls snippet method
#3 Input fields
#4 Result element

Lift actually provides a specialized client-side JavaScript library, jlift.js, with a set of helper functions for conducting operations such as serializing forms, handling collections, and so forth. Here it is included at #1, which is important; otherwise, the AJAX functionality would not operate as expected.

When the library is included and the head method in the JsonForm class is called from the template, you will be left with something similar to the following:

<script src="/classpath/jlift.js" type="text/javascript"></script>
<script type="text/javascript">
// <![CDATA[
function F950163993256RNF(obj) {
  liftAjax.lift_ajaxHandler('F950163993256RNF=' +
    encodeURIComponent(JSON.stringify(obj)), null, null);
}
// ]]>
</script>

You need to call the head method from your template so the rendered output includes this function.

Summary

We covered how you can leverage the SHtml AJAX builder methods to create page elements that trigger server-side actions. The powerful thing about Lift's AJAX system is that you can capture the logic to be executed upon making the specific callback within a Scala function.
http://www.javabeat.net/using-json-forms-with-ajax-in-lift-framework/
21 December 2009 06:30 [Source: ICIS news]

(Recasts fourth paragraph)

By Malini Hariharan

MUMBAI (ICIS news)--Southeast Asian petrochemical producers have limited legal options to defer or block the implementation of a zero-tariff regime that will take effect next year, a trade lawyer said on Monday.

Indonesian polyolefin producers were actively pushing for a postponement of the new trade policy, fearing competitive imports from

Six of the 10 members of the Association of Southeast Asian Nations (ASEAN) will implement the zero tariff next year, including Brunei, Malaysia and the Philippines.

But it would be difficult for

"It is pretty much impossible," he said.

Polymer producers in

The FTA terms allow suspension of tariff concessions by a country only "if it can be determined that increased imports have caused injury or economic damage to local companies," said Sim.

"But in the history of FTAs this has very rarely been implemented. And even if this is put in place it would be a temporary measure – say for a period of 3-5 years," he added.

Seeking antidumping actions from their respective governments is a second option available to petrochemical players wary of the new trade regime.

"If the industry is worried about a flood of imports they can go in for this option by proving that pricing was unfair and that the local industry suffered material injury. This type of action is possible and can be extended for an indefinite period," Sim said.

This route, however, entails high legal fees and would take a while to enforce. Companies also have to wait a few months before they can initiate action.

"You have to build a record. You cannot say on 2nd January that there is dumping. You need time to build the case; usually 6-8 months is enough to get data to make a claim," he added.

"The simple option [of raising import duties] ended when the FTA was signed. Now they have the safeguard option, which is untested, or antidumping," he added.

Meanwhile, the Indonesian media has reported that the country was seeking to delay the implementation of the ASEAN-China FTA that would likewise come into force on 1 January 2010.

There is a provision in the ASEAN-China FTA for a temporary delay in tariff reduction by reclassifying goods as 'sensitive' and 'highly sensitive' products. The duty elimination could then be delayed to 1 January 2015.

But the problem for

It was also uncertain whether

"Either way, for
http://www.icis.com/Articles/2009/12/21/9320703/no-easy-escape-for-asean-petchems-under-fta-in-10-lawyer.html
Test Run - Testing Silverlight Apps Using Messages

By James McCaffrey | March 2010

I am a big fan of Silverlight and in this month's column I describe a technique you can use to test Silverlight applications. Silverlight is a complete Web application framework that was initially released in 2007. The current version, Silverlight 3, was released in July 2009. Visual Studio 2010 provides enhanced support for Silverlight, in particular a fully integrated visual designer that makes designing Silverlight user interfaces a snap. The best way for you to see where I'm headed in this article is to take a look at the apps themselves. Figure 1 shows a simple but representative Silverlight application named MicroCalc. You can see that MicroCalc is hosted inside Internet Explorer, though Silverlight applications can also be hosted by other browsers including Firefox, Opera and Safari.

Figure 1 MicroCalc Silverlight App

Figure 2 shows a lightweight test harness, which is also a Silverlight application.

Figure 2 Test Harness for MicroCalc

In this example, the first test case has been selected. When the button control labeled Run Selected Test was clicked, the Silverlight test harness sent a message containing the selected test case input data to the Silverlight MicroCalc application under test. This test case data consists of instructions to simulate a user typing 2.5 and 3.0 into the input areas of the application, selecting the Multiply operation, and then clicking the Compute button. The application accepted the test case data and programmatically exercised itself using test code that is instrumented into the application. After a short delay, the test harness sends a second message to the application under test, requesting that the application send a message containing information about the application's state—namely, the value in the result field.
The test harness received the resulting message from the application and determined that the actual value in the application, 7.5000, matched the expected value in the test case data, and displayed a Pass test case result in the harness comments area.

This article assumes you have basic familiarity with the C# language, but does not assume you have any experience with Silverlight. In the sections of this column that follow, I first describe the Silverlight application under test. I walk you through the details of creating the lightweight Silverlight-based test harness shown in Figure 2, then I explain how to instrument the application. I wrap up by describing alternative testing approaches.

The Application Under Test

Let's take a look at the code for the Silverlight MicroCalc application that is the target of my test automation example. I created MicroCalc using Visual Studio 2010 beta 2. Silverlight 3 is fully integrated into Visual Studio 2010, but the code I present here also works with Visual Studio 2008 with the Silverlight 3 SDK installed separately. After launching Visual Studio, I clicked on File | New | Project. Note that a Silverlight application is a .NET component that may be hosted in a Web application, rather than a Web application itself. In the New Project dialog, I selected the C# language templates option. Silverlight applications can also be created using Visual Basic, and you can even create Silverlight libraries using the new F# language. I selected the default Microsoft .NET Framework 4 library option and the Silverlight Application template. Silverlight contains a subset of the .NET Framework, so not all parts of the .NET Framework 4 are available to Silverlight applications. After filling in the Name (SilverCalc) and Location (C:\SilverlightTesting) fields, I clicked the OK button. (Note that SilverCalc is the Visual Studio project name, and MicroCalc is the application name.)
Visual Studio then prompted me with a New Silverlight Application dialog box that can be confusing to Silverlight beginners. Let's take a closer look at the options in Figure 3.

Figure 3 New Silverlight Application Dialog Box Options

The first entry, "Host the Silverlight application in a new Web site," is checked by default. This instructs Visual Studio to create two different Web pages to host your Silverlight application. The next entry is the name of the Visual Studio project that contains the two host pages. The project will be added to your Visual Studio solution. The third entry in the dialog box is a dropdown control with three options: ASP.NET Web Application Project, ASP.NET Web Site, and ASP.NET MVC Web Project. A full discussion of these options is outside the scope of this article, but the bottom line is that the best general purpose option is ASP.NET Web Application Project. The fourth entry in the dialog box is a dropdown to select the Silverlight version, in this case 3.0.

After clicking OK, Visual Studio creates a blank Silverlight application. I double-clicked on the MainPage.xaml file to load the XAML-based UI definitions into the Visual Studio editor. I modified the default attributes for the top-level Grid control by adding Width and Height attributes and changing the Background color attribute. By default, a Silverlight application occupies the entire client area in its hosting page. Here I set the width and height to 300 pixels to make my Silverlight application resemble the default size of a WinForm application. I adjusted the color to make the area occupied by my Silverlight application clearly visible. Next I used Visual Studio to add the labels, three TextBox controls, two RadioButton controls and a Button control onto my application as shown in Figure 1.
Visual Studio 2010 has a fully integrated design view so that when I drag a control, such as a TextBox, onto the design surface, the underlying XAML code is automatically generated. After placing the labels and controls onto my MicroCalc application, I double-clicked on the Button control to add its event handler to the MainPage.xaml.cs file. In the code editor I typed the following C# code to give MicroCalc its functionality:

private void button1_Click(object sender, RoutedEventArgs e)
{
  double x = double.Parse(textBox1.Text);
  double y = double.Parse(textBox2.Text);
  double result = 0;

  if (radioButton1.IsChecked == true)
    result = x * y;
  else if (radioButton2.IsChecked == true)
    result = x / y;

  textBox3.Text = result.ToString("0.0000");
}

I begin by grabbing the values entered as text into the textBox1 and textBox2 controls and converting them to type double. Notice that, to keep my example short, I have omitted the normal error-checking you'd perform in a real application. Next I determine which RadioButton control has been selected by the user. I must use the fully qualified Boolean expression radioButton1.IsChecked == true. You might have expected that I'd use the shortcut form if (radioButton1.IsChecked). I use the fully qualified form because the IsChecked property is the nullable type bool? rather than plain bool. After computing the indicated result, I place the result formatted to four decimal places into the textBox3 control.

MicroCalc is now ready to go and I can hit the F5 key to instruct Visual Studio to run the application. By default, Visual Studio will launch Internet Explorer and load the associated .aspx host page that was automatically generated. Visual Studio runs a Silverlight test host page through the built-in Web development server rather than through IIS. In addition to an .aspx test host page, Visual Studio also generates an HTML test page that you can manually load by typing its address into Internet Explorer.
The Test Harness

Now that you've seen the Silverlight application under test, let me describe the test harness. I decided to use Local Messaging to send messages between the harness and the application. I began by launching a new instance of Visual Studio 2010. Using the same process as described in the previous section, I created a new Silverlight application named TestHarness. As with the MicroCalc application, I edited the top-level Grid control to change its default size to 300x300 pixels, and its background color to Bisque in order to make the Silverlight control stand out clearly. Next I added a Label control, two ListBox controls and a Button control to the harness design surface. After changing the Content property of the Button control to Run Selected Test, I double-clicked the button to generate its event handler.

Before adding the logic code to the handler, I declare a class-scope LocalMessageSender object and test case data in the MainPage.xaml.cs file of the harness so that the harness can send messages to the application under test:

The LocalMessageSender class is contained in the System.Windows.Messaging namespace so I added a reference to it with a using statement at the top of the .cs file so that I don't have to fully qualify the class name. I employ a simple approach for my test case data and use a colon-delimited string with fields for test case ID, first input value, second input value, operation and expected result. Next I add class-scope string variables for each test case field:

These variables aren't technically necessary, but make the test code easier to read and modify. Now I instantiate a LocalMessageReceiver object into the MainPage constructor so that my test harness can accept messages from the application under test:

The LocalMessageReceiver object constructor accepts three arguments. The first argument is a name to identify the receiver—this will be used by a LocalMessageSender object to specify which receiver to target.
The second argument is an Enumeration type that specifies whether the receiver name is scoped to the global domain or to a more restricted domain. The third argument specifies where the receiver will accept messages from, in this case any domain. Next I wire up an event handler for the receiver, and then fire up the receiver object:

Here I indicate that when the test harness receives a message, control should be transferred to a program-defined method named HarnessMessageReceivedHandler. The Listen method, as you might expect, continuously monitors for incoming messages sent from a LocalMessageSender in the application under test. Now I instantiate the sender object I declared earlier:

Notice that the first argument to the sender object is the name of a target receiver object, not an identifying name of the sender. Here my test harness sender will be sending messages only to a receiver named AppReceiver located in the application under test. In other words, receiver objects have names and will accept messages from any sender objects, but sender objects do not have names and will send messages only to a specific receiver. After instantiating the sender object, I wire up an event handler for the SendCompleted event. Now I can load my test cases and handle any exceptions:

I simply iterate through the test case array, adding each test case string to the listBox1 control. If any exception is caught, I just display its text in the listBox2 control used for comments. At this point I have a sender object in the harness that can send test case input to the application, and a receiver object in the harness that can accept state information from the application. Now I go back to the button1_Click handler method I added earlier. In the handler, I begin by parsing the selected test case:

Now I'm ready to send test case input to the Silverlight application under test:

I stitch back together just the test case input.
I do not send the test case ID or the expected value to the application because only the harness deals with those values. After displaying some comments to the listBox2 control, I use the SendAsync method of the LocalMessageSender object to send the test case data. I prepend the string "data" so that the application has a way to identify what type of message is being received. My button event handler finishes up by pausing for one second in order to give the application time to execute, and then I send a message asking the application for its state information:

Recall that I wired up an event handler for send completion, but in this design I do not need to perform any explicit post-send processing. The final part of the harness code deals with the message sent from the Silverlight application to the harness:

private void HarnessMessageReceivedHandler(object sender,
  MessageReceivedEventArgs e)
{
  string actual = e.Message;
  listBox2.Items.Add(
    "Received " + actual + " from application");

  if (actual == expected)
    listBox2.Items.Add("Pass");
  else
    listBox2.Items.Add("**FAIL**");

  listBox2.Items.Add("========================");
}

Here I fetch the message from the application, which is the value in the textBox3 result control, and store that value into a variable named actual. After displaying a comment, I compare the actual value that was sent by the application with the expected value parsed from the test case data to determine and display a test case pass/fail result.

Instrumenting the Silverlight Application

Now let's examine the instrumented code inside the Silverlight application under test. I begin by declaring a class-scope LocalMessageSender object.
This sender will send messages to the test harness:

Next I instantiate a receiver in the MainPage constructor to accept messages from the test harness, wire up an event handler, and start listening for messages from the harness:

As before, note that you assign a name to the receiver object, and that this name corresponds to the first argument to the sender object in the harness. Then I deal with any exceptions:

I display exception messages in the textBox3 control, which is the MicroCalc application result field. This approach is completely ad hoc, but sending the exception message back to the test harness may not be feasible if the messaging code throws the exception. Now I handle messages sent by the test harness:

The test harness sends two types of messages. Test case input data starts with "data" while a request for application state is just "response." I use the StartsWith method to determine if the message received by the application is test case input. If so, I use the Split method to parse the input into variables with descriptive names. Now the instrumentation uses the test case input to simulate user actions:

In general, modifying properties of controls, such as the Text and IsChecked properties in this example, to simulate user input is straightforward. However, simulating events such as button clicks requires a different approach:

The Dispatcher class is part of the System.Windows.Threading namespace, so I added a using statement referencing that class to the application. The BeginInvoke method allows you to asynchronously call a method on the Silverlight user interface thread. BeginInvoke accepts a delegate, which is a wrapper around a method. Here I use the anonymous delegate feature to simplify my call. BeginInvoke returns a DispatcherOperation object, but in this case I can safely ignore that value.
The Dispatcher class also has a CheckAccess method you can use to determine whether BeginInvoke is required (when CheckAccess returns false) or whether you can just modify a property (CheckAccess returns true). I finish my instrumentation by dealing with the message from the test harness that requests application state:

If the message received is just the string "response," I grab the value in the textBox3 control and send it back to the test harness.

The test system I describe in this article is just one of many approaches you can use and is best suited for 4-4-4 ultra-light test automation. By this I mean a harness that has an expected life of 4 weeks or less, consists of 4 pages or less of code, and requires 4 hours or less to create. The main advantage of testing using Messages compared to other approaches is that the technique is very simple. The main disadvantage is that the application under test must be heavily instrumented, which may not always be feasible. Two important alternatives to the technique I have presented here are using the HTML Bridge with JavaScript and using the Microsoft UI Automation library. As usual, I'll remind you that no one particular testing approach is best for all situations, but the Messages-based approach I've presented here can be an efficient and effective technique in many software development scenarios.

Dr. James McCaffrey works for Volt Information Sciences Inc., where he manages technical training for software engineers working at the Microsoft Redmond, Wash., campus. He has worked on several Microsoft products including Internet Explorer and MSN Search. Dr. McCaffrey is the author of ".NET Test Automation Recipes" (Apress, 2006). He can be reached at jammc@microsoft.com.

Thanks to the following technical experts for reviewing this article: Karl Erickson and Nitya Ravi
https://msdn.microsoft.com/ee336032.aspx
On Sun, 05 Aug 2007, Deepak Tripathi wrote: > Damyan Ivanov wrote: >> You're free to find other sponsors, of course. > > so i think now you can sopnsor it. I talked with Deepak on -mentors earlier about this package, and had the following concerns which whoever ends up sponsoring it should be aware of and either be comfortable with or require resolution of. 1. libwhisker2-perl isn't in CPAN 2. It actually installs a module called LW2.pm instead of Whisker2.pm in the perl namespace 3. It doesn't use strict; or use warnings; and the actual coding style of the module is needlessly opaque. 4. It likely could be entirely replaced with something that uses LWP or similar instead of (possibly incorrect) reimplementation of the same. 5. The only thing that uses it is nikto, and it's quite likely that the use of LW.pm can be entirely replaced with LWP. Don Armstrong -- More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly. -- Woody Allen
http://lists.debian.org/debian-perl/2007/08/msg00058.html
Realtime Client

Listens to changes in a PostgreSQL Database via websockets. This is for usage with the Supabase Realtime server.

Pre-release version! This repo is still under heavy development and the documentation is evolving. You're welcome to try it, but expect some breaking changes.

Usage

Creating a Socket connection

You can set up one connection to be used across the whole app.

```dart
import 'package:realtime_client/realtime_client.dart';

var client = Socket(REALTIME_URL);
client.connect();
```

Socket Hooks

```dart
client.onOpen(() => print('Socket opened.'));
client.onClose((event) => print('Socket closed $event'));
client.onError((error) => print('Socket error: $error'));
```

Subscribing to events

You can listen to INSERT, UPDATE, DELETE, or all * events. You can subscribe to events on the whole database, schema, table, or individual columns using channel(). Channels are multiplexed over the Socket connection.

To join a channel, you must provide the topic, where a topic is either:

- realtime - the entire database
- realtime:{schema} - where {schema} is the Postgres schema
- realtime:{schema}:{table} - where {table} is the Postgres table name
- realtime:{schema}:{table}:{col}.eq.{val} - where {col} is the column name, and {val} is the value which you want to match

Examples

```dart
// Listen to events on the entire database.
final databaseChanges = client.channel('realtime:*');
databaseChanges.on('*', (e, {ref}) => print(e));
databaseChanges.on('INSERT', (e, {ref}) => print(e));
databaseChanges.on('UPDATE', (e, {ref}) => print(e));
databaseChanges.on('DELETE', (e, {ref}) => print(e));
databaseChanges.subscribe();

// Listen to events on a schema, using the format `realtime:{SCHEMA}`
var publicSchema = client.channel('realtime:public');
publicSchema.on('*', (e, {ref}) => print(e));
publicSchema.on('INSERT', (e, {ref}) => print(e));
publicSchema.on('UPDATE', (e, {ref}) => print(e));
publicSchema.on('DELETE', (e, {ref}) => print(e));
publicSchema.subscribe();

// Listen to events on a table, using the format `realtime:{SCHEMA}:{TABLE}`
var usersTable = client.channel('realtime:public:users');
usersTable.on('*', (e, {ref}) => print(e));
usersTable.on('INSERT', (e, {ref}) => print(e));
usersTable.on('UPDATE', (e, {ref}) => print(e));
usersTable.on('DELETE', (e, {ref}) => print(e));
usersTable.subscribe();

// Listen to events on a row, using the format `realtime:{SCHEMA}:{TABLE}:{COL}.eq.{VAL}`
var rowChanges = client.channel('realtime:public:users:id.eq.1');
rowChanges.on('*', (e, {ref}) => print(e));
rowChanges.on('INSERT', (e, {ref}) => print(e));
rowChanges.on('UPDATE', (e, {ref}) => print(e));
rowChanges.on('DELETE', (e, {ref}) => print(e));
rowChanges.subscribe();
```

Removing a subscription

You can unsubscribe from a topic using channel.unsubscribe().

Disconnect the socket

Call disconnect() on the socket:

```dart
client.disconnect();
```

Duplicate Join Subscriptions

While the client may join any number of topics on any number of channels, the client may only hold a single subscription for each unique topic at any given time. When attempting to create a duplicate subscription, the server will close the existing channel, log a warning, and spawn a new channel for the topic. The client will have their channel.onClose callbacks fired for the existing channel, and the new channel join will have its receive hooks processed as normal.

Channel Hooks

```dart
channel.onError((e) => print("there was an error $e"));
channel.onClose(() => print("the channel has gone away gracefully"));
```

onError hooks are invoked if the socket connection drops, or the channel crashes on the server. In either case, a channel rejoin is attempted automatically in an exponential backoff manner.

onClose hooks are invoked only in two cases: 1) the channel was explicitly closed on the server, or 2) the client explicitly closed it, by calling channel.unsubscribe().

Credits

- Ported from the realtime-js library

License

This repo is licensed under MIT.
https://pub.dev/documentation/realtime_client/latest/
Implement the CSS Editing Object Model (see URL above). This is needed for direct class assignment to the selection (bug 16255) in the editor. Patch pending.

Created attachment 145757 [details] [diff] [review]
First candidate patch

This candidate patch is fully functional but not optimized/verified. I already use it in Nvu and it's very very cool.

If the string list allocation fails, this code crashes, no?

(In reply to comment #2)
> If the string list allocation fails, this code crashes, no?

Boris: right. I said not optimized, not verified. But the basis is here.

Created attachment 146257 [details] [diff] [review]
better fix

This is a better patch. I am ok to put #ifdef MOZ_STANDALONE_COMPOSER everywhere but I feel this could be useful (a) for Midas and other self-modifiable web pages (b) for tests.

Comment on attachment 146257 [details] [diff] [review]
better fix

bz, could you review please? Thanks a lot.

I won't be able to get to a review for at least a week... At first glance, though, this interface needs some documentation (like clearly explaining what it actually does). The question I raised in your blog still stands. Why should this be on a selector-specific interface as opposed to a more general CSSOM interface (like nsIDOMDocumentCSSStyle or something)?

Comment on attachment 146257 [details] [diff] [review]
better fix

Marking r- in the hope of getting a response to comment 6 and because of the problems listed below:

>Index: dom/public/nsIDOMClassInfo.h
>+ eDOMClassInfo_CSSSelectorQuery_id,

Please add this at the end of the list like the comments say to do.

>Index: content/html/style/src/nsCSSStyleRule.cpp
>+ mAtom->ToString(atomString);
>+ classString.Append(atomString);

I think it would be better to get the UTF8 string from the atom (no copy) and then AppendUTF8toUTF16. Saves a string copy...

>+nsCSSSelector::GetSelectorList(PRUint32 aSelectorFilter, nsIDOMDOMStringList
>+ mTag->ToString(tagString);

This clobbers the namespace prefix, no?
I don't think you want that.

>+ nsCOMPtr<nsDOMStringList> list = do_QueryInterface(aStringList);

I'm not sure how this is supposed to work, exactly... You're not QIing to an interface here.

>+ mClassList->ToDOMStringList(NS_LITERAL_STRING("."), aStringList);

It makes sense to call ToDOMStringList AppendToDOMStringList instead. Perhaps the first arg should be a PRUnichar, not an nsAString?

Why is this code passing around the sheet? It just wants the nsINameSpace, so it should pass that around, I think.

This code doesn't seem to differentiate between negated and non-negated simple selectors. This should probably be clearly documented in the interface.

>+DOMCSSStyleRuleImpl::GetSelectorList(PRUint32 aSelectorFilter, nsIDOMDOMStringList ** aStringList)
>+{
>+ if (!Rule()) {
>+ return NS_OK;

You need to null out *aStringList before returning here.

>+ *aStringList = list;
>+ NS_IF_ADDREF(*aStringList);

No need for the IF part. "list" cannot be null here. Just do:

NS_ADDREF(*aStringList = list);

Given this function, why doesn't the rule's GetSelectorList just take an nsDOMStringList* instead of taking nsIDOMDOMStringList*? Since the rule has to add things to the list, that would make a lot of sense to me...

>+CSSStyleRuleImpl::GetSelectorList(PRUint32 aSelectorFilter, nsIDOMDOMStringList * aStringList)
>+ nsCSSSelectorList * selectorList = mSelector;
>+ while (selectorList)

I think this wants to be a for loop.

>Index: content/html/style/src/nsICSSStyleRuleDOMWrapper.h
>+class nsICSSStyleRuleDOMWrapper : public nsISupports

Why this change? This makes all DOM style rules 4 bytes bigger; is there really a good reason for it?

>Index: content/html/style/src/nsCSSStyleSheet.cpp
>@@ -1002,6 +1014,7 @@
>+

Why this change?

>+CSSStyleSheetImpl::GetSelectorListInternal(PRUint32 aSelectorFilter, nsIDOMDOMStringList * aStringList, nsIDOMCSSRuleList * aRuleList)

Wouldn't a lot of this logic be better off in the relevant rules themselves?
Then new rule impls would not require changes to this code (eg see @-moz-document rules).

>+CSSStyleSheetImpl::GetSelectorList(PRUint32 aSelectorFilter, nsIDOMDOMStringList ** aStringList)
>+ if (nsnull == mRuleCollection) {

!mRuleCollection, please. Any reason this code isn't just using GetRules()?

>+ *aStringList = list;
>+ NS_IF_ADDREF(*aStringList);

Again, "list" can't be null here.

>Index: content/html/style/public/nsICSSStyleSheet.h
>+ NS_IMETHOD GetSelectorListInternal(PRUint32 aSelectorFilter, nsIDOMDOMStringList * aStringList,

Weird indent.

>Index: content/base/src/nsDocument.cpp
>+ PRInt32 sheetCount = GetNumberOfStyleSheets(false);

The GetNumberOfStyleSheets() call no longer needs a boolean (and needed a PRBool before anyway; the code as written would break some platforms).

>+ nsIStyleSheet *sheet = GetStyleSheetAt(sheetIndex, PR_FALSE);

Again, no need for the PR_FALSE.

>+ *aStringList = list;
>+ NS_IF_ADDREF(*aStringList);

Again, no need for the "IF" part.
https://bugzilla.mozilla.org/show_bug.cgi?id=240080
Johan Danforth's Blog

I upgraded to Win7rc as fast as I could from the beta and pretty much all the problems I had on the beta are gone! I've never experienced a beta OS as stable as this one, it's nothing short of impressive. Hangs on reboot/shutdown are gone. Annoying stuttering sound probs are gone. File copy probs are gone. Screen update probs gone.

I'm running 64-bit on a DELL Precision M4400 and the funny thing is that I think I've not installed any extra drivers except for the touchpad and the initial network drivers, and it runs like a charm. It was a nightmare to get it working properly on 64-bit Vista, even when it came pre-installed with it from DELL! (Shame on you DELL to ship a box that doesn't even run properly on the pre-installed software!!) I will have to install the extra software to get the built-in 3G mobile network running and that was kind of tricky on the beta, and the finger-print reader I'm not even going to bother with.

Glad to read about the upcoming RC release of Windows 7. Just wondering how I should "migrate" to the RC from my current beta as I've spent days and days to get it in pretty good shape. Lots of job, but Win7 is just worth it. From the Askperf webby:…

The HD of the new DELL Precision M4400 I have crashed, burned and died yesterday. I heard the screams 3 rooms away while drinking my morning coffee. It was horrific. Luckily it was powered on during the night, so the Windows Home Server (WHS) had a full backup. Had a few probs though:

- the recovery cd didn't have network drivers for my LAN card
- the drivers stored in the special folder on the backup were 64-bit, which isn't supported by the recovery cd (DUH!!)
- had to download 32-bit drivers from DELL and put on USB drive

Tip: make sure your c-drive matches the size of the backup, and create a "dummy" FAT recovery partition on your new HD which mimics the DELL RECOVERY partition. Also, make sure your USB drive is disconnected once you don't need it anymore.
Make sure it's disconnected before you finish the backup. Also, eject the recovery cd when rebooting, just to be sure! I had to do the restore 3 times before it managed to make my c-drive bootable! Quite annoying…

I'm having problems in my 64-bit Windows 7 beta with the built-in support for unzipping zip-files containing *lots* of files. It often hangs near the end and I have to reboot to sort things out. Downloaded the 64-bit version of 7-Zip and it works fine – so far. You might want to try that one out if you have the same problems. Eventually it may also hang, but I'll try it for a while and shout out if I notice any probs.

Some time ago I noticed a peak in Writespace downloads and I started to get some emails from people with requests for new features and stuff, which is fun. I saw from the stats that Lifehacker had a couple of articles on Writespace, as well as Danish PC-World, some Japanese site (I have no idea what the site is about :) and a few other places. I had no idea if Writespace actually worked in Japanese, but apparently it does! If I get some time for it I'll release a Word 2003 version and add a few of the requested new features.

More jQuery and Json…

To fill a listbox (select) with items from a Json call.
I got this helper class to handle the options/items:

public class SelectOption
{
    public String Value { get; set; }
    public String Text { get; set; }
}

A sample action/method in ASP.NET MVC that returns Json:

public JsonResult GetJson()
{
    var list = new List<SelectOption>
    {
        new SelectOption { Value = "1", Text = "Aron" },
        new SelectOption { Value = "2", Text = "Bob" },
        new SelectOption { Value = "3", Text = "Charlie" },
        new SelectOption { Value = "4", Text = "David" }
    };
    return Json(list);
}

Some HTML and jQuery to fill the list at page load:

<select id="MyList" />
<script type="text/javascript">
    $(document).ready(function() {
        $.getJSON("/Json/GetJson", null, function(data) {
            $("#MyList").addItems(data);
        });
    });

    $.fn.addItems = function(data) {
        return this.each(function() {
            var list = this;
            $.each(data, function(index, itemData) {
                var option = new Option(itemData.Text, itemData.Value);
                list.add(option);
            });
        });
    };

    $("#MyList").change(function() {
        alert('you selected ' + $(this).val());
    });
</script>

I'm stacking a few things here related to Json, jQuery and ASP.NET MVC so that I can get to them later on. jQuery to grab some Json when a web page is loaded:

$(document).ready(function() {
    $.getJSON("/MyController/MyJsonAction", null, function(data) {
        //do stuff with data here
    });
});

The smallish code in the ASP.NET MVC controller:

public JsonResult GetJson()
{
    return Json(GetList());
}

It's possible to use anonymous types with Json() as well, which is very useful but may be harder to unit test.

I've blogged earlier about the problems with Cassini and WCF on Windows 7 Beta (build 7000) and your best bet is to install IIS locally and test your services in there. Now, there might be some problems getting IIS to read your service certificate and my colleague Tomas helped me get things running.
I thought I might as well blog it here so that I got it documented… Open a VS2008 Command Prompt (I ran it as administrator) and create a certificate, then add it to your local store:

makecert.exe -sr LocalMachine -ss My -a sha1 -n CN=localhost -sky exchange -pe
certmgr.exe -add -r LocalMachine -s My -c -n localhost -r CurrentUser -s TrustedPeople

Then you have to give IIS access to the private part of the certificate and Tomas found some sample code to let you do that. The FindPrivateKey.exe source code is available on MSDN. Keep working on the command prompt:

FindPrivateKey.exe My LocalMachine -n "CN=localhost"

Note the output for private key directory and filename, for example:

Private key directory: C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys
Private key file name: 288538e27a2aebe9f77d2506bf6c836a_adf55683-4eae-4544-bbd1-d6844a44e538

Then use them to feed the final call to give the default IIS-user access to the private key, for example:

CACLS.exe C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\288538e27a2aebe9f77d2506bf6c836a_adf55683-4eae-4544-bbd1-d6844a44e538 /G "IIS_IUSRS":R

That should be it, and it worked on our machines.

Right, so I ran the new Web Platform Installer 2.0 Beta on my WHS and it seems (so far) to have worked out quite well. I created a new MVC website with File->New… and published it over to an MVC-directory on WHS (I have set that up earlier, with a file share and everything). Now, the version of IIS on Windows Home Server is IIS 6 (because WHS runs on Windows 2003, kind of), so ASP.NET MVC won't work right out of the box. I was hoping that the Web Platform Installer would have taken care of that, but apparently not.
Phil Haack wrote a long and very detailed blog post about how to set things up on IIS 6, so I'm following that, adding support for the .mvc extension, changing the routes in Global.asax and so on, and voila, it works.

Now, I want to set up extension-less URLs, which is prettier than having to have that .mvc extension in the URL. Phil covers this in his blog post as well, so I'm adding wildcard mappings, and here goes: Isn't that just great? I love my home server, maybe I can host my Writespace ClickOnce installer on my own server? Not too worried about the load on the server :) Watch this space for some sample stuff that will be located on my own server *big grin*

I've been thinking of setting up ASP.NET MVC 1.0 on my WHS and also start learning some Silverlight stuff, so I took a risk, went to the Microsoft Web Platform Installer page and clicked on the Beta 2.0 link. Downloaded the installer, marked ASP.NET MVC and the most necessary options and let it go. Should work, right? It had to reboot once to get Windows Installer 4.5 in, but it continued to chew on and after a few minutes: Yay!

Now I just have to get an MVC test application on there somehow… brb.
http://weblogs.asp.net/jdanforth/default.aspx
Hello everyone! I haven't been able to spend time programming for some months now, so I decided to brush up my very few skills by writing simple programs. I opted for "Guess the number" since it's pretty basic stuff (or so I thought). My main trouble was that I wanted to let the player quit by typing "quit", instead of asking for a number, for example "0". If any other newbie has tried out this program, the problem is when the user tries to input anything other than a number. Python (not without reason) doesn't type-cast it into anything with int(), so the program crashes. The following is my solution to this problem, but it doesn't look very elegant at all. I would appreciate any suggestions on making the code better. Thanks!

from random import *

print("\n* * * Welcome! * * *\n\nGo ahead and try to guess a number from 1 to 10. If you feel like quitting, type QUIT\n")

seed()
playing = True
validating = True
number = randint(1,10)

while playing:
    while validating:
        guess = input("Your guess? ")
        try:
            int(guess)
            break
        except:
            if str.upper(guess) == "QUIT":
                print("Goodbye!")
                validating = False
                playing = False
            else:
                print("\nI don't think that was a number\n")
    if playing == False:
        break
    if int(guess) == 0:
        print("Goodbye!")
        playing = False
    elif int(guess) < number:
        print("\nA little higher...\n")
    elif int(guess) > number:
        print("\nA little lower...\n")
    else:
        print("\nYou got it!\n")
        playing = False
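For comparison, here is one way the input handling could be factored (a sketch, not part of the original question; parse_guess is an invented name): classify the raw string in one place, so the game loop does not need nested flags.

```python
# Sketch: classify one line of player input in a single helper.
# Returns an int guess, or None when the player typed the quit command;
# raises ValueError for anything that is neither.
def parse_guess(text):
    if text.strip().upper() == "QUIT":
        return None
    return int(text)  # int() raises ValueError on non-numeric input

# Classifying a few sample inputs:
for raw in ["7", "quit", "abc"]:
    try:
        print(raw, "->", parse_guess(raw))
    except ValueError:
        print(raw, "-> not a number")
```

The try/except then lives in exactly one spot, and the loop can decide what to do based on the return value instead of juggling the playing/validating booleans.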
http://www.gamedev.net/topic/637343-any-suggestions-for-better-code-python-guess-the-number/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
How do I fix my jquery-ui autocomplete problem (details below)?

I followed your tutorial on Elasticsearch/Searchkick and implementing jQuery autocomplete. It works when I use "Test" as the data, however when I try to use my data from the posts table, it doesn't work. So when my posts controller file's autocomplete action looks like this, it works:

def autocomplete
  render json: ["Test"]
end

When I change it to this, and add the changes to the post model file, it stops working - controller file:

def autocomplete
  render json: Post.search(params[:term], fields: [{title: :text_start}], limit: 10).map(&:title)
end

post.rb model file:

searchkick text_start: [:title]

any help is much appreciated. It's really starting to bug me because it should be quick and easy like your tutorial. I have asked multiple questions on Stack Overflow with no response and researched for days, spent hours doing several different tutorials with different methods. I subscribed to your website hoping you could help. thank you

Hey Kosta,

I'm not sure entirely what's wrong, but here's what I would do:

- Take this and run it in your rails console: Post.search(params[:term], fields: [{title: :text_start}], limit: 10).map(&:title)
- If this doesn't return an array of strings like the test example, then you know this is the problem and you can start fiddling with that to get it to return the right format. Maybe you just needed to reindex or something simple.
- If it does return an array of strings, then go into your browser and check that the JSON it receives is correct. You can print it out in the browser with a console.log to verify that.
- If you are receiving the correct JSON in the browser, then you can check to make sure this gets passed over to the autocomplete correctly. You know this probably isn't the case because the test JSON did work, so it's the least likely to be the problem.
That should help you break apart the problem to figure out what's the piece causing the problem and it'll probably be obvious once you get there. :)
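To make the expected data shape concrete, here is a plain-Ruby sketch (no Rails or Searchkick involved; the Post struct is a stand-in for real records): the browser-side autocomplete wants a flat JSON array of strings, which is what map(&:title) followed by JSON rendering produces.

```ruby
require 'json'

# Stand-in for records returned by Post.search(...); only the title
# matters for the autocomplete payload.
Post = Struct.new(:title)

posts = [Post.new("First post"), Post.new("Second post")]

# Same transformation the controller action performs before rendering.
payload = posts.map(&:title).to_json
puts payload  # -> ["First post","Second post"]
```

If the rails console step above returns anything other than a simple array of strings like this, that mismatch is the first thing to fix.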
https://gorails.com/forum/how-do-i-fix-my-jquery-ui-autocomplete-problem-details-below
Most of us are familiar with the MVC type UI frameworks such as Struts and Spring MVC. They use a similar Model, View and Controller type architecture and each one has either XML or annotation type configuration. Apache Wicket uses a component based approach to web development and uses convention over configuration. After initial framework setup, the minimum two files a developer needs to create for a web page are a Java class and an HTML file, and that's it: no scriptlets or JSP tags necessary. The convention is that the Java class and the HTML file need to have the same name and be in the same package. So the HTML file takes the place of a JSP, the Java class takes the place of a Controller, and we'll look briefly at Wicket's Model concept in a bit.

Because Wicket is a component based framework, reusable components can be created that can be used on many different pages. For example, Wicket has an HTML data table component which has sortable headers and paging, and can be styled with CSS by simply creating an instance of it on a page, configuring it, and attaching a style sheet. At my current client, we use several of these data tables on a single page and we created our own component by wrapping it to simplify the API. As another example, a project exists called wiquery which wraps JQuery UI into Wicket components. So if I needed a JQuery UI date picker, all I would need to do is instantiate the component on the page and put an id in the HTML page. All the JavaScript is wrapped into that component so there is no need to write any (more on that in another post).

Let's take a quick look at the framework setup. Wicket provides a maven archetype to get started so there's no need to start from scratch. We can start by going to the Wicket Quickstart Page and following the instructions. At the time of this writing, I used version 1.5-RC3. This archetype provides the web.xml, WebApplication class (configuration), and a simple web page.
Here's how we do that:

mvn archetype:generate -DarchetypeGroupId=org.apache.wicket -DarchetypeArtifactId=wicket-archetype-quickstart -DarchetypeVersion=1.5-RC3 -DgroupId=org.acme.wicket.example -DartifactId=wicket-example -DinteractiveMode=false

Now, let's take a look at the framework pieces that the above command gives us:

web.xml

Here all that is needed is the filter that directs the requests to Wicket:

<filter>
  <filter-name>wicket.wicket-example</filter-name>
  <filter-class>org.apache.wicket.protocol.http.WicketFilter</filter-class>
  <init-param>
    <param-name>applicationClassName</param-name>
    <param-value>org.acme.wicket.example.WicketApplication</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>wicket.wicket-example</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

WebApplication

The web application is the central configuration class. This is where the home page is set (the page that is loaded when a user goes to the base url) and other things (which we won't get into in this post) like Spring integration, exception handling, security setup, and other configuration.

package org.acme.wicket.example;

import org.apache.wicket.protocol.http.WebApplication;

/**
 * Application object for your web application.
 * If you want to run this application without deploying,
 * run the Start class.
 *
 * @see org.acme.wicket.example.Start#main(String[])
 */
public class WicketApplication extends WebApplication
{
    /**
     * @see org.apache.wicket.Application#getHomePage()
     */
    @Override
    public Class getHomePage()
    {
        return HelloWorldPage.class;
    }

    /**
     * @see org.apache.wicket.Application#init()
     */
    @Override
    public void init()
    {
        super.init();
        // add your configuration here
    }
}

The archetype also provides a simple home page class and HTML file (which I'm not going to post here).
Let's create a simple "Hello World" WebPage. Again, the only two things we'll need are a Java class and an HTML file. Here's the Java class:

package org.acme.wicket.example;

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.link.Link;

public class HelloWorldPage extends WebPage
{
    private static final long serialVersionUID = 1L;

    public HelloWorldPage()
    {
        add(new Label("helloWorld", "Hello World!!!"));
        add(new Link("nextPage")
        {
            private static final long serialVersionUID = 1L;

            @Override
            public void onClick()
            {
                setResponsePage(HomePage.class);
            }
        });
    }
}

Here's the HTML file:

<html xmlns:wicket="http://wicket.apache.org">
<head>
    <title>Hello World</title>
</head>
<body>
    <div wicket:id="helloWorld">helloWorld goes here</div>
    <a wicket:id="nextPage">Go to the next page...</a>
</body>
</html>

So what happened is the wicket:id was attached to an HTML tag and the content of that tag got replaced with "Hello World!!!". This works with any HTML tag; we could have used a span, a table td, even the body tag. A Label in Wicket is the most basic component you can use to display some value, but there are many other components that work this way and will replace a div or span with an HTML table or Ajax Tree view (Wicket AJAX is another post). Most built-in components will take what's called a Model, which allows a lot of flexibility in how to display data on the page. I'll go into more detail about it in another post.

A link was also created to link to another web page, but links can be used in other ways also. In addition to pages, Wicket has the concept of a Panel, which is a section of a web page. A panel can be placed anywhere on a page inside any tag (as mentioned above). A page can have many panels on it and a panel could have panels on it, so there is a lot of flexibility in how pages are created.
At my current client we have a single page website with panels for the header, left navigation and content, and we only replace the content panel when we create new screens (like a mini-portal).

I hope you are inspired to try out this framework. I've really enjoyed working with it at my client and it's made web development fun. It's hard to show the true value of this framework with such a simple example, but it really shows as a project goes along. Creating a new custom component is a lot cleaner than copying bits of XML and JSP around, and it really shines as more and more reusable components are created. Just the fact that you can use wiquery to create multiple JQuery UI components on a single page without writing a line of JavaScript has a lot of value. Thank you for reading and look for more posts on Apache Wicket in the near future.

One thought on "Component UI Development with Apache Wicket"

Nice Post Thank you!
https://objectpartners.com/2011/05/05/component-ui-development-with-apache-wicket/
#include <residue_view.hh> Inherits ost::mol::ResidueBase. Atoms are added with ResidueView::AddAtom() and removed with ResidueView::RemoveAtom(). Like other view operations, these operations only affect the view and do not alter the structure and topology of the underlying residue. Definition at line 39 of file residue_view.hh. Create invalid ResidueView. Create new residue view. Add atom to view. If ViewAddFlag::CHECK_DUPLICATES is set, the method will ensure that the handle is not already included in the view. Add atom to view. If ViewAddFlag::CHECK_DUPLICATES is set, the method will ensure that the view is not already included in this view. This method is an alias for `residue_view.AddAtom(atom_view.GetHandle())` Apply entity visitor to whole chain. Find residue by residue handle. Find atom by name. return number of atoms in this residue view. get list of atoms in this view Get entity's center of atoms (not mass weighted). Returns the center of all the atoms in an entity. This is similar to GetCenterOfMass(), but the atoms are not mass weighted Get entity's center of mass (mass weighted). get parent chain view. get entity Get entity's geometric center. Returns the geometric center of the entity's bounding box by calculating (GetGeometricStart()+GetGeometricEnd())/2 get handle this view points to get index of residue view in chain Get entity's mass. Check whether the view includes the the given atom. remove given atom from view all bonds involving this atom will be removed from the view as well. remove all atoms all bonds involving one of the atoms will be removed from the view as well. set the index of residiue view in chain should be called from chainview whenever indexes change Get internal view data. Definition at line 69 of file residue_view.hh. Get internal view data. Definition at line 65 of file residue_view.hh. Definition at line 41 of file residue_view.hh.
http://test.openstructure.org/doxygen/dc/db5/classost_1_1mol_1_1_residue_view.html
Date: Fri, 18 Aug 2006 12:47:58 -0500
From: joetaxpayer
Subject: Re: Which is best: 401K, ESOP (with 15% discount from employer), Roth, or ... ?
Newsgroups: misc.invest.financial-plan

franklin.bowen@gmail.com wrote:
>.

1) Max out the ESOP and be prepared to flip the stock when it hits the account. This is a 17% return (100/85) on an average holding period of 3 months. (deposits over 6 months, right?) Even if you take on some debt, this pays off. There are variations on this plan, such as sell only 90% of the stock. You'd get all your money back, plus a bit more, and have 10% invested in this stock. It would take years till this became overweighted in your portfolio.

2) If the 401 offers a match, put in as much money as gets matched.

3) MAX out the Roth, if you are eligible.

4) Go back to the 401 till that's maxed.

The fear of having too much pre-tax money in retirement is valid, but unlikely. You'd first have to be in a low bracket now, say 15%, which is single taxable up to $29.7K, but somehow save enough to force withdrawals above that amount to jump to the 25% bracket.

JOE
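A quick check of the arithmetic behind the "17% return (100/85)" claim above (an illustrative sketch, not part of the original post):

```python
# Buy at the 15% ESOP discount (pay 85) and sell at full price (100):
# the gain on the money actually put in is 100/85 - 1, a bit over 17.6%,
# which the post rounds down to 17%.
purchase_price = 85.0
sale_price = 100.0

gain_pct = (sale_price / purchase_price - 1) * 100
print(round(gain_pct, 1))  # -> 17.6
```

Note the denominator is the discounted purchase price, not the full price, which is why the return is more than the 15% discount itself.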
http://www.info-mortgage-loans.com/usenet/posts/13821-22754.misc.invest.financial-plan.shtml
Suppose I use Entity Framework 4.0, how to generate a data transfer object manually? Need an example with C#.

Hi ardore;

To create a Data Transfer Object / DTO, all you need to do is to create a new class which has public accessors for the properties you wish to publish. You can then fill a collection of these objects by using it in the Select clause of the LINQ to EF query. For example, let's say you query the Customer table, which has 10 columns, but you only need three columns to be sent to the receiver of this collection. Then you can do this:

// DTO class
public class MyDTO
{
    public String CompanyName { get; set; }
    public String ContactName { get; set; }
    public String PhoneNo { get; set; }
}

List<MyDTO> customerDTO = (from c in db.Customer
                           select new MyDTO()
                           {
                               CompanyName = c.CompanyName,
                               ContactName = c.Contact,
                               PhoneNo = c.Phone
                           }).ToList();

That is it.
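The same projection idea, sketched outside Entity Framework in TypeScript (my illustration, not from the original answer; all names here are made up): the DTO type lists only the fields to publish, and a map step plays the role of the LINQ Select clause.

```typescript
// Full entity as stored (the article's example has 10 columns; 5 here).
interface Customer {
  id: number;
  companyName: string;
  contact: string;
  phone: string;
  internalNotes: string;
}

// The DTO publishes only what the receiver needs.
interface CustomerDTO {
  companyName: string;
  contactName: string;
  phoneNo: string;
}

// Equivalent of `from c in db.Customer select new MyDTO { ... }`.
function toDTO(customers: Customer[]): CustomerDTO[] {
  return customers.map((c) => ({
    companyName: c.companyName,
    contactName: c.contact,
    phoneNo: c.phone,
  }));
}

const rows: Customer[] = [
  { id: 1, companyName: "Acme", contact: "Ann", phone: "555-0100", internalNotes: "vip" },
];

console.log(toDTO(rows));
```

The point in both versions is the same: the wide entity never leaves the data layer, and consumers only ever see the narrow shape.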
https://social.msdn.microsoft.com/Forums/en-US/eecd80bf-78fa-4668-8b27-a17de319b548/generate-dto-from-entity-framework?forum=adodotnetentityframework
SDK: LPC54628 2.3.0

When I configure a new project with this SDK, a header file (FatFs) is missing when compiling: fsl_dspi.h. I searched the SDK directory; there is no such file.

../fatfs/fatfs_source/fsl_sdspi_disk.c:39:22: fatal error: fsl_dspi.h: No such file or directory
#include "fsl_dspi.h"

Can anyone help me? Thanks

I have a similar problem with using an SD card (FatFs) on KL27Z - SDK 2.3.0. I'm also missing fsl_dspi.h. I found that in SDK v2.0 fsl_dspi is a driver for SPI, and in the 2.3 version this driver is replaced with fsl_spi. I think that in fsl_sdspi_disk the SPI should be configured with the new driver (rewrite functions spi_init: dspi_transfer_t = spi_transfer_t, spi_exchange, spi_set_frequency, etc.). I didn't have time to implement these changes, so I don't know if it will work.
https://community.nxp.com/thread/468577
Feature #8948: Frozen regex

Added by sawa (Tsuyoshi Sawada) over 6 years ago. Updated 4 months ago.

Description

=begin
I see that frozen string was accepted for Ruby 2.1, and frozen array and hash are proposed in. I feel there is even more use case for a frozen regex, i.e., a regex literal that generates a regex only once. It is frequent to have a regex within a frequently repeated portion of code, and generating the same regex each time is a waste of resource. At the moment, we can have code like:

    class Foo
      RE1 = /pattern1/
      RE2 = /pattern1/
      RE3 = /pattern1/
      def classify
        case self
        when RE1 then 1
        when RE2 then 2
        when RE3 then 3
        else 4
        end
      end
    end

but suppose we have a frozen Regexp literal //f. Then we can write:

    class Foo
      def classify
        case self
        when /pattern1/f then 1
        when /pattern1/f then 2
        when /pattern1/f then 3
        else 4
        end
      end
    end
=end

We already have immutable (created only once) regexps: it is always the case for literal regexps, and for dynamic regexps you need the 'o' flag: /a#{2}b/o. So they are in practice immutable, but currently not #frozen?. Do you want to request it? I think it makes sense. You can check with #object_id to know if 2 references reference the same object.

    def r; /ab/; end
    r.object_id # => 2160323760
    r.object_id # => 2160323760

    def s; /a#{2}b/; end
    s.object_id # => 2153197860
    s.object_id # => 2160163740

    def t; /a#{2}b/o; end
    t.object_id # => 2160181200
    t.object_id # => 2160181200

Besides regexps being frozen, there might still be a use case for regexp literals that would only be allocated once:

    def r1; /ab/; end; r1.object_id # => 70043421664620
    def r2; /ab/; end; r2.object_id # => 70043421398060
    def r3; /ab/f; end; r3.object_id # => 70043421033140
    def r4; /ab/f; end; r4.object_id # => 70043421033140

I think it's in the same vein as #8579 and #8909.
Updated by sawa (Tsuyoshi Sawada) over 6 years ago

jwille, I agree with the use case, but it would be difficult to tell which regexes are intended to be the same, so I would not request that feature. Probably, it makes sense to have all static regexes frozen, and to have the f flag freeze dynamic regexes as well. I can't think of a use case for a regex that is immutable but not frozen. I am actually not clear about the difference.

jwille, my understanding of the string case in your example is that the two strings would count as different objects, but that repeated calls to the same method would not create new strings. That would mean one of the strings could be "ab" and the other a different string such as "cd". If that is what you intended for your regex examples, then there is no difference.
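The //o behavior discussed in this thread is easy to demonstrate. The following sketch (method names invented for illustration) uses Object#equal? to check object identity rather than comparing raw object_ids:

```ruby
# An interpolated regexp literal normally builds a new Regexp object on
# every evaluation; the /o flag tells Ruby to compile it only once.
def fresh_each_time
  /a#{1 + 1}b/
end

def compiled_once
  /a#{1 + 1}b/o
end

# Without /o: two calls yield two distinct objects.
a = fresh_each_time
b = fresh_each_time
puts a.equal?(b)   # => false

# With /o: every call returns the very same object.
c = compiled_once
d = compiled_once
puts c.equal?(d)   # => true
```

Both regexps match the same strings; the /o flag only changes how many objects get allocated across repeated evaluations.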
https://bugs.ruby-lang.org/issues/8948?tab=properties
Posts by LordPayso:

Ok. So I accidentally figured this out. Initially we were setting the texture of our sprites with the fromImage() method. I then refactored the initial setting to use the fromFrame() method and now it is working. Apologies for not being clear from the start.

No errors printed to the console. It is in the WebView browser used for Cordova.

On our Android 4.1.* devices the setTexture method is called but the texture does not change/update. I can see this visibly, and when I log out the value of the sprite's texture source, it remains the texture that it was initially set to.

Really cannot think of a way around this apart from using a display object container and then removing/adding sprites as appropriate.

Is there any way to clear a texture prior to applying a new one? I have added a spritesheet to no avail, like so:

    var loader = new PIXI.AssetLoader(['images/daggers/sprDagger.json']);
    loader.onComplete = function(elm){
        console.dir(PIXI.TextureCache);
    };
    loader.load();

And then call the frame:

    dagger.setTexture(PIXI.Texture.fromFrame(prefix + dagger.number.toString() + '0.png'));

setTexture() of Sprite not called (topic posted in Pixi.js)

I am updating the texture of a sprite when it is rotated, to reflect different shading, like so:

    d.updateDaggerShade = function(dagger, color, pos) {
        var prefix;
        (color === 'white') ? prefix = 'w' : prefix = 'b';
        dagger.setTexture(PIXI.TextureCache[prefix + dagger.number.toString() + (pos + 1).toString()]);
        console.log(dagger.texture.baseTexture.source.src);
        return;
    };

where d is the sprite object. All the textures have been cached also. This works fine on desktop and iPhone, but not Android. We are using Cordova to wrap the web code to deploy as an app. Would using a sprite sheet be a more robust alternative? I do not need to animate the textures, just change them when the rotation has finished.
Sprites duplicated when dragging to new position on Android (replies and topic in Pixi.js)

I think we have attributed this to clearRect() failure in this version of Android.

Just when I wrap it in PhoneGap for Android.

Hi, we have just started using Pixi.js and are noticing some visual artefacts on Android. We are building a game, much like chess, where a user starts off with some pieces and drags them to a board layout. The starting position for the pieces is at the bottom of the UI, away from the board. The problem is that when the sprites are dragged from their starting position to the board, we still see a representation of that sprite in its original position in the UI. This is only happening on Android, and we are using web technologies with PhoneGap for the build. It is proving really hard to fix and/or debug due to the nature of the build process, and it is fine on web and iOS also. Any pointers would be well received.

I have done some more testing and I do not believe the sprites to be actually duplicated. I used the following code to check that they were the only sprites present on the board:

    PIXI.DisplayObjectContainer.prototype.contains = function(child) {
        return (this.children.indexOf(child) !== -1);
    };
http://www.html5gamedevs.com/profile/4718-lordpayso/
Subject: Re: [Boost-cmake] Building boost "with an uninstalled build" and linking to boost libs
From: Brian Davis (bitminer_at_[hidden])
Date: 2010-03-25 00:10:03

On Wed, Mar 24, 2010 at 4:04 PM, Michael Jackson <mike.jackson_at_[hidden]> wrote:

> I have been playing with "ExternalProject_Add" and I don't think you are
> going to be able to do what you are trying to do. I was using the following:

Yes, exactly, which is why I switched to "with an uninstalled build". I also made reference to the brokenness and the chicken-and-the-egg problem.

Here is the question I asked on the CMake mailing list: why can't find_package know where a package will be before compilation, as well as where it is after? I think the build system has everything it will need to know this in the build spec, and if it does not... well, time to redesign CMake. As far as the CMake Users Mailing list... I have:

I mean, think about it for a moment... BoostCore.cmake generates a list of dependencies and specifies where the output files will be. Why do we need to do a build to FindBoost? We know where it will be. The targets need to be specified (from my current understanding) in BoostCore.cmake, and maybe they are... I am still trying to digest this file, but seemingly it does not, as ${Boost_FILESYSTEM_LIBRARY} and friends are only generated in FindBoost, not in BoostCore.cmake, which I am sure is by design based on CMake methodologies.

There have been references made to boost "abusing" CMake as a build system. I look at the Boost build system as a godsend, especially when I was developing for embedded targets and doing cross compilation to various targets. IMHO, CMake could stand to grow and learn from these types of stress tests. I mean, how hard is it to create a build tool that can perform dependency checking across multiple 3rdParty libs?
I presume that in order for CMake to be able to handle this, there needs to be the concept of namespaces for variables relating to the 3rdParty libs, eliminating the name-clashing problem (much like BoostCore.cmake prepends almost all :-( variables with BOOST_). I also liked the /boost//filesystem syntax in Jamfiles.v2. The Boost.Build v2 version of projects seemed much cleaner and solved this namespace issue (at least as far as dependencies were concerned; the problem was still there as far as global variables were concerned). I could also build all build variants in one command line. Not so with CMake, at least not that I can determine.

--
Brian J. Davis
https://lists.boost.org/boost-cmake/2010/03/0895.php
I am trying to make a program that opens a file called input.txt and sums all of the numbers in it, along with counting the number of numbers. So far I have:

Code Java:

    import java.util.Scanner;
    import java.io.*;
    import java.io.File;
    import java.io.IOException;

    public class test1 {
        public static void main(String[] args) throws IOException {
            String fileName = "input.txt";
            Scanner fileScan = new Scanner(new File(fileName));
            int sum = 0;
            int count = 0;
            File f = new File(fileName);
            Scanner scanFile = new Scanner(fileName);
            String line;
            while (scanFile.hasNext()) {
                line = scanFile.next();
                count++;
                String integer;
                while (scanFile.hasNext()) {
                    integer = scanFile.next();

I'm really confused on how to add up integers from each line and save it to sum.
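For what it's worth, a working version needs only one Scanner over the File (note that `new Scanner(fileName)` in the code above scans the literal string "input.txt", not the file) and a single loop. The following is a sketch, not the original poster's code; the class and helper names are invented:

```java
import java.io.File;
import java.io.IOException;
import java.util.Scanner;

public class SumFile {

    // Reads whitespace-separated integers from the scanner and
    // returns a two-element array: { sum, count }.
    static int[] sumAndCount(Scanner scan) {
        int sum = 0;
        int count = 0;
        while (scan.hasNextInt()) { // stops cleanly at end of input
            sum += scan.nextInt();  // consume each token exactly once
            count++;
        }
        return new int[] { sum, count };
    }

    public static void main(String[] args) {
        File input = new File("input.txt");
        // Scanner over the File object, not over the file-name string.
        try (Scanner fileScan = new Scanner(input)) {
            int[] result = sumAndCount(fileScan);
            System.out.println("Sum: " + result[0] + ", Count: " + result[1]);
        } catch (IOException e) {
            System.out.println("Could not open " + input + ": " + e.getMessage());
        }
    }
}
```

Using hasNextInt()/nextInt() instead of next() avoids parsing strings by hand and makes the nested second loop unnecessary.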
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/5776-im-so-confused-counting-summing-printingthethread.html
Sass, Compass, and the Rails 3.1 Asset Pipeline

Note: Today's guest post is courtesy of Wynn Netherland, CTO at Pure Charity in Dallas and co-author of Sass and Compass in Action, a CSS book. He's a web designer and a front-end developer, as well as a CSS geek. Check him out on GitHub and Twitter.

TL;DR: Compass and the Rails 3.1 Asset Pipeline can play nice together to improve how you create, maintain, and serve stylesheet assets, but you may need to tweak your setup to boost speed during your stylesheet development.

Since the advent of dynamic web pages, there has remained a dichotomy in web development. Developers tend to keep server-side code and templates in one spot and all the other static images, stylesheets, and client-side scripts in another. Agency-driven, split-team workflows in which designers hand off static pages to web developers to break them into server-side templates have reinforced this model. We even call our stylesheets, images, and scripts assets, as if they were meant to be deposited into our /public folder, not to be handled with our application code.

The Rails 3.1 Asset Pipeline breaks down this firewall and makes static assets first-class citizens in your web application. While it also handles images, CoffeeScript, and many other content types, let's explore what the asset pipeline does for stylesheet authors.

What does the Pipeline do for me?

For serving CSS, the asset pipeline performs a few key functions.

Concatenation. Stylesheets can be stitched together from multiple source files, reducing the number of HTTP requests for CSS assets. The default application.css in Rails 3.1 is a manifest that tells the pipeline which source files to concatenate and serve to the browser:

    /*
     *= require_self
     *= require_tree .
     */

By default, all stylesheets will be included.
Sprockets, the gem that powers the pipeline, allows for more granular control:

    /*
     *= require vars
     *= require typography
     *= require layout
     */

Just like view partials, we now get the benefit of organization by splitting up large stylesheets across several source files.

Minification. Whitespace and comments are removed from stylesheets before getting served up to the browser, reducing file size.

Fingerprinting. In Rails 3.1, cache-busting fingerprinting is baked right into the asset filename instead of relying on querystring parameters, which have drawbacks in multi-server environments and some transparent proxy servers.

Pre-processing. Perhaps the biggest feature of the asset pipeline is preprocessing. Stylesheets can now be authored in Sass, Less, even ERB (gasp!), introducing dynamic methods to create static stylesheets.

Compass in the pipeline

Many of the above benefits have been available previously with Compass, the stylesheet framework for Sass. It's no surprise that someone recently asked on the Compass mailing list if Compass was somehow obsolete with the arrival of the asset pipeline. While it is true that there is some overlap in functionality between Compass and the asset pipeline, Compass does so much more. In addition to concatenation, minification, and preprocessing via Sass, most importantly Compass provides powerful modules for common stylesheet tasks, including:

- CSS3. The Compass CSS3 module provides Sass mixins for CSS3 features, allowing you to target multiple browsers' vendor namespaces using a single syntax.
- CSS sprites. Reducing the number of HTTP requests is a key factor of web application performance. The sprite helpers will create CSS sprites from a folder of separate assets, handling all the geometry for sprite layout, allowing you to reference icons by name.
- Grid frameworks. Compass has great support for Blueprint, 960.gs, Susy, Grid Coordinates, and more.
- Typography.
Compass makes it easy to create and maintain vertical rhythm, tame lists, and style links with its typography helpers.
- Plugins and so much more. There is a growing list of community plugins that make it easier to package styles for use across projects.

Installation

To use Compass inside the asset pipeline, be sure to add Compass version 0.11.0 (or edge, if you're brave) to the asset group in your Gemfile:

    group :assets do
      gem 'sass-rails',   '~> 3.1.4'
      gem 'coffee-rails', '~> 3.1.1'
      gem 'uglifier',     '>= 1.0.3'
      gem 'compass',      '~> 0.11.0'
    end

Once you've updated your gem bundle, you can now use Compass mixins in your stylesheets.

Changes for Compass users

If you've used Compass prior to Rails 3.1, there are some changes you should be aware of when using Compass in the asset pipeline.

Choose a bundling option. First, decide if you want to let Sprockets manage your stylesheet bundles or simply use Sass partials. You can use Sprockets manifest require and include directives in your Sass files; however, there is a simpler approach to include CSS assets in your stylesheets. Simply rename any CSS files to use the .scss file extension and you can use them as partials in your Sass stylesheets, even if your main stylesheets use the indented syntax and .sass extension.

Watch your assets. If you use the Compass helpers image-url, font-url, etc., note that these helpers will now resolve to /assets instead of /images and /fonts. This means you'll need to put your images and fonts in app/assets/images and app/assets/fonts respectively, instead of their previous homes in the public folder.

Optimizing for development

For all of its features, the asset pipeline comes with tradeoffs. The biggest impact is speed. With the asset pipeline, the entire Rails stack is loaded on each asset request. For small applications, this is trivial.
For larger applications with many gem dependencies (especially those employing Rails engines), where many classes are reloaded with every request in the development environment, assets may render much more slowly when rendered via the asset pipeline.

Tweak your setup

If slow asset compilation is slowing down your front-end development, take a look at the rails-dev-tweaks gem from Wavii. The gem provides the ability to tweak Rails' autoload rules, preventing reloading between requests in development. The default rules skip reloading classes for asset, XHR, and favicon requests:

    config.dev_tweaks.autoload_rules do
      keep :all
      skip '/favicon.ico'
      skip :assets
      skip :xhr
      keep :forced
    end

You should notice a speed bump, but keep in mind that if you have custom Sass functions, you'll want to bounce the server to see those changes.

Precompile engine assets

You can speed up your development environment considerably by precompiling assets as you would in production, effectively bypassing the asset pipeline altogether. While this is encouraged for team members who aren't modifying stylesheets, it doesn't help stylesheet authors much. There is one noticeable exception: Rails engines. Rails engines like Devise, Spree, and Refinery are powerful. Just by configuring a gem and running a generator, you can add authentication, storefront, or CMS features to your Rails app with little effort. If your app uses engines, and you find your development environment begins to slow to a crawl, make sure your engine assets aren't clogging the pipeline. In the case of Spree, we can improve asset performance by precompiling assets with a Rails Rake task:

    rake assets:precompile RAILS_ENV=development RAILS_ASSETS_NONDIGEST=true

This will compile all application assets into /public/assets, allowing Rails to serve them without needing to recompile on each request:

    drwxr-xr-x  17 wynn  staff   578 Nov  2 08:36 .
    drwxr-xr-x   8 wynn  staff   272 Nov  2 08:35 ..
    drwxr-xr-x  16 wynn  staff   544 Nov  2 08:36 admin
    -rw-r--r--   1 wynn  staff   155 Nov  2 08:30 application.css
    -rw-r--r--   1 wynn  staff   143 Nov  2 08:30 application.css.gz
    drwxr-xr-x   7 wynn  staff   238 Nov  2 08:36 creditcards
    drwxr-xr-x   3 wynn  staff   102 Nov  2 08:36 datepicker
    -rw-r--r--   1 wynn  staff  1150 Nov  2 08:19 favicon.ico
    drwxr-xr-x   6 wynn  staff   204 Nov  2 08:36 icons
    drwxr-xr-x   4 wynn  staff   136 Nov  2 08:36 jqPlot
    drwxr-xr-x  15 wynn  staff   510 Nov  2 08:36 jquery-ui
    drwxr-xr-x   3 wynn  staff   102 Nov  2 08:35 jquery.alerts
    drwxr-xr-x   3 wynn  staff   102 Nov  2 08:35 jquery.jstree
    -rw-r--r--   1 wynn  staff  6694 Nov  2 08:36 manifest.yml
    drwxr-xr-x   5 wynn  staff   170 Nov  2 08:36 noimage
    -rw-r--r--   1 wynn  staff  1608 Nov  2 08:19 spinner.gif
    drwxr-xr-x   6 wynn  staff   204 Nov  2 08:36 store

With precompiled assets, our load times are reduced dramatically; however, we won't see changes to our own stylesheets. In the example above, we can simply remove application.css and application.css.gz so that those will be compiled on each request via the asset pipeline. However, having our engine-provided stylesheets precompiled is a big win. As we mentioned above, one of the big gains of the asset pipeline is concatenating our stylesheets into a reduced number of files. If you are serving multiple application-specific stylesheets, consider precompiling all your assets and then removing your current work-in-progress stylesheet from /public/assets.

Share your thoughts with @engineyard on Twitter.
http://blog.engineyard.com/2011/sass-compass-and-the-rails-3-1-asset-pipeline
pg-mem alternatives and similar modules

Based on the "Other" category. Alternatively, view pg-mem alternatives based on common mentions on social networks and blogs.

- Lowdb: small JavaScript database powered by Lodash.
- NeDB: embedded persistent database written in JavaScript.
- Keyv: simple key-value storage with support for multiple backends.
- Mongo Seeding: populate MongoDB databases with JavaScript and JSON files.
- Finale: RESTful endpoint generator for your Sequelize models.
- @databases: query PostgreSQL, MySQL and SQLite3 with plain SQL without risking SQL injection.
- database-js: wrapper for multiple databases with a JDBC-like interface.

README

pg-mem is an experimental in-memory emulation of a postgres database.

❤ It works both in Node and in the browser.

⭐ this repo if you like this package, it helps to motivate me :)

👉 See it in action with pg-mem playground

- [Usage](#-usage)
- [Features](#-features)
- [Libraries adapters](#-libraries-adapters)
- [Inspection](#-inspection)
- [Development](#-development)

📐 Usage

Using Node.js

As always, it starts with an:

    npm i pg-mem --save

Then, assuming you're using something like webpack, if you're targeting a browser:

    import { newDb } from 'pg-mem';

    const db = newDb();
    db.public.many(/* put some sql here */)

Using Deno

Pretty straightforward :)

    import { newDb } from '';

    const db = newDb();
    db.public.many(/* put some sql here */)

Only use the SQL syntax parser

❤ Head to the pgsql-ast-parser repo

⚠ Disclaimer

The SQL syntax parser is home-made. Which means that some features are not implemented, and will be considered as invalid syntaxes. This lib is quite new, so forgive it if some obvious pg syntax is not supported! ...
And open an issue if you feel like a feature should be implemented :)

Moreover, even if I wrote hundreds of tests, keep in mind that this implementation is a best effort to replicate PG. Keep an eye on your query results if you perform complex queries. Please file issues if some results seem incoherent with what should be returned.

Finally, I invite you to read the below section to have an idea of what you can or cannot do.

🔍 Features

Rollback to a previous state

pg-mem uses immutable data structures (here and here), which means that you can have restore points for free!

This is super useful if you intend to use pg-mem to mock your database for unit tests. You could:

1) Create your schema only once (which could be a heavy operation for a single unit test)
2) Insert test data which will be shared by all tests
3) Create a restore point
4) Run your tests with the same db instance, executing a backup.restore() before each test (which instantly resets the db to the state it had after creating the restore point)

Usage:

    const db = newDb();
    db.public.none(`create table test(id text);
                    insert into test values ('value');`);
    // create a restore point & mess with data
    const backup = db.backup();
    db.public.none(`update test set id='new value';`)
    // restore it !
    backup.restore();
    db.public.many(`select * from test`) // => {test: 'value'}

Custom functions

You can declare custom functions like this:

    db.public.registerFunction({
        name: 'say_hello',
        args: [DataType.text],
        returns: DataType.text,
        implementation: x => 'hello ' + x,
    })

And then use them like in SQL: select say_hello('world').

Custom functions support overloading and variadic arguments. ⚠ However, the value you return is not type checked. It MUST correspond to the datatype you provided as 'returns' (it won't fail if not, but could lead to weird bugs).

Custom types

Not all pg types are implemented in pg-mem. That said, most of the types are often equivalent to other types, with a format validation.
pg-mem provides a way to register such types. For instance, let's say you'd like to register the MACADDR type, which is basically a string with a format constraint. You can register it like this:

    db.public.registerEquivalentType({
        name: 'macaddr',
        // which type is it equivalent to (will be able to cast it from it)
        equivalentTo: DataType.text,
        isValid(val: string) {
            // check that it will be this format
            return isValidMacAddress(val);
        }
    });

Doing so, you'll be able to do things such as:

    SELECT '08:00:2b:01:02:03:04:05'::macaddr; -- WORKS
    SELECT 'invalid'::macaddr; -- will throw a conversion error

If you feel your implementation of a type matches the standard, and would like to include it in pg-mem for others to enjoy it, please consider filing a pull request! (tip: see the INET type implementation as an example, and the pg_catalog index where supported types are registered)

Extensions

No native extension is implemented (pull requests are welcome), but you can define kind-of extensions like this:

    db.registerExtension('my-ext', schema => {
        // install your ext in 'schema'
        // ex: schema.registerFunction(...)
    });

Statements like create extension "my-ext" will then be supported.

📃 Libraries adapters

pg-mem provides handy shortcuts to create instances of popular libraries that will be bound to pg-mem instead of a real postgres db:

- pg-native
- node-postgres (pg)
- pg-promise (pgp)
- slonik
- typeorm
- knex
- mikro-orm

See the wiki for more details.

💥 Inspection

Intercept queries

If you would like to hook your database, and return ad-hoc results, you can do so like this:

    const db = newDb();
    db.public.interceptQueries(sql => {
        if (sql === 'select * from whatever') {
            // intercept this statement, and return something custom:
            return [{something: 42}];
        }
        // proceed to actual SQL execution for other requests.
        return null;
    });

Inspect a table

You can manually inspect a table's content using the find() method:

    for (const item of db.public.getTable<TItem>('mytable').find(itemTemplate)) {
        console.log(item);
    }

Manually insert items

If you'd like to insert items manually into a table, you can do it like this:

    db.public.getTable<TItem>('mytable').insert({ /* item to insert */ })

You can subscribe to some events, like:

    const db = newDb();

    // called on each successful sql request
    db.on('query', sql => { });
    // called on each failed sql request
    db.on('query-failed', sql => { });
    // called on schema changes
    db.on('schema-change', () => {});
    // called when a CREATE EXTENSION schema is encountered.
    db.on('create-extension', ext => {});

Experimental events

pg-mem implements a basic support for indices. These handlers are called when a request cannot be optimized using one of the created indices. However, a real postgres instance will be much smarter at optimizing its requests... so when pg-mem says "this request does not use an index", don't take my word for it.

    // called when a table is iterated entirely
    // (ex: 'select * from data where notIndex=3' triggers it)
    db.on('seq-scan', () => {});
    // same, but on a specific table
    db.getTable('myTable').on('seq-scan', () => {});
    // will be called if pg-mem did not find any way to optimize a join
    // (which leads to a O(n*m) lookup with the current implementation)
    db.on('catastrophic-join-optimization', () => {});

FAQ

- Why this instead of Docker? TLDR: It's faster. Docker is overkill.
- What if I need an extension like uuid-ossp? TLDR: You can mock those.
- How to import my production schema in pg-mem? TLDR: pg_dump with the right args.
- Does pg-mem support sql migrations? TLDR: yes.
- Does pg-mem support plpgsql/other scripts/"create functions"/"do statements"? TLDR: kind of...
Detailed answers in the wiki.

🐜 Development

Pull requests are welcome :)

To start hacking this lib, you'll have to:

- Use VS Code
- Install the mocha test explorer with HMR support extension
- npm start
- Reload unit tests in VS Code

... once done, tests should appear. HMR is on, which means that changes in your code are instantly propagated to unit tests. This allows for ultra fast development cycles (running tests takes less than 1 sec).

To debug tests: just hit "run" (F5, or whatever)... VS Code should attach the mocha worker. Then run the test you want to debug.

Alternatively, you could just run npm run test without installing anything, but this is a bit long.
https://nodejs.libhunt.com/pg-mem-alternatives
A guest blog post by Tom Barker, a software engineer, an engineering manager, a professor and an author who can be reached at @tomjbarker. In my previous post I talked about D3 as an option for creating data visualizations in JavaScript on the web. The main issue with getting started with D3 is the learning curve around the depth of low-level control it exposes for developers – D3 isn’t a wrapper to create data visualizations as much as it is a wrapper for line drawing in SVG. On the opposite end of the spectrum we have the JavaScript InfoVis Toolkit. InfoVis was created by Nicolas Garcia Belmonte in 2008 and acquired by Sencha Labs in 2010. You can download InfoVis from. The basic idea behind using InfoVis is to first define your data, and then instantiate a new object from the selection of predefined chart types defined in InfoVis, and finally load your data into the object. Let’s take a look at each of these steps. Defining Your Data The predefined chart types generally expect data to be JSON objects formatted in a specific way. The charts that show aggregates – the area, bar, and pie charts – generally require data to be formatted like this: The color property is an array of strings that specify the colors that should be used for each grouping. The label property is an array of strings that correlate to the labels on the axes for each grouping. And the values property is quite literally an array that holds objects that specify the values for each grouping. The node type charts – the trees, and rgraphs – expect a hierarchical JSON data structure much like you see here: The idea here is that each child then contains an id, a name and an array of children. Let’s take a practical example. We’ll create a JSON object to represent quarterly totals for different groups in a given department or company. These totals can be earnings, bug backlog numbers, or whatever you want them to be. We’ll use this data later in this article to craft several charts. 
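The JSON snippets this section refers to did not survive extraction. Based on the prose description above (a color array, a label array, and a values array of per-grouping objects), a minimal sketch of the aggregate-chart data shape might look like the following; the team names, colors, and numbers are all invented:

```javascript
// Quarterly totals for three groups; shape inferred from the prose:
// one color and one label per series, plus one {label, values} object
// per grouping on the axis. All names and numbers here are made up.
var quarterlyData = {
  color: ['#5588cc', '#55cc88', '#cc8855'],
  label: ['Team A', 'Team B', 'Team C'],
  values: [
    { label: 'Q1', values: [10, 25, 15] },
    { label: 'Q2', values: [12, 22, 18] },
    { label: 'Q3', values: [14, 20, 22] },
    { label: 'Q4', values: [16, 28, 25] }
  ]
};
```

Each entry in values carries one number per series, so the i-th element of every inner values array lines up with the i-th color and label.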
Instantiate Your New Object To create a new chart in InfoVis you must instantiate a new object from the list of predefined chart types that InfoVis supports. At the time of this writing those supported chart types are: - AreaChart - BarChart - PieChart - TreeMap - Force Directed - Radial Graph - Sunburst - Icicle - SpaceTree - HyperTree While this may be a limiting factor if you want to create a chart that is not yet supported, it can also be seen as an opportunity. The InfoVis project is completely open source and available on Github. Anyone is welcome to fork the project and submit updates. To create a new chart, instantiate a new object from the constructor of the chart type that you want to use. The root namespace for InfoVis is $jit – for JavaScript InfoVis Toolkit – so the format would be: For a practical example we could just take the data structure that we created above and apply it to a new bar chart: We declared a new variable barChart and assigned it the return from the constructor call. We instantiated a new object of type $jit.BarChart and passed in a number of configuration parameters. Reading the names of the parameters it should be fairly clear what each one does, and these are just a small sampling of the configuration options available to you. Load Your Data into the Object Note that a necessary step to have your charts rendered to the page is to have an HTML element that exists on the page to draw the chart into. We will pass the id of this element into the injectInto parameter of each chart’s constructor. So for the above bar chart there exists a div on the page that looks like the following: After creating the object, call the loadJSON() method and pass in the data structure. This produces the grouped bar chart: Let’s take the same data structure and pass it into a pie chart by calling the $jit.PieChart constructor: You must make sure that a div exists on the page for the pie chart. You can either re-target the existing div or create a new one. 
For this sample code I opted to create a new one with an id of “pie.” I followed this same pattern for the remainder of the examples in this article. This produces the following pie chart: Finally let’s take that same data and pass it into an area chart by calling the $jit.AreaChart constructor. This produces the area chart shown here: This is great for aggregate type charts, but what if we wanted to see an example of a hierarchical chart? Let’s first populate genealogical information into a JSON data structure. Next we would instantiate a new HyperTree chart and pass in this data structure: This produces the visualization that you can see here: Detailed API documentation for these and the rest of the functionality in InfoVis can be found at. Pros and Cons From the above information it should be clear that by using InfoVis you can create rich, interactive charts. InfoVis is ideal for quickly producing professional looking data visualizations. The limitation with InfoVis is if you want to use a chart type that is not yet supported, like a time series, bubble chart, or spark line. For more details about data visualization, see the resources below from Safari Books Online. Not a subscriber? Sign up for a free trial.
https://www.safaribooksonline.com/blog/2013/11/25/intro-to-infovis-a-data-visualization-primer/
I recently read a blog post by Santiago Valdarrama about developer programming problems. He said that any programmer should be able to solve five problems in an hour or less with a programming language of his choice. My language of choice is PowerShell, which is probably not what he had in mind when he wrote the blog post. So far, I've demonstrated adding numbers in an array three different ways, merging lists together, and solving the Fibonacci series.

Problem 4: The Maximal Combination

Problem 4 is described thusly:

Write a function that given a list of non negative integers, arranges them such that they form the largest possible number. For example, given [50, 2, 1, 9], the largest formed number is 95021.

This one got me thinking. PowerShell has a cmdlet for sorting, but it won't do what I want it to – it will sort numerically or lexically in ascending or descending mode. This isn't flexible enough for my purposes. At the heart of every sort operation is a comparator: two elements in the list are compared, and one is shown to be "before" the other. For example, 8 is before 9 (when sorting ascending).

Now, let's take our problem. In order to compare two numbers, we have to look at their combination. Given 5 and 50, 5 comes before 50 because 550 is bigger than 505 when you push them together. We need to encode that logic in our comparator. Fortunately, we have the full power of .NET at our disposal. Specifically, the System.Collections.ArrayList class has a Sort method that takes a custom IComparer object. Every sort operation ultimately compares two things in the list; the IComparer interface allows us to specify a custom ordering. First of all, we need to get an ArrayList of strings.
Let's take a look at the code in two pieces:

#
# Custom sort routine for the ArrayList
#
$comparatorCode = @"
using System.Collections;
namespace Problem4 {
    public class P4Sort : IComparer {
        public static void Sorter(System.Collections.ArrayList foo) {
            foo.Sort(new P4Sort());
        }

        public int Compare(object x, object y) {
            string v1 = (string)x + (string)y;
            string v2 = (string)y + (string)x;
            return v2.CompareTo(v1);
        }
    }
}
"@
Add-Type -TypeDefinition $comparatorCode

The Compare method is used to compare our two numbers. The Sorter method is a static method that will do an in-place sort of the provided ArrayList using the custom comparator. A quick note: you can only add this type once. You will likely have to restart your PowerShell session if you make changes to it.

Now, let's look at my cmdlet:

function ConvertTo-BiggestNumber {
    [CmdletBinding()]
    [OutputType([string])]
    Param (
        [Parameter(Mandatory=$true, Position=0)]
        [int[]] $Array
    )
    Begin {
        $stringArray = New-Object System.Collections.ArrayList
        # Convert the original list to an ArrayList of strings
        for ($i = 0; $i -lt $Array.Length; $i++) {
            $stringArray.Add($Array[$i].ToString()) | Out-Null
        }
    }
    Process {
        [Problem4.P4Sort]::Sorter($stringArray)
        [string]::Join("", $stringArray.ToArray())
    }
}

This starts by converting the array we are provided into a string ArrayList as required by our custom type, and then sorts it using the .NET Framework Sort method we imported using Add-Type. Finally, we join the array list together. Use it like this:

$a = @( 60, 2, 1, 9 )
ConvertTo-BiggestNumber -Array $a

You will get the output 96021.

That leaves the fifth puzzle. Unfortunately, I was not able to find a neat solution in PowerShell to the fifth problem. I ended up dropping down to embedded C# – my solution came pretty close to the author's solution to the same problem.
Whether this one is a suitable question in an interview is an open debate – I contend that this doesn't actually test programming skills but rather logic skills. Given the problem, you either see how to do it or you don't. If you don't, then no amount of coding skill is going to solve the problem.
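For comparison (not part of the original post), the same concatenation-comparator trick translates naturally to Python with functools.cmp_to_key:

```python
from functools import cmp_to_key

def convert_to_biggest_number(nums):
    """Arrange non-negative integers so their concatenation is maximal."""
    strs = [str(n) for n in nums]

    # Same rule as the embedded C# IComparer: of the two possible
    # concatenations, the larger one decides which number comes first.
    def compare(x, y):
        if x + y > y + x:
            return -1  # x before y
        if x + y < y + x:
            return 1
        return 0

    strs.sort(key=cmp_to_key(compare))
    return "".join(strs)

print(convert_to_biggest_number([50, 2, 1, 9]))  # -> 95021
```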
https://shellmonger.com/tag/powershell-2/
Board index » Smalltalk

1. Testing, I know I keep doing it but I'm testing :-)
2. anybody knows the Wiki's URL? Thanks :)
3. errors when doing 'make test'
4. Have you ever done 'namespace delete ::'?
5. linking Tasm obj's or lib's to (ugh, I know) Pascal
6. execfile('bla.py'), can bla.py know its full path
7. It's stupid but I don't know fortran :-(((
8. Probably it's easy, but I don't know how to do :(
9. Smalltalk (Enfin) pgmr's needed...
10. SMALLTALK PRO'S WANTED: ParcPlace, Enfin in SoCal and Dallas
11. EASEL'S Object Studio: Synchronicity & Enfin
12. Enfin's email and phone number
http://computer-programming-forum.com/3-smalltalk/d73f3118c27115a2.htm
ldexp - load exponent of a floating-point number

#include <math.h>
double ldexp(double x, int exp);

The ldexp() function computes the quantity x * 2^exp.

An application wishing to check for error situations should set errno to 0 before calling ldexp(). If errno is non-zero on return, or the return value is NaN, an error has occurred.

Upon successful completion, ldexp() returns a double representing the value x multiplied by 2 raised to the power exp. If the value of x is NaN, NaN is returned and errno may be set to [EDOM]. If ldexp() would cause overflow, ±HUGE_VAL is returned (according to the sign of x), and errno is set to [ERANGE]. If ldexp() would cause underflow, 0 is returned and errno may be set to [ERANGE].

The ldexp() function will fail if:
- [ERANGE] - The value to be returned would have caused overflow.

The ldexp() function may fail if:
- [EDOM] - The argument x is NaN.
- [ERANGE] - The value to be returned would have caused underflow.

No other errors will occur.

See also: frexp(), isnan(), <math.h>.

Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/007908775/xsh/ldexp.html
10 January 2011 23:11 [Source: ICIS news]

HOUSTON (ICIS)--Dow Chemical plans to restart polyethylene (PE) units at its St. Charles, Louisiana, site.

The restart of the PE units would be contingent on the successful restart of the […]

“Dow does not expect to be able to return to normal shipping volumes until at least March 1, 2011,” the letter said.

Dow has not declared force majeure for its plastics business, but the company may reassess the situation as it learns more about the impact of the power failure on its PE units.

Company officials declined to disclose production capacity figures for PE, but ICIS plants and projects listed the capacity at 815,000 tonnes/year.
http://www.icis.com/Articles/2011/01/10/9424600/Dow-expects-St-Charles-Louisiana-PE-restart-in-3-4-weeks.html
Reactive programming

Reactive programming is all about programming with asynchronous data streams. There are several libraries which do that in Scala: akka-stream, scalaz-stream, Play's iteratees, and RxScala. Reactive programming allows developers to create, consume, and manipulate streams of data. Streams are non-blocking and provide control over thread and memory consumption.

http4s

According to the http4s docs, it is a minimal, idiomatic Scala interface for HTTP and Scala's answer to Ruby's Rack, Python's WSGI, Haskell's WAI, and Java's Servlets. An http4s service transforms a Request into a scalaz Task[Response]. It integrates with scalaz-stream, and the bodies of requests and responses are represented as streams of bytes. http4s also provides an asynchronous HTTP client which works similarly to the server.

Example

I implemented a simple server and a client. The server allows a file to be downloaded and uploaded. The client downloads a file and immediately uploads it back to the server using streaming. The whole file is never stored in memory.

import scalaz.stream._
import scalaz.concurrent.Task
import scodec.bits.ByteVector

trait Streaming {
  val bufferSize = 128

  def read(path: String): Process[Task, ByteVector] =
    Process
      .constant(bufferSize).toSource
      .through(io.fileChunkR(path, bufferSize))

  def write(path: String, data: EntityBody) =
    (data to io.fileChunkW(path)).run
}

The Streaming trait has two functions. The read function is for reading a file, and it creates a stream of bytes. The write function takes a body stream as an argument and pipes it to a stream which writes bytes to a file.
import org.http4s._
import org.http4s.dsl._
import org.http4s.server._

object StreamingServer extends App with Streaming {

  val service = HttpService {
    case GET -> Root / "download" =>
      Ok(read("avatar.png"))
    case req @ POST -> Root / "upload" =>
      write("uploaded", req.body).flatMap(_ => Created())
  }

  BlazeBuilder.bindHttp(8080)
    .mountService(service, "/")
    .run
    .awaitShutdown()
}

StreamingServer exposes two HTTP services: GET /download and POST /upload. The first one just reads the content of a file. The second one streams the body into a file. Both services use the functions defined in the Streaming trait.

import org.http4s.dsl._
import org.http4s.client._

object StreamingClient extends App {
  client
    .prepare(GET(uri("")))
    .flatMap(response =>
      client.prepare(POST(uri(""), response.body))
    ).run
}

The client executes the download request and then streams the content into the upload endpoint. Thanks to http4s this is trivial.

See data-streaming-example for source code and instructions on how to build and run it.

Summary

http4s provides a nice DSL which integrates seamlessly with scalaz-concurrent and scalaz-stream. If you are looking for alternative HTTP libraries, you should take a look at http4s.

Resources

- Http4s examples
- Introduction to scalaz-stream
- Introduction to Reactive Programming
https://immutables.pl/2016/01/16/data-streaming-using-http4s-and-scalaz-stream/
Minimize the sum of node values by filling a given empty tree such that each node is the GCD of its children

Introduction

In this article we'll look at a problem that we solve using binary trees. Binary trees have many intriguing properties, and we'll learn about some of them in this blog. Binary tree problems are common in coding interviews and programming competitions.

A tree is made up of nodes, which are individual objects. Edges link nodes together. Each node has a value or piece of data and may or may not have child nodes. The root refers to the tree's first node.

Let's start with the problem statement.

Problem Statement

Given a binary tree consisting of N nodes with no values and an integer X that represents the value of the root node, the aim is to calculate the minimum sum of all the node values of the tree such that the value of each node equals the GCD of its children. Aside from that, no two siblings can have the same value.

GCD Concept

The greatest common divisor (GCD) of two or more numbers is the largest number that exactly divides them. It is also known as the highest common factor (HCF). For example, the greatest common factor of 15 and 10 is 5, because both integers can be divided by 5.

Approach

Both of the children can have the values X and 2*X, where X is the parent's value, to minimize the total. If a node has only one child, its value will be the same as its parent node's. The depth of each node's subtrees will be examined to determine which child should have the value X and which 2*X to obtain the least total. The child with the greater depth will be assigned the value X so that it can be propagated to more of its descendants, while the other will be assigned the value 2*X.
- Find each node's depth and save it in a map, using the node's address as the key.
- Now, beginning from the root node, perform the DFS traversal, assigning a value of X to whichever child has a greater depth than its sibling in each call. In every other case, use the value 2*X.
- While backtracking, find the sum of both the left and right child values and return the total sum, i.e. the sum of the left child, right child, and current node values in each call.
- Print the result returned from the DFS call as the smallest sum feasible after completing the preceding steps.

C++ implementation

#include <bits/stdc++.h>
using namespace std;

class Node {
public:
    int data;
    Node *left, *right;

    Node(int data) {
        this->data = data;
        left = NULL;
        right = NULL;
    }
};

class Tree {
public:
    unordered_map<Node*, int> depth;

    int findDepth(Node* current_data) {
        int max_start = 0;
        if (current_data->left) {
            max_start = findDepth(current_data->left);
        }
        if (current_data->right) {
            max_start = max(max_start, findDepth(current_data->right));
        }
        return depth[current_data] = max_start + 1;
    }

    int dfs(Node* current_data, bool flag, int parValue) {
        if (parValue != -1) {
            if (flag)
                current_data->data = parValue;
            else
                current_data->data = parValue * 2;
        }
        int l = 0, r = 0;
        if (current_data->left && current_data->right) {
            if (depth[current_data->left] > depth[current_data->right]) {
                l = dfs(current_data->left, 1, current_data->data);
                r = dfs(current_data->right, 0, current_data->data);
            } else {
                l = dfs(current_data->left, 0, current_data->data);
                r = dfs(current_data->right, 1, current_data->data);
            }
        } else if (current_data->left) {
            l = dfs(current_data->left, 1, current_data->data);
        } else if (current_data->right) {
            r = dfs(current_data->right, 1, current_data->data);
        }
        return l + r + current_data->data;
    }

    int minimumSum(Node* root) {
        findDepth(root);
        return dfs(root, 1, -1);
    }
};

int main() {
    // Tree construction (reconstructed to mirror the Python example): X = 2
    int X = 2;
    Node* root = new Node(X);
    root->left = new Node(-1);
    root->right = new Node(-1);
    root->left->left = new Node(-1);
    root->left->right = new Node(-1);
    root->left->right->left = new Node(-1);
    root->left->right->right = new Node(-1);
    root->left->right->right->left = new Node(-1);

    Tree t;
    cout << t.minimumSum(root);
    return 0;
}

Output

22

Java implementation

import
java.util.*;

public class Main {
    static class Node {
        public int data;
        public Node left, right;

        public Node(int data) {
            this.data = data;
            left = right = null;
        }
    }

    static HashMap<Node, Integer> depth = new HashMap<>();

    static int findDepth(Node current_data) {
        int max_data = 0;
        if (current_data.left != null) {
            max_data = findDepth(current_data.left);
        }
        if (current_data.right != null) {
            max_data = Math.max(max_data, findDepth(current_data.right));
        }
        depth.put(current_data, max_data + 1);
        return depth.get(current_data);
    }

    static int dfs(Node current_data, int flag, int parValue) {
        if (parValue != -1) {
            if (flag == 1)
                current_data.data = parValue;
            else
                current_data.data = parValue * 2;
        }
        int l = 0, r = 0;
        if (current_data.left != null && current_data.right != null) {
            if (depth.containsKey(current_data.left)
                    && depth.containsKey(current_data.right)
                    && depth.get(current_data.left) > depth.get(current_data.right)) {
                l = dfs(current_data.left, 1, current_data.data);
                r = dfs(current_data.right, 0, current_data.data);
            } else {
                l = dfs(current_data.left, 0, current_data.data);
                r = dfs(current_data.right, 1, current_data.data);
            }
        } else if (current_data.left != null) {
            l = dfs(current_data.left, 1, current_data.data);
        } else if (current_data.right != null) {
            r = dfs(current_data.right, 1, current_data.data);
        }
        return (l + r + current_data.data);
    }

    static int minimumSum(Node root) {
        findDepth(root);
        return dfs(root, 1, -1);
    }

    public static void main(String[] args) {
        // Tree construction (reconstructed to mirror the Python example): X = 2
        int X = 2;
        Node root = new Node(X);
        root.left = new Node(-1);
        root.right = new Node(-1);
        root.left.left = new Node(-1);
        root.left.right = new Node(-1);
        root.left.right.left = new Node(-1);
        root.left.right.right = new Node(-1);
        root.left.right.right.left = new Node(-1);
        System.out.print(minimumSum(root));
    }
}

Output

22

Python implementation

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

depth = {}

def findDepth(current_data):
    mx = 0
    if current_data.left is not None:
        mx = findDepth(current_data.left)
    if current_data.right is not None:
        mx = max(mx, findDepth(current_data.right))
    depth[current_data] = mx + 1
    return depth[current_data]

def dfs(current_data, flag, parValue):
    if parValue != -1:
        if flag:
            current_data.data = parValue
        else:
            current_data.data = parValue * 2
    l, r = 0, 0
    if current_data.left is not None and current_data.right is not None:
        if (current_data.left in depth and current_data.right in depth
                and depth[current_data.left] > depth[current_data.right]):
            l = dfs(current_data.left, 1, current_data.data)
            r = dfs(current_data.right, 0, current_data.data)
        else:
            l = dfs(current_data.left, 0, current_data.data)
            r = dfs(current_data.right, 1, current_data.data)
    elif current_data.left is not None:
        l = dfs(current_data.left, 1, current_data.data)
    elif current_data.right is not None:
        r = dfs(current_data.right, 1, current_data.data)
    return l + r + current_data.data

def minimumSum(root):
    findDepth(root)
    return dfs(root, 1, -1)

X = 2
root = Node(X)
root.left = Node(-1)
root.right = Node(-1)
root.left.left = Node(-1)
root.left.right = Node(-1)
root.left.right.left = Node(-1)
root.left.right.right = Node(-1)
root.left.right.right.left = Node(-1)
print(minimumSum(root))

Output

22

Complexities

Time complexity: O(N)
Reason: Computing the depths and performing the DFS traversal each visit every node exactly once.

Auxiliary space: O(N)
Reason: The map stores the depth of each node, and the recursion stack can grow to the height of the tree.

Frequently Asked Questions

- In a tree, what is a node?
A tree is made up of nodes, which are individual objects. Edges link nodes together. Each node has a value or piece of data and may or may not have child nodes. The root is the tree's very first node.

- What is tree programming?
A tree is a type of hierarchical data structure that is made up of nodes. Nodes represent values and are connected by edges. The root node is the topmost node in the tree; the tree grows from it, so it has no parent.

- What is DFS in a graph?
The depth-first search (DFS) technique is used to traverse or explore data structures such as trees and graphs.
The algorithm starts from the root node (in the case of a graph, any random node can be used as the root node) and explores each branch as far as possible before backtracking.

- What is the degree of a node in a tree?
The degree of a node is the total number of subtrees attached to that node. A leaf node's degree is zero. The degree of the tree is the maximum degree among all the nodes in the tree.

- Is it possible for a tree node to have two parents?
Yes, you may have "children" and "parents" in the same node. Because the graph is then no longer tree-structured, you won't be able to use a TreeModel; instead, you'll need to use a GraphLinksModel.

Conclusion

In this article, we have solved a binary tree problem in which we had to fill the provided empty tree with nodes that are the GCD of their children while minimizing the total of the node values.

If you want to learn more, attempt our Online Mock Test Series on CodeStudio now!

Happy Coding!
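As a quick sanity check (not from the original article) of why the greedy assignment works: if one child gets X and the other 2*X, the parent's value X is exactly the GCD of its children, and the two siblings always differ:

```python
from math import gcd

def check_assignment(x):
    """Verify the article's claim for a parent with value x > 0."""
    child_a, child_b = x, 2 * x        # greedy values given to the two children
    assert gcd(child_a, child_b) == x  # parent equals the GCD of its children
    assert child_a != child_b          # no two siblings share a value
    return child_a + child_b           # contribution of this pair to the sum

print(check_assignment(2))  # -> 6
```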
https://www.codingninjas.com/codestudio/library/minimize-the-sum-of-node-values-by-filling-a-given-empty-tree-such-that-each-node-is-gcd-of-its-children
block_write - write blocks of data to a file

#include <sys/types.h>
#include <sys/disk.h>

int block_write( int fildes, long block, unsigned nblock, void *buf );

The block_write() function writes nblock blocks of data to the file associated with the open file descriptor fildes, from the buffer pointed to by buf, starting at block number block (blocks are numbered starting at 1). A block is 512 bytes long, as defined by _BLOCK_SIZE.

This function is not only useful for direct updating of raw blocks on a block special device (for example, raw disk blocks) but may also be used for high-speed updating of database files, for example. (The speed gain comes from the combined seek/write implicit in this call, and from the ability to transfer more than the write() function's limit of INT_MAX bytes at a time.)

If nblock is zero, block_write() returns zero, and has no other results.

Upon successful completion, the block_write() function returns the number of blocks actually written to the disk associated with fildes. This number will never be greater than nblock. The value returned may be less than nblock if one of the following occurs: The implementation limit is specified by the macro _MAX_BLOCK_OPS in the file <sys/disk.h>.

If a write error occurs on the first block and one of the sync flags is set, block_write() returns -1 and sets errno to EIO. If one of the sync flags is set, block_write() doesn't return until the blocks are actually transferred to the disk. If neither of the flags is set, block_write() causes the blocks to be placed in the cache and scheduled for writing as soon as possible, but returns before the write takes place.

Upon successful completion, block_write() returns an integer indicating the number of blocks actually written. Otherwise, it returns -1, sets errno to indicate the error, and the contents of the buffer pointed to by buf are left unchanged.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/disk.h>
#include <limits.h>
#include <sys/fsys.h>

/*
 * NOTE: This will modify data on the disk, so you
 * might wish to consider the effects before
 * running this sample code.
 */

char buf[_BLOCK_SIZE];

int main( void )
{
    int fd;
    int num_blocks;

    /* open a disk for writing */
    fd = open( "/dev/fd0", O_WRONLY );

    num_blocks = block_write( fd, 1L, 1, buf );
    if( num_blocks == -1 ) {
        perror( "block_write failed" );
        return (EXIT_FAILURE);
    }
    return (EXIT_SUCCESS);
}

Classification: QNX

See also: block_read(), errno
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/block_write.html
Need help extracting the schema from an Avro file in Python

Hi, nice question. What I use daily is Python v3.4 and Avro v1.7.7 (what do you use?). It is quite simple, actually. For the schema file, I suggest you use the following code I have written to help you print out the generated schema:

reader = avro.datafile.DataFileReader(open('file_name.avro', "rb"), avro.io.DatumReader())
schema = reader.meta
print(schema)

I hope this helps, let me know if you need anything else. Cheers!
https://www.edureka.co/community/35249/need-help-extracting-schema-make-use-for-avro-file-in-python
How can I access the methods in the Student class? When I print myList it uses the toString in the Student class; I'm not sure how to access the other methods. Is that even possible?

PS: I already have the try and catch in my program, I just didn't post them here to keep the code shorter.

public class DatabaseAccess<T> {
    public T[] database;
    public ArrayList<T> myList = new ArrayList<T>();
    public ArrayList<T> testlist = new ArrayList<T>();

    public void userInterface() {
        int count = 1;
        for (T i : myList) {
            System.out.println(count++ + ": " + i);
        }
    }

    public void readDatabase() {
        ObjectInputStream in = new ObjectInputStream(new FileInputStream("grocery.bin"));
        database = (T[]) in.readObject();
        for (int i = 0; i < database.length; i++) {
            myList.add(database[i]);
        }
        myList.add((T) "\n");
        myList.add((T) "\tStudents:");

        ObjectInputStream in1 = new ObjectInputStream(new FileInputStream("students.bin"));
        database = (T[]) in1.readObject();
        for (int i = 0; i < database.length; i++) {
            myList.add(database[i]);
        }
    }
}

public class Student implements Serializable, Comparable {
    private String name, id, address;
    private double gpa;

    public Student(String name, String id, double gpa) {
        this.name = name;
        this.id = id;
        this.gpa = gpa;
    }

    public int compareTo(Object object) {
        Student student = (Student) object;
        if (this.gpa < student.getGpa()) return -1;
        else if (this.gpa > student.getGpa()) return 1;
        else return 0;
    }

    public String getName() { return name; }
    public String getId() { return id; }
    public double getGpa() { return gpa; }

    public String toString() {
        return String.format("%16s%10.4s%8s", name, id, gpa);
    }
}
Now, you COULD use something like the following, but it is a horrible idea and absolutely reeks of bad design (no offense meant - I just don't want to give the impression that this code is at all okay or that you should use it - just that it is technically possible): for (T t : myList) { if (t instanceof Student) { Student student = (Student) t; student.getGpa(); } else if (t instanceof Food) { ... } } As far as DatabaseAccess is concerned, it's not a Student. It's a random object of type T which we know basically nothing about. If you can't create a common interface between the Student and Food to do what you want, you have to ask yourself why this DatabaseAccess needs to know the specifics of the classes and figure out what the design should really look like to separate things more sanely. Are you doing some custom printing? Maybe the formatting should be done within the Student/Food class. Are you, e.g. processing user input and manipulating data from the objects in the database? Separate the concerns of database access and data manipulation. Changing a GPA shouldn't necessarily require you to know whether the data is being stored in an ObjectOutputStream or an Oracle Database. So, have the DatabaseAccess deal with the database, and define some other class that works with a DatabaseAccess<Student> through some defined interface, and have that handle the Student-specific manipulation. Without knowing precisely what you're trying to do, it's hard to give solid advice beyond: if you think you need to do what you're asking, you might want to step back and think if there might be a better way to approach the problem so that you don't.
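To make the "common interface" suggestion concrete, here is a minimal hypothetical sketch (the interface name Describable and the method names are assumptions, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class Demo {

    // Hypothetical common interface, so DatabaseAccess never needs instanceof.
    interface Describable {
        String describe();
    }

    static class Student implements Describable {
        private final String name;
        private final double gpa;

        Student(String name, double gpa) {
            this.name = name;
            this.gpa = gpa;
        }

        @Override
        public String describe() {
            return name + " (GPA " + gpa + ")";
        }
    }

    // The bound "T extends Describable" lets us call describe() on any element
    // without knowing (or caring) about the concrete type.
    static class DatabaseAccess<T extends Describable> {
        private final List<T> myList = new ArrayList<>();

        void add(T item) {
            myList.add(item);
        }

        void userInterface() {
            int count = 1;
            for (T item : myList) {
                System.out.println(count++ + ": " + item.describe());
            }
        }
    }

    public static void main(String[] args) {
        DatabaseAccess<Student> db = new DatabaseAccess<>();
        db.add(new Student("Ada", 3.9));
        db.userInterface();  // prints "1: Ada (GPA 3.9)"
    }
}
```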
https://codedump.io/share/NtqdTgnNRX25/1/generics-and-arraylist
Release Notes

There is a refresh of the existing Gold release of Cascades, and this refresh is available through the BlackBerry 10 Native SDK update site. You can find more information about new features, known issues, and fixed issues (including installation issues for the BlackBerry 10 Native SDK) in the Native SDK release notes. We also have an upgrade guide available that can help you transition your beta 4 app to the release version for BlackBerry 10. Check out the upgrade guide here.

This document contains the following sections:

Highlights - Cascades

New project wizard

Creating a new Cascades project in the QNX Momentics IDE has been improved. You can now select the type of application to create, and then choose a template that's most appropriate for that type of app.

Active text styling

You can now use rich text markup to apply styles to active text in your apps. For example, you can add links that are generated automatically in text content, such as URLs, email addresses, and phone numbers.

New in this release

Cascades UI framework

Built-in controls

- Text controls - You can now set restrictions on the length of text that users can type in text input controls, such as text fields and text areas. Also, you can turn off the interpretation of rich text in text controls, if you don't want your apps to support rich text.

Other features

- Virtual keyboard - You can use APIs to prevent the virtual keyboard from being displayed in your apps. This feature can be useful if your app handles text input in a special way that doesn't use the virtual keyboard.
- Active text - You can now use rich text markup to apply styles to active text in your apps. For example, you can add links that are generated automatically in text content, such as URLs, email addresses, and phone numbers.
- Resource loading - You can now specify the loading effect on an ImageView control.
This can be useful if you want the asset resources in your app to have the same non-blocking loading behavior as content resources have. Cascades doesn't make any distinction between assets that are accessed using relative paths and assets that are accessed using absolute paths.

Cascades platform APIs

- Dialog boxes, prompts, and toasts - Several additions and improvements have been made to these controls, including the following:
  - You can use the emoticonsEnabled property to display smiley faces as emoticons in a SystemDialog, SystemCredentialsPrompt, SystemPrompt, or SystemProgressDialog.
  - You can associate an action with the return key in a SystemDialog, SystemListDialog, SystemPrompt, SystemCredentialsPrompt, or SystemProgressDialog using the returnKeyAction property.
  - You can add a custom button to a SystemPrompt, SystemCredentialsPrompt, SystemDialog, SystemProgressDialog, or SystemListDialog.
  - The buttonAreaLimit property has been added to the SystemListDialog, SystemPrompt, SystemCredentialsPrompt, and SystemProgressDialog classes.

Other features

- New project wizard - Creating a new Cascades project in the QNX Momentics IDE has been improved. You can now select the type of application to create, and then choose a template that's most appropriate for that type of app. Also, the IDE automatically detects when your device OS version and SDK version are different and displays a warning. You can then choose to find an SDK version that matches your device OS version. QML Preview compatibility checks are also performed to give you the best possible performance in the QML Preview.
- Improving build performance of native projects - By default, projects are set up to build one file at a time. To significantly improve build performance, you can use the QNX Momentics IDE parallel build feature. To enable the parallel build feature:
  1. In the C/C++ perspective, right-click the project name and select Properties.
  2. Click C/C++ Build and then click the Behavior tab.
  3. Select the parallel build option.

Known limitations

Issues with sending output to the console

In this release, the qDebug() and console.log() functions (and related functions qWarning(), qFatal(), and qCritical()) may not send output to the console as expected. As a workaround, you can define a custom message handler (for example, a function called myMessageOutput) and register it using qInstallMsgHandler():

#ifndef QT_NO_DEBUG
qInstallMsgHandler(myMessageOutput);
#endif

After this, qDebug() calls will be logged to the console. However, you should keep in mind that this approach may adversely affect the performance of your applications.

QML preview considerations

There are several considerations you should keep in mind as you work with the QML preview in the QNX Momentics IDE.

Using QML preview with multiple files open

Depending on the system resources available, users may experience an unexpected termination of the QNX Momentics IDE when keeping more than 10 QML files open in the editor while QML preview is enabled. Workaround: Keep the number of QML files open at any given time under 10, or disable the QML preview feature in Preferences.

QML preview on Linux

QML preview functionality is not working on Linux; it only works on Windows and Mac OS at the moment. Linux users can deploy their applications to a device or simulator to test the look of their QML components.

Note that this issue might occur even with newer graphics cards and the latest drivers. In this case, you should report this issue to Research In Motion in as much detail as possible.

QML preview with C++ objects

The QML preview can load and render components based on the Cascades plug-in only (in bb.cascades 1.0). Objects that are registered in C++ in your project are not understood by the QML preview and cannot be loaded and rendered.

CPU usage with QML preview

If you're using looping animations in your QML code with QML preview enabled, the QNX Momentics IDE may use excessive CPU.
Viewers in the invocation framework Using the invocation framework, viewer-related classes (for example, InvokeViewerMessage and InvokeViewerRequest) and methods have been deprecated. The viewer-related classes and methods have been replaced by Card classes and methods. For more information on the Card API, see the Cards documentation.. You can learn more about this issue, including a proposed solution with a complete sample app, by reading the BlackBerry 10 Cascades MapView blog post. Fixed in this release Cascades UI framework6621469 Some Cascades apps could enter a state where they no longer rotated properly, and they might potentially have stopped responding and required a reset to recover.6599724 In some cases, list items disappeared when multiple items were added to a list.6569800 When you used a StandardListItem in a list, sometimes the list item retained its activated visual state (by changing color) even when the item was not active.6564609 If you double-tapped a fine cursor for text selection and then moved it before the fine cursor disappeared, the application might have terminated unexpectedly.6559969 If you had a ListView with multiple top-level items, some of the top-level items disappeared when sub-items were added to the top-level items.6520040 If you tapped and held on a fine cursor, the context menu was not invoked.6507255 Switching from English to another language didn't translate the context menu during a cut, select, copy, or dismiss action.6418535 If your application opened a Sheet and closed it using a timer, memory from the sheet might not have been freed after it was closed.6403109 If you were using your device in Arabic and you tapped and held to open a menu, text in the menu might have been truncated.6374811 Sometimes when selecting text, the text selection handles on either side of the text didn't follow your finger when moved.6304350 If you were using your device in portrait orientation and then switched to landscape orientation, your 
device might not have displayed as expected.
6031114 (5555059): When returning from a peek transition, the TitleBar from the previous view sometimes incorrectly appeared in the returned view.
6019786: Setting the imageSource property on any image control (such as ImagePaintDefinition) took longer than expected (around 3 ms). This issue reduced the creation time of the application scene.
6001122: If you pressed and held on the whitespace of a StandardListItem, the context menu was not displayed.
5984452: The space between the title and description on a StandardListItem was smaller than intended.
5960667: Cascades applications logged statements in a .log file instead of a .slog file.
5384092: Right-to-left language text started on the left instead of the right when using a TextField or TextArea.
3556364: When using a GridListLayout object, calling the setSpacingAfterHeader() function did not add spacing after the header. If the spacingAfterHeader property was connected to a slot function, a signal was not emitted when the setSpacingAfterHeader() function was called, as it should have been.

Cascades platform APIs

6601448: When using application targets in the invocation framework, an app may have been able to incorrectly modify the targets of other applications on the device, even if the app was not the owner of the target.
6573226: When you used an SqlConnection object and tried to add an empty QVariantList, the empty list reused the previous list item's data and was added to the SqlConnection.
The empty list data should not have been added.
6563298: The BatteryChargingState class returned the incorrect charging state when the charger was unplugged.
6531602: The appearance of the barcode detector control is now consistent with that of other controls.
6520029: The Marker class is now available.
6520021: The AttachmentDownloadStatus class is now available.
6519357: If you were using the InvokeTargetReply class and the error() function returned a value other than None, the targetType() function returned values outside the defined range.
6518347: If your application used the Card class and a SystemDialog control, and called SystemDialog.exec() to display the dialog, your application could sometimes have terminated unexpectedly.
6515035: The NFC secure element APIs now support, and enforce where appropriate, the GlobalPlatform ACF format. If you were using certain Gemalto SIMs, you may have found that you received the error response NFC_RESULT_SE_REQUEST_REJECTED from the APIs due to a "deny all" rule in the GlobalPlatform ACF applet that Gemalto loads onto their SIMs.
You should explicitly whitelist any particular AID that you want your application to have access to by providing an "any application can access AID X.Y.Z" rule.
6513573: If your application had the post_notification permission and the application ID, <id>, defined in the bar-descriptor.xml file was longer than 50 characters, the notification service terminated unexpectedly.
6505004: The CalendarEventPicker and CalendarEventPickerError classes are now available.
6485812: When using certain Qt classes (such as QTimer), Cascades may not have properly unblocked these classes when an application returned to a fullscreen state from a thumbnailed state.
6481846: In some cases, you might have seen a %20 in a title instead of a space.
6480612: If you were using the SqlDataAccess class, calling the execute() function on a table that didn't exist returned an error message that lacked information about the non-existence of that table. The message "No query/ Unable to fetch row" was returned.
6458073: If you deleted an app whose icon was dimmed, the app might not have been deleted and might have reappeared when you restarted your device.
6407624: If the user opened the application menu and selected a HelpActionItem, the user couldn't subsequently open the application menu again.
6403809: Sheet controls that were opened from the application menu could be displayed only once.
6162943: If the language on the device was set to Arabic, the Calculator app showed a 'k' as the decimal point.
4393245: The JsonDataAccess class could not handle JSON files that were saved in UTF-8 with a byte order mark (BOM).

Cascades Builder

6384012: If you created a Cascades project with a name that included a hyphen (-), the app could not be deployed to a device.
6302191: Using the QNX Momentics IDE with older/unsupported video drivers may have resulted in the IDE terminating unexpectedly without any error. Under some conditions related to QML preview, the IDE may have terminated unexpectedly without any error.
Known issues

Cascades UI framework

6460151: When you attempt to select text with two fingers, sometimes the text isn't selected.
5988986: When specifying the SubmitKey value for a TextField, the "Submit" text is not displayed on the Enter key.
5328837: When text is entered into a TextField that is longer than the TextField, there is no ellipsis at the beginning of the text.
5874 (3637004): When using a HeaderListItem with layoutDirection set to LeftToRight or RightToLeft, the parent ListView has infinite scrolling and non-header items are hidden.
3139275: Default-sized TextField controls in Container controls with LeftToRight layouts can push some Label controls out of view by exceeding the bounds of the Container.

Cascades platform APIs

6761803: When using cards in an app and the app is run on the BlackBerry 10 Device Simulator, rotating the simulator may cause the UI to become distorted.
6738777: When using the invocation framework and you try to query another application's filter (for example, NFC), the filter query may be successful. You should not be able to query another application's filter and should receive a TargetNotOwned error.
6721812: When an app uses cards and the cards are peeked at many times in quick succession, the device may become unresponsive or the app may terminate unexpectedly.
6644302: When you use the HardwareInfo class to query for device information, the modelName() function returns the incorrect model name.
6609560: When using the windowGroupFullscreen() signal in the Application class, the documentation does not note that the window group ID of a window will not be available until a window message has been processed by the application.
6608383: When you use the Message API to read the list of SMS messages on the device, and you try to read the body of the messages, the body can't be accessed.
The message body always appears as an empty string.
6602144: When using the invocation framework, if you set the file:// scheme in your URI request and invoke a target with the URI, an incorrect error may be reported. A Bad Request error is expected, but an Internal Error may be reported instead.
6593094: When you add a custom URI to an InvokeTargetFilter and then retrieve the list of custom attributes, the size of the custom attribute list is reported as 0. The documentation does not specify that the returned size will be 0.
6576301: When using a dialog in your app and the dialog request cannot be completed, the appropriate finished signals are not emitted. For example, this situation might occur when a PPS channel cannot be opened.
6572983: If you run an app in one perimeter (either the enterprise perimeter or the personal perimeter) and try to invoke another app that's in the opposite perimeter, the error that you receive is incorrect. You receive a Bad Request error, and the expected error is No Target Found.
6557112: Progress toasts sometimes time out incorrectly when show() or exec() is called repeatedly.
6516010: If you change the orientation of the device as a card is being loaded, sometimes the card appears distorted.
6494931: The MapView API doesn't work on the BlackBerry 10 device simulator. The MapView API uses port 8128 to download its MapData. If your application is rendering a black screen only, you should ensure your firewall rules allow TCP traffic on port 8128.
6465253: If you're using the QGeoPositionInfoSource class with the timeoutUpdated() signal connected to a slot function that prints the value of the replyErrorCode property, when the device running the app is stationary, the WarnStationary error code is not given. Instead, the value None is given.
6430427: The BatteryInfo API returns -1 as the value of the full charge capacity the default value of false.
6218365: The move() function of QListDataModel moves items incorrectly. of the BlackBerry 10 Native SDK.
If invocation requests are pending, sometimes peeking and closing a card does not work.
5933235: MenuManager does not create the menu if you set the TargetType to -100.
5868072: Invoking multiple cards while in landscape mode distorts the user interface.
5776714: After prolonged use, the peek functionality sometimes doesn't work correctly and prevents apps from launching and minimizing. Workaround: Restart the device.
When a card is closed, the InvokeManager::peekEnded() signal is emitted twice. It should only be emitted once.
5648426: When using an ArrayDataModel and a QListDataModel, the itemsChanged() signal is emitted when clear() is called on an empty data model.
5522084: On the BlackBerry 10 device simulator, calling secondaryDisplayId() from the DisplayInfo class returns an invalid display, and no secondary display is simulated.
3271117 (5601217): If you call find() on an empty GroupDataModel, a QVariantList containing a single 0 is returned (saying that we found a match at the very first item in the model) instead of an empty QVariantList as expected.
Also, if you have an ArrayDataModel containing QString types of values, a call to indexOf() looking for a QString type value within the range of the model returns an index, even if the QString type value isn't in the model.
6628784: The application profiler does not correctly report the number of function calls in a shared library of a Cascades app.
6628131: When you add a library to a Cascades project, the Add Library wizard contains instructions that are incorrect.
6604815: After building a project in the QNX Momentics IDE, the project may compile correctly but, in some cases, it may show header files as unresolved.
6578783: When rendering QML files with animations, the QML preview may use a large amount of CPU processing power.
6477938: Restarting the QNX Momentics IDE on a Mac may trigger an erroneous "Possible crash detected" message, which can be safely ignored.
6337173: When using the Memory Analysis Tool to analyze Cascades projects, the Memory Problems view takes longer than expected to populate the errors.
6314659: Sometimes when you debug JavaScript code in your application after the first debug run, you receive an error and are unable to debug the app. Also, the JavaScript debugger remains after the debug session has ended. Workaround: Restart the QNX Momentics IDE and close all old debug sessions.
When updating nine-slice margins for an image, changes may not be seen in QML preview. Workaround: Close and reopen the QML file.
The conditional breakpoint doesn't work in both the JavaScript section of a QML file and a JavaScript function of a JavaScript file.
6001024: If a color is set on text, the color shown in the Properties view is incorrect, though the color appears correctly in QML preview and on the device.
5156833: Syntax highlighting (pretty-printing) of some Qt classes isn't displayed properly in the IDE (for example, QString, QVector, QChar, and others).
3674748: The debugger displays errors when watching expression values on Cascades objects.
BBM Social Platform

6739888: When you call ProfileBox::requestRetrieveIcon() to obtain the profile box icon that is registered with the BBM Social Platform, the icon is not retrieved.
6592081: Images that use transparency might not appear correctly in the user's list of BBM applications.
6515937: When BBM Connect (Settings > Security and Privacy > Application Permissions) is toggled from on to off, or from off to on, the registration and permission states are not updated.
6450115: When a C-only app uses the BBM Social Platform BPS.

Looking for release notes for previous betas? You can find the release notes for the previous beta releases on the following pages:
http://developer.blackberry.com/native/download/releasenotes/cascades/10_0_gold_update2/
I debug using Eclipse. For example, if I need to debug the JobTracker, I put the following lines in the hadoop script:

elif [ "$COMMAND" = "jobtracker" ] ; then
  #HADOOP_OPTS="$HADOOP_OPTS -agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=y"
  CLASS=org.apache.hadoop.mapred.JobTracker

Then I just start everything up using start-all.sh.

I should note that I always debug code built inside of Eclipse. I don't use ant. I just import Java projects from SVN and export JAR files.

ben

On May 24, 2006, at 8:49 AM, Dennis Kubes wrote:

> Has anyone been able to successfully debug DFS and MapReduce
> servers running though eclipse. I can get all the servers started
> and can run MapReduce tasks inside of eclipse but I am getting both
> classpath errors and debugging stalls.
>
> I am just curious what kinds of setups people have for doing large
> scale development of DFS and MapReduce?
>
> Dennis
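As a sketch, here is what that launcher logic looks like once the HADOOP_OPTS line is uncommented. The COMMAND/HADOOP_OPTS/CLASS names mirror the hadoop launcher script quoted above; everything else (the echo, the standalone script framing) is illustrative. With suspend=y, the JVM pauses at startup until a debugger, such as an Eclipse "Remote Java Application" configuration, attaches on port 8000:

```shell
#!/bin/sh
# Illustrative sketch of the hadoop launcher logic described above.
COMMAND="jobtracker"
HADOOP_OPTS=""

if [ "$COMMAND" = "jobtracker" ] ; then
  # Open a JDWP debug socket on port 8000 and suspend the JVM
  # until a debugger attaches (suspend=y).
  HADOOP_OPTS="$HADOOP_OPTS -agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=y"
  CLASS=org.apache.hadoop.mapred.JobTracker
fi

echo "would run: java $HADOOP_OPTS $CLASS"
```

Using suspend=n instead lets the daemon start normally and accept a debugger later, which is handier when you only want to inspect it occasionally.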
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200605.mbox/%3CF6D23946-47FE-42E7-98A0-337E1024A9B7@yahoo-inc.com%3E
:) Another question. Are you trying to export PFSs or are you exporting a base HAMMER mount? There are going to be multiple issues with PFSs. :When this happens nfs server log error (multiple times): :lookupdotdot failed 2 dvp 0xca3087d0 : :Is core dump needed? :(will decrease hw.physmem; 2G now) : : -thomas Hmm. I will try to reproduce it. What is your system config? e.g. Memory, df output, exports list, on the server. Also your df output and fstab on the client. That particular VOP doesn't get run very often due to caching. It is probably (hopefully) a simple bug but I probably need to instrument the filesystem to track it down. A kernel core would be useful if you add code to panic the system in the correct procedure so I get some context to work with. I have included a patch for that below (adding temporary debugging panics in the right places). This would be on the server side. -Matt Matthew Dillon <dillon@backplane.com> Index: hammer_vnops.c =================================================================== RCS file: /cvs/src/sys/vfs/hammer/hammer_vnops.c,v retrieving revision 1.96 diff -u -p -r1.96 hammer_vnops.c --- hammer_vnops.c 9 Aug 2008 07:04:16 -0000 1.96 +++ hammer_vnops.c 9 Sep 2008 23:50:12 -0000 @@ -965,6 +965,7 @@ hammer_vop_nlookupdotdot(struct vop_nloo dip->obj_asof); } else { *ap->a_vpp = NULL; + panic("HAMMER DOTDOT1"); return ENOENT; } } @@ -982,6 +983,8 @@ hammer_vop_nlookupdotdot(struct vop_nloo *ap->a_vpp = NULL; } hammer_done_transaction(&trans); + if (error) + panic("HAMMER DOTDOT2"); return (error); }
http://leaf.dragonflybsd.org/mailarchive/bugs/2008-09/msg00028.html
Issue 341 – August 26, 2019

Clojure Tip 💡

keep your ns declaration tidy

We’ve all done it at some point. We’re really rolling on a new project and we’re itching to import a new library. So we copy and paste from the example in the README and paste it right into our code. After a few hours, the top of our file looks like this:

(ns my-project.core
  (:use some-library)
  (:use another-library)
  (:require [clojure.string :refer [upper-case lower-case])
  (:require [clojure.java.io :as io]
            [foo.bar :refer [baz
                             ;; cux
                             ]])))

(use 'favorite-lib.core)

(require 'lib4clj.core)

(import 'java.util.Date)

It’s a mess! It’s hard to see what’s been imported, and it will slow you down. (Also: there’s a bug in there that would be way easier to spot with some tidying.) But what’s worse is the impression it makes on others. If you’re looking for a job and you send someone this code, here is what is going through their mind: “Oh, no! This person has never worked on a serious project.” At least that’s the strong impression it gives me.

If you want to make a good impression, clean that up. Cleaning it up will exercise your code movement skills. You will be moving a lot of code once you get the job. It also shows respect for the person who will be reading the code.

Here’s what the above code looks like when it’s neat and tidy:

(ns my-project.core
  (:require [some-library :refer :all]
            [another-library :refer :all]
            [clojure.string :refer [upper-case lower-case]]
            [clojure.java.io :as io]
            [foo.bar :refer [baz]]
            [favorite-lib.core :refer :all]
            [lib4clj.core])
  (:import java.util.Date))

It’s much easier to read and gives a more organized impression. However, I would go further. I converted use into require with :refer :all. This is usually not ideal if you’re doing it to a lot of namespaces. You should choose a namespace alias and use it instead.
(ns my-project.core
  (:require [some-library :as sl]
            [another-library :as al]
            [clojure.string :as str]
            [clojure.java.io :as io]
            [foo.bar :refer [baz]]
            [favorite-lib.core :as fl]
            [lib4clj.core :as lib4clj])
  (:import java.util.Date))

Woah! Clean as a whistle and easy on the eyes.

Here are some recommendations for squeaky-clean ns declarations:

1. Put all dependencies into the ns. No exceptions.
2. Don’t use :use. In general, people recommend against it. If you need similar functionality, use :require with :refer :all. But see #3.
3. Avoid :refer :all unless you have a good reason. Laziness is not a good reason. A good reason is that it is a well-known library whose functions are named to help you know where they came from. An example is clojure.test inside of a testing namespace.
4. Give your namespaces short, descriptive aliases. Abbreviation is good. Use well-known aliases for common libraries, like str for clojure.string.
5. Don’t :refer too many names. A handful is okay.
6. Use good whitespace. New lines are important. Alignment can help readability. Keep it indented correctly.

Are there any guidelines you follow? Hit reply and let me know.

Book status 📖

My book is out! You can buy the book now in early access. The first three chapters are available and you’ll get updates as new chapters come out. Grokking Simplicity teaches functional programming in a friendly way, to those who are not familiar with it or who were turned off by the overly academic resources.

You can get 50% off the normal prices with discount code MLNORMAND. It will work until August 31, so don’t delay. I don’t really control the sales and discounts, so I don’t know when the next one will be.

Also, there’s no “look inside” on my book, so I’ll share some images of pages here. Click the picture to see a bigger version.

Retraction ❌

I regret describing The Bell Curve as a “racist, pseudoscientific screed”. Last week I mentioned the book in the context of Apollo: The race to the moon.
The Bell Curve is indeed a controversial book. I was told about it in High School by a teacher who said it was racist. I never read it myself. I simply parroted what he taught me to you in this newsletter. You deserve better than repeated hearsay.

A couple of readers disagreed with my words. I looked into it and I realized my mistake. I know nothing about the book, and very little about the controversy around it. I’m not defending the book nor am I supporting it. But I should not have used such strong language about a topic I know nothing of. If the topic interests you, please form your own opinions.

I am grateful that you put your trust in me to have my newsletter in your inbox every week. I betrayed that trust by speaking out of turn. I apologize.

Conference alert 🚨

Clojure/conj CFP and Opportunity Grant applications are due soon. The conference is in November in Durham, NC.

- Testing pure functions — Pure functions are the easiest to test, so that’s where we start. We test three fairly mathematical functions (reverse, sum, and min). We look for complete coverage of the behavior by trying to find an incorrect implementation that could pass the tests. Then we test two real-world functions I snagged from some real projects. These are a little more involved since we need to create custom generators for them. All the better to learn from!
- Testing stateful systems — Once you bring in mutable state, the complexity of your system skyrockets. Now you have to deal with actions performed in arbitrary order. We look at two examples of stateful systems and how to test them. Testing mutable state is where Property-Based Testing really starts to shine. We essentially test every part of the interface in hundreds of combinations.

9 hours of video, and looking at my plan, this one might be 12 hours. But that’s just an estimate. It could be more or less. The uncertainty about it is why there’s such a discount for the Early Access Program.
Brain skill 😎

focus

We can often learn better in long periods of focus. Learning means challenging your brain. That can be uncomfortable. Your brain will rebel and seek out distractions. Often the most uncomfortable part is right at the beginning. Being able to focus means sticking with it long enough to get over the initial hurdle.

Focus also helps you keep more stuff in your head at once. Our attention is very limited. A distracting noise or idea could bump out the hour of work you’ve done to load a difficult concept into your head. Being able to focus means eliminating those distractions so you can work with the idea efficiently.

This guide from Scott Young has useful tips for increasing your ability to focus.

Clojure Challenge 🤔

Last week’s challenge

The puzzle in Issue 340 was to write a simplifier for the symbolic differentiator we did the week before. You can check out the submissions here.

This week’s challenge

permutations of a sequence

The permutations of a sequence are all of the sequences with the same elements but in different orders. Here are some examples. The permutations of [1, 2] are [1, 2] and [2, 1]. The permutations of [1, 2, 3] are [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], and [3, 2, 1]. The number of permutations grows really fast with the size of the list.

Your challenge is to write a function that generates all permutations of a list. Your function should return a lazy sequence.

Note: there already is a great, fast implementation of this function in clojure.math.combinatorics.

As usual, please send me your implementations. I’ll share them all in next week’s issue. If you send me one, but you don’t want me to share it publicly, please let me know.

Rock on!
Eric Normand
https://purelyfunctional.tv/issues/purelyfunctional-tv-newsletter-341-tip-tidy-up-your-ns-form/
5.

public class Name {
    private String first;
    private String last;

    public Name(String theFirst, String theLast) {
        first = theFirst;
        last = theLast;
    }

    public void setFirst(String theFirst) {
        first = theFirst;
    }

    public void setLast(String theLast) {
        last = theLast;
    }
}

5-2-2: What best describes the purpose of a class’s constructor?

- Determines the amount of space needed for an object and creates the object
  Feedback: The object is already created before the constructor is called, but the constructor initializes the instance variables.
- Names the new object
  Feedback: Constructors do not name the object.
- Returns to free storage all the memory used by this instance of the class
  Feedback: Constructors do not free any memory. In Java, the freeing of memory is done when the object is no longer referenced.
- Initializes the instance variables in the object
  Feedback: A constructor initializes the instance variables to their default values or, in the case of a parameterized constructor, to the values passed in to the constructor.

5.2.2.

5.2.3. AP Practice

5-2-3: Consider the definition of the Cat class below. The class uses the instance variable isSenior to indicate whether a cat is old enough to be considered a senior cat or not.

- Cat c = new Cat("Oliver", 7);
  Feedback: The age 7 is less than 10, so this cat would not be considered a senior cat.
- Cat c = new Cat("Max", "15");
  Feedback: An integer should be passed in as the second parameter, not a string.
- Cat c = new Cat("Spots", true);
  Feedback: An integer should be passed in as the second parameter, not a boolean.
- Cat c = new Cat("Whiskers", 10);
  Feedback: Correct!
- Cat c = new Cat("Bella", isSenior);
  Feedback: An integer should be passed in as the second parameter, and isSenior would be undefined outside of the class.
public class Cat {
    private String name;
    private int age;
    private boolean isSenior;

    public Cat(String n, int a) {
        name = n;
        age = a;
        if (age >= 10) {
            isSenior = true;
        } else {
            isSenior = false;
        }
    }
}

Which of the following statements will create a Cat object that represents a cat that is considered a senior cat?

5-2-4: Consider the following class definition. Each object of the class Cat will store the cat’s name as name, the cat’s age as age, and the number of kittens the cat has as kittens. Which of the following code segments, found in a class other than Cat, can be used to create a cat that is 5 years old with no kittens?

public class Cat {
    private String name;
    private int age;
    private int kittens;

    public Cat(String n, int a, int k) {
        name = n;
        age = a;
        kittens = k;
    }

    public Cat(String n, int a) {
        name = n;
        age = a;
        kittens = 0;
    }

    /* Other methods not shown */
}

I.   Cat c = new Cat("Sprinkles", 5, 0);
II.  Cat c = new Cat("Lucy", 0, 5);
III. Cat c = new Cat("Luna", 5);

- I only
  Feedback: Option III can also create a correct Cat instance.
- II only
  Feedback: Option II will create a cat that is 0 years old with 5 kittens.
- III only
  Feedback: Option I can also create a correct Cat instance.
- I and III only
  Feedback: Good job!
- I, II and III
  Feedback: Option II will create a cat that is 0 years old with 5 kittens.

- public Cat(String c, boolean h) { c = "black"; h = true; }
  Feedback: The constructor should be changing the instance variables, not the local variables.
- public Cat(String c, boolean h) { c = "black"; h = "true"; }
  Feedback: The constructor should be changing the instance variables, not the local variables.
- public Cat(String c, boolean h) { c = color; h = isHungry; }
  Feedback: The constructor should be changing the instance variables, not the local variables.
- public Cat(String c, boolean h) { color = black; isHungry = true; }
  Feedback: The constructor should be using the local variables to set the instance variables.
- public Cat(String c, boolean h) { color = c; isHungry = h; }
  Feedback: Correct!

5-2-5: Consider the following class definition.
public class Cat {
    private String color;
    private boolean isHungry;

    /* missing constructor */
}

The following statement appears in a method in a class other than Cat. It is intended to create a new Cat object c with its attributes set to "black" and true.

Cat c = new Cat("black", true);

Which of the following can be used to replace /* missing constructor */ so that the object c is correctly created?
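To make the constructor-overloading idea from these exercises concrete, here is a small, self-contained sketch. CatDemo, the getter methods, and the explicit checks are illustrative additions, not part of the exercise; the point is that the two-argument constructor delegates to the three-argument one, supplying the "no kittens" default that makes new Cat("Luna", 5) valid:

```java
// Illustrative sketch of the overloaded constructors from the exercise.
public class CatDemo {
    static class Cat {
        private String name;
        private int age;
        private int kittens;

        Cat(String n, int a, int k) { name = n; age = a; kittens = k; }

        // Two-argument constructor: kittens defaults to 0, so
        // new Cat("Luna", 5) creates a 5-year-old cat with no kittens.
        Cat(String n, int a) { this(n, a, 0); }

        int age() { return age; }
        int kittens() { return kittens; }
    }

    public static void main(String[] args) {
        Cat c = new Cat("Luna", 5);  // like option III above
        if (c.age() != 5 || c.kittens() != 0) {
            throw new AssertionError("unexpected state");
        }
        System.out.println(c.age() + " " + c.kittens());
    }
}
```

Delegating with this(n, a, 0) keeps the default in one place, so a later change to the "no kittens" rule only touches one constructor.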
https://runestone.academy/runestone/books/published/csawesome/Unit5-Writing-Classes/topic-5-2-writing-constructors.html
ArangoDB Packaging

With ArangoDB 3.0 we reworked the build process to be based completely on CMake. The packaging was partly done using CPack (Windows, Mac); for the rest, regular packaging scripts on the SuSE OBS were used. With ArangoDB 3.1 we reworked all packaging to be included in the ArangoDB source code and to use CPack. Users can now easily use that to build their own packages from the source, as we do with Jenkins. Community member Artur Janke (@servusoft) contributed the new Ubuntu snap packaging, assisted by Michael Hall (@mhall119). Big thanks for that!

Download packages for Snap Ubuntu Core 16.04

What are snaps

Snaps are a new way of packaging and deploying software on Ubuntu Core 16 and Linux in general. As one would expect from a Linux software distribution channel, snap is both simple to install and quick to update. Put simply, snaps are like Docker containers without Docker. More precisely, they share some concepts and achieve similar results, but differ in other aspects.

- Both use a system agent to administer the stack to deploy
- Both use layered filesystems, so packages can derive from each other
- Both offer automated distribution channels
- Both offer working deployments on several Linux distributions
- While Docker uses kernel namespaces to insulate processes into lightweight VMs, snap uses the Linux kernel AppArmor framework to manage access permissions on the system
- Docker uses complex virtual network interfaces and in-kernel routing to provide your app with network connectivity; snap manages access to the host system's network resources via AppArmor
- You can also use snaps in Docker containers
- You could install Docker on a host via snap

By providing ArangoDB as a snap, you get the ease of installation and updates that you would get from your distro’s own archives, while still getting the very latest versions directly from us.
And because they are fully self-contained, you don’t need to worry about them breaking other software on your system, or about other software interfering with your ArangoDB install.

Installing the ArangoDB snap

If you’re running Ubuntu 16.04 or later, you already have the ability to install snap packages. For other distros, you will need to follow the installation instructions provided for your distro. Then installing ArangoDB becomes a simple sudo snap install arangodb3. If this is the first snap you’ve installed, you will see it also downloading the ubuntu-core package; this is the common runtime that all snap packages use, and you will only need to install it once.

After installing the arangodb3 snap, the service will be running and ready for you to use! You can verify the service with systemctl status snap.arangodb3.arangod. You will also have all of the ArangoDB command-line tools available under /snap/bin/, including arangodump and arangosh, for managing your database.

The ArangoDB snap package stores your data in one of two locations. The database files themselves will be written to /var/snap/arangodb3/common/, including any Foxx services you install on your instance. Your log files and other metadata will be in /var/snap/arangodb3/current/.

Your arangodb snap will receive updates as often as we publish new, stable releases; there’s nothing you need to do to stay up to date. If at any time you want to remove this snap, just run sudo snap remove arangodb3, but be aware that this will delete your data files too, so be sure you’ve backed them up if you want to keep them!

If you’ve installed the snap on your local machine, you can now reach the ArangoDB web interface via https://<your server's ip>:8529:

You may download one of our example graph datasets and explore it with our new graph viewer:

More resources

Download the packages for Snap Ubuntu Core 16.04.
Get started with ArangoDB by learning about the basic operations in the database: go through our 10 min CRUD tutorial (Create, Read, Update, Delete) or complete one of the tutorials for drivers in your language of choice.
https://www.arangodb.com/2016/12/arangodb-snapcraft-packaging/
procmgr_guardian()

Let a daemon process take over as a parent

Synopsis:

#include <sys/procmgr.h>

pid_t procmgr_guardian( pid_t pid );

Since: BlackBerry 10.0.0

Arguments:

- pid
  The ID of the child process that you want to become the guardian of the calling process's other children.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The function procmgr_guardian() allows a daemon process to declare a child process to take over as parent to its children in the event of its death.

Returns:

-1 on error; any other value on success.

Examples:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <spawn.h>
#include <sys/procmgr.h>
#include <sys/wait.h>

pid_t child = -1;
pid_t guardian = -1;

/*
 * Build a list of the currently running children
 */
void check_children(void)
{
    if (child > 0) {
        if (kill(child, 0) == -1) {
            child = -1;
        }
    }
    if (guardian > 0) {
        if (kill(guardian, 0) == -1) {
            guardian = -1;
        }
    }
}

void start_needed_children(void)
{
    if (guardian == -1) {
        /* Make a child that will just sit around and wait for parent to die */
        while ((guardian = fork()) == 0) {
            pid_t parent = getppid();

            /* Wait for parent to die....
*/ fprintf(stderr, "guardian %d waiting on parent %d\n", getpid(), parent); while(waitpid(parent, 0, 0) != parent); /* Then loop around and take over */ } if(guardian == -1) { fprintf(stderr, "Unable to start guardian\n"); } else { /* Declare the child a guardian */ procmgr_guardian(guardian); } } if(child == -1) { static char *args[] = { "sleep", "1000000", 0 }; if((child = spawnp("sleep", 0, 0, 0, args, 0)) == -1) { fprintf(stderr, "Couldn't start child\n"); child = 0; /* don't try again */ } } } int main(int argc, char *argv[]) { fprintf(stderr, "parent %d checking children\n", getpid()); do { fprintf(stderr, "checking children\n"); /* Learn about the newly adopted children */ check_children(); /* If anyone is missing, start them */ start_needed_children(); } while(wait(0)); /* Then wait for someone to die... */ return EXIT_SUCCESS; } Classification: Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/p/procmgr_guardian.html
Bugs item #2117590, was opened at 2008-09-18 12:41 Message generated for change (Comment added) made by rumen You can respond by visiting: Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roumen Petrov (rumen) Assigned to: Nobody/Anonymous (nobody) Summary: correction of function asinh - inverse hyperbolic sine Initial Comment: The current implementation of mingwex function with declaration "double asinh(double x)" return unexpected value for some arguments. Please to replace implementation with new one that pass at least following tests(argument->result): 1) 0.0->0.0 2) -0.0->-0.0 3) -1.0000000000000002e+299->-689.16608998577965 The current code return -0.0 for argument 0.0. Uncommenting code for "/* Avoid setting FPU underflow exception flag in x * x. */" isn't enough since the case 3) will return NaN. ---------------------------------------------------------------------- >Comment By: Roumen Petrov (rumen) Date: 2008-09-27 00:26 About big numbers - ok for 1/sqrt(x) as limit in asinhl. About implementation of similar limit in asinh/asinhf - I don't know gnu assembler (I forgot all other assemblers) and all __fast_log functions looks identical :) . No idea for small numbers. ----- [ 2009559 ] @CFLAGS@ not substituted in Makefiles" -> ----- My repository(<repo>) is from CVSROOT=:pserver:anoncvs@...:/cvs/src . In a separate directory(<SRCCOPY>) I keep a copy of subdirectories mingw and win32 from <repo>/src/winsup/. I run configure script and build only in <SRCCOPY>/mingw/mingwex/ . 
----- flag -nostdinc and as example xxx_EPSILON: $ cd /empty_directory 1) $ printf "%s\n%s\n" "#include <float.h>" "long double a = LDBL_EPSILON" | i386-mingw32...-gcc -E -nostdinc - # 1 "<stdin>" # 1 "<built-in>" # 1 "<command line>" # 1 "<stdin>" <stdin>:1:19: no include path in which to search for float.h long double a = LDBL_EPSILON 2) $ printf .... | i386-mingw32...-/gcc -E -nostdinc -I/path_to_mingwrt/include - In file included from <stdin>:1: /path_to_mingwrt/include/float.h:19:23: no include path in which to search for float.h .... # 2 "<stdin>" 2 long double a = LDBL_EPSILON 3) $ printf .... | i386-mingw32...-/gcc -E - # 1 "<stdin>" # 1 "<built-in>" # 1 "<command line>" # 1 "<stdin>" # 1 "..../lib/gcc/i386-mingw32.../3.4.5/include/float.h" 1 3 4 # 2 "<stdin>" 2 long double a = 1.08420217248550443401e-19L As I understand -nostdinc flag is problem(not so important) in my build environment. Since #include <float.h> will work in you and Chris environments - then it is fine code to use generic standard defines. ---------------------------------------------------------------------- Comment By: Keith Marshall (keithmarshall) Date: 2008-09-26 13:07 Roumen, >> Also for long double I would like to propose an additional >> modification that deal with big long double. > > Ok, thanks. That's basically a useful refinement of the technique > I proposed; I've no issue with adopting it... However, I am curious as to why you propose if (z > 0.5/__LDBL_EPSILON__) as the cut-off point, for switching from the analytically exact computation to the rounded approximation? Surely a more analytically appropriate expression is if( z > 1.0L / sqrtl( LDBL_EPSILON ) ) (and BTW, we should `#include <float.h>', and use the generic standard LDBL_EPSILON, rather than rely on a compiler specific definition such as __LDBL_EPSILON__). 
Granted, with this expression, the compiler is unlikely to recognise that the RHS is a constant, so will not be able to optimise away the sqrtl() call -- we'd need to help it out, by providing a suitable manifest constant definition, if we want that optimisation. Is this the basis of your proposal? You know `0.5 / EPSILON' > `1 / sqrt(EPSILON)', yet should still be safe to square, and you are prepared to ignore the insignificance of unity in the intermediate interval, to have a generically reproducible and recognisably constant term on the RHS of the comparison? I also wonder if there might be merit in adding if( z < sqrtl( LDBL_EPSILON ) ) return copysignl( __fast_log1pl( z ), x ); or, (if we accept the `0.5 / EPSILON' paradigm for the overflow case) if( z < (2.0L * LDBL_EPSILON) ) return copysignl( __fast_log1pl( z ), x ); to trap any potential underflow issues. ---------------------------------------------------------------------- Comment By: Keith Marshall (keithmarshall) Date: 2008-09-23 16:26 > Compiler is build from package > gcc-core-3.4.5-20060117-1-src.tar.gz As is mine, using the x86-mingw32-build.sh script package, from our SF download site, with the installation directory set as $HOME/mingw32, and the cross-compiler identifying name set simply as `mingw32'. > To compile minwgex I modify configure.in Why do you do this? As a user, you should have no need to *ever* modify configure.in. > to substitute CFLAGS (also i see this reported in the tracker). Which tracker would that be, then? > Also I remove flag -nostdinc from Makefile.in, > otherwise build fail - missing include header stddef.h. You *definitely* should not need to do this. It may not be wrong to do so, but if you are referring to the tracker item I think you are, then I do not think that the solution proposed there is necessarily the correct way to address the problem. > Dunno why after modification my library return NaN for big double. Nor do I; I certainly cannot reproduce this behaviour. 
FWIW, here's how I build, from CVS, with the following sandbox structure:-- $HOME/mingw-sandbox | +-- build | | | +-- runtime | +-- runtime (the CVS sources are checked out to $HOME/mingw-sandbox/runtime) $ cd $HOME/mingw-sandbox/build/runtime $ ../../runtime/configure --prefix=$HOME/mingw32 \ --build=i686-pc-linux --host=mingw32 $ make CFLAGS='-s -O2 -mtune=i686 -mms-bitfields' $ make install With this, I keep my cross-compiler up to date with the current state of the MinGW CVS. Note that I do not use this build tree for creating distributable packages -- I leave that to Chris. The Makefiles are broken in this regard; they require entirely inappropriate misuse of configure's --target spec, to achieve a correctly structured MinGW package directory hierarchy. > Also for long double I would like to propose an additional > modification that deal with big long double. Ok, thanks. That's basically a useful refinement of the technique I proposed; I've no issue with adopting it, but I think the explanatory comments could be improved, to make them more acceptable in a mathematically correct sense. You may leave it with me, to come up with suitable wording. I also think it may be advantageous to handle the potential overflow similarly, within asinhf() and asinh(), rather than relying on the obfuscated method of the present implementation. ---------------------------------------------------------------------- Comment By: Roumen Petrov (rumen) Date: 2008-09-20 17:07 Also I attach my new tests case build with (part from Makefile): test-asinhex-my.exe: test-asinh.c $(MINGWCC) -posix -o $@ $^ -lmingwex-my The output is (mysterious nan for asinh is still here, but output for asinhl,asinhf look good): === sizeof(double)=8 asinh(1.0000000000000002e+299)=nan ..... z=688.47294280521965 asinh(-1.0000000000000002e+299)=nan ..... z=-688.47294280521965 asinh(1.7976931348623157e+308)=nan ..... z=709.78271289338397 asinh(-1.7976931348623157e+308)=nan ..... 
z=-709.78271289338397 asinh(3.4028234663852886e+038)=89.415986232628299 ..... z=89.415986232628299 asinh(-3.4028234663852886e+038)=-89.415986232628299 ..... z=-89.415986232628299 asinh(0)=0 ..... z=0 asinh(-0)=-0 ..... z=-0 === sizeof(long double)=12 asinhl(1.189731495357231765021264e+4932)=11357.21655347470389507691 asinhl(-1.189731495357231765021264e+4932)=-11357.21655347470389507691 asinhl(1.797693134862315708145274e+308)=710.4758600739439420301835 asinhl(-1.797693134862315708145274e+308)=-710.4758600739439420301835 asinhl(3.402823466385288598117042e+038)=89.41598623262829834135168 asinhl(-3.402823466385288598117042e+038)=-89.41598623262829834135168 asinhl(0)=0 asinhl(-0)=-0 === sizeof(float)=4 asinhf(3.40282347e+038)=89.4159851 asinhf(-3.40282347e+038)=-89.4159851 asinhf(0)=0 asinhf(-0)=-0 File Added: test-asinh.c ---------------------------------------------------------------------- Comment By: Roumen Petrov (rumen) Date: 2008-09-20 16:57 May I ask you to apply similar modification to math/asinh{l|f}.c with call to copysign{l|f}. I'm tired. I couldn't find why my self-build library return NaN for big numbers. If I add code from asinh (with call to copysign) into test program along with inline function from fastmath.h - it work as expected, i.e. without NaN for big numbers. Compiler is build from package gcc-core-3.4.5-20060117-1-src.tar.gz To compile minwgex I modify configure.in to substitute CFLAGS (also i see this reported in the tracker). Also I remove flag -nostdinc from Makefile.in, otherwise build fail - missing include header stddef.h. Dunno why after modification my library return NaN for big double. Also for long double I would like to propose an additional modification that deal with big long double. 
File Added: mingwex-asinhl.diff

----------------------------------------------------------------------

Comment By: Keith Marshall (keithmarshall)
Date: 2008-09-19 20:20

I applied this:

    Index: mingwex/math/asinh.c
    ===================================================================
    RCS file: /cvs/src/src/winsup/mingw/mingwex/math/asinh.c,v
    retrieving revision 1.1
    diff -u -r1.1 asinh.c
    --- mingwex/math/asinh.c	6 Oct 2004 20:31:32 -0000	1.1
    +++ mingwex/math/asinh.c	19 Sep 2008 17:14:04 -0000
    @@ -23,6 +23,6 @@
       z = __fast_log1p (z + z * z / (__fast_sqrt (z * z + 1.0) + 1.0));
    -  return ( x > 0.0 ? z : -z);
    +  return copysign (z, x);
     }

It fixes the signed zero issue; I STILL cannot reproduce your other fault, (this time just running under wine).

----------------------------------------------------------------------

Comment By: Roumen Petrov (rumen)
Date: 2008-09-19 18:25

May NaN is because my mingwex library is modified to use copysign instead xx> 0.0 ... Yes in the original there is no problem for the big numbers - only sign of zero.

----------------------------------------------------------------------

Comment By: Keith Marshall (keithmarshall)
Date: 2008-09-19 16:21

FTR, I've now taken your test case to my own Ubuntu 8.04 box, compiled with mingw32-gcc (GCC-3.4.5, mingwrt-3.15), and run it under wine-1.0. It behaves correctly; I *cannot* reproduce the fault you report in (3).

----------------------------------------------------------------------

Comment By: Keith Marshall (keithmarshall)
Date: 2008-09-19 14:10

Case (1) clearly isn't an issue, for the result is correct.

Case (2) is purely an issue of discriminating between positive and negative zero[1]. I agree with your assessment, that `return (z > 0.0 ? z : -z);' is incorrect, for it must *always* return a zero as negative zero. Similarly, `return (z >= 0.0 ? z : -z);' is incorrect, for it must always return zero as *positive* zero. The correct solution is to either check for `if( x == 0.0 ) return x;' on entry, or to always `return copysign( z, x );'

Case (3) is *not* a MinGW bug, (or at least I cannot reproduce it[2]). I am confused by your references to native vs. emulated environment; do you mean, perhaps, that you get incorrect results running on GNU/Linux under Wine? If so, then surely this problem lies in Wine, and you should follow up on their ML. Also note that uncommenting the code to avoid setting the FPU underflow exception flag would not be expected to have any effect in case (3), for it affects only very *small* values of `x', and case (3) has a fairly *large* (negative) value of `x'.

[1] Using the MSVCRT implementation of printf(), I appear to see no distinction between positive and negative zero anyway; it only becomes apparent when using the mingwex printf() functions.

[2] Although the current mingwex implementation appears to produce correct results for me, in each of the test cases given, it does use what seems a rather suspect method of guarding against overflow, in its internal computation of `x * x', (or `z * z' as it is actually coded); I believe that it may be worthwhile replacing this with a more mathematically robust formulation:

    long double my_asinhl( long double x )
    {
      if( isfinite( x ) && (x != 0.0L) )
      {
        long double z = fabsl( x ), zz = z * z;
        if( isfinite( zz = z * z ) )
          /*
           * `z' has been squared without overflow...
           * Compute the inverse sinh, using the analytically correct formula,
           * (using log1p() for greater precision at very small `z', than
           * can be achieved using log()).
           */
          return copysign( log1pl( z + sqrtl( zz + 1.0L ) - 1.0L ), x );

        /* Computation of `z squared' results in overflow...
         * Here we may employ the approximation that, for very large `z',
         * `sqrt( z * z + 1 ) ~= z', yielding the approximate result
         */
        return copysign( logl( z ) + log1pl( 1.0L ), x );
      }
      return x;
    }

    double my_asinh( double x )
    { return (double)(my_asinhl( (long double)(x) )); }

----------------------------------------------------------------------

Comment By: Roumen Petrov (rumen)
Date: 2008-09-18 16:43

more tests: If we change current code from "return ( x > 0.0 ? z : -z);" to "return ( x >= 0.0 ? z : -z);" the function result is not changed. If we replace return statement with "return copysign(z,x)" the result for {+/-}0.0 argument is as expected in native and emulated environment. But if we try to use function _copysign the result is ok in native environment but fail in emulated. Also with similar changes for functions asinhf/asinhl with respective copysign, i.e. copysignf/copysignl, I see expected result. Did someone know how to resolve problem with big exponent ?

----------------------------------------------------------------------

Comment By: Roumen Petrov (rumen)
Date: 2008-09-18 15:13

more test vectors:

    [-]1.0000000000000002e+99 ->[-]228.64907138697046 //ok
    [-]1.0000000000000001e+199->[-]458.90758068637501 //but mingwex-nan, native - ok
    [-]1.0000000000000002e+299->[-]689.16608998577965 //but mingwex-nan, native - ok
    [-]1.6025136110019349e+308->[-]710.3609292261076  //but mingwex-nan, native - ok

----------------------------------------------------------------------

Comment By: Roumen Petrov (rumen)
Date: 2008-09-18 13:53

also asinhl/asinhf return -0.0 too for 0.0 argument.

----------------------------------------------------------------------

You can respond by visiting:

View entire thread
http://sourceforge.net/p/mingw/mailman/message/20417419/
Taking Time Off

Sometimes students take a leave of absence from Swarthmore. Deadlines:

- Leave Request Deadlines: November 15 for a Spring semester leave of absence, April 1 for a Fall semester leave of absence. If you are considering a leave of absence after these deadlines, please reach out to Dean Derickson or another dean right away for advice. See the Leaving webpage for important details.
- Return Request Deadlines: November 15 for a Spring semester return, July 1 for a Fall semester return (April 1 recommended, if possible). See the Returning webpage for important details.

Communication is key. Be sure to regularly read and respond to your Swarthmore email account, as that is our official way of communicating with you. For more details, please see the website sections Leaving or Returning. Please also consult the Student Handbook, and the College Catalog sections on expenses & faculty regulations 8.5. Leaves from the College may occur for academic, disciplinary, health, or personal reasons and may be voluntary or required by the College. Questions? Concerns? Please reach out to your assigned dean and/or Dean Liz Derickson.
https://www.swarthmore.edu/academic-advising-support/taking-time
wcscoll - wide-character string comparison using collating information

Synopsis

    #include <wchar.h>

    int wcscoll(const wchar_t *ws1, const wchar_t *ws2);

Description

The wcscoll() function compares the wide-character string pointed to by ws1 to the wide-character string pointed to by ws2, both interpreted as appropriate to the LC_COLLATE category of the current locale.

The wcscoll() function will not change the setting of errno if successful. An application wishing to check for error situations should set errno to 0 before calling wcscoll(). If errno is non-zero on return, an error has occurred.

Return Value

Upon successful completion, wcscoll() returns an integer greater than, equal to or less than 0, according to whether the wide-character string pointed to by ws1 is greater than, equal to or less than the wide-character string pointed to by ws2, when both are interpreted as appropriate to the current locale. On error, wcscoll() may set errno, but no return value is reserved to indicate an error.

Errors

The wcscoll() function may fail if:

- [EINVAL] - The ws1 or ws2 arguments contain wide-character codes outside the domain of the collating sequence.

Examples

None.

Application Usage

The wcsxfrm() and wcscmp() functions should be used for sorting large lists.

Future Directions

None.

See Also

wcscmp(), wcsxfrm(), <wchar.h>.

Change History

Derived from the MSE working draft.
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/wcscoll.html
Latest revision as of 12:32, 25 September 2014

iKart overview

iKart is a holonomic mobile platform designed to provide the iCub with autonomous navigation capabilities in structured environments. The main features of the platform are:

- Easy plug-and-play connection with the iCub.
- Omnidirectional movement: six omnidirectional wheels allow the platform to both translate and rotate on the spot.
- On board computing capabilities: iKart is equipped with an Intel i7 quad core CPU. The machine also acts as the server repository for the iCub software.
- Power supply: the platform is equipped with a lithium ion polymer battery (48V, 20Ah). The estimated autonomy is about two hours of normal operation of the iKart+iCub system and four hours if only the iKart is used.
- Wireless connectivity: a 300Mbit/s wireless bridge provides wireless connectivity to both iKart and iCub.
- Obstacle detection and localization: a laser rangefinder is mounted in the front of the platform. It can detect obstacles at thirty meters of distance, with a 270º angle of view.

iKart hardware description

In this section the main iKart hardware components are described. The picture below shows the location of the various components:

Main panel

The iKart main panel is shown in the picture below. From left to right:

- The battery switch. It is a three-position switch:
- left position: In this position the iKart (and iCub) is powered by the battery. The blue battery led will illuminate in this position. If the external power supply cable is also connected, the iKart will use the external power supply and the battery will not discharge. However, the battery cannot be recharged in this modality, even if the battery charger is connected.
- middle position: In this position the iKart is powered by the external power supply. The battery is completely disconnected.
- right position: In this position you can recharge the battery by plugging the battery recharge cable into the appropriate connector. The red battery recharge led will illuminate in this position. Remember that in this mode the iKart must still be powered through the external power supply, since the battery recharge does not directly supply power to the iKart but just to the battery.
- The picture below illustrates the three power modes described above:
- The PC104 switch: This switch turns on/off the iCub PC104 board.
- The battery recharge connector: the battery charger cable must be plugged into this connector.
- The led indicators: Top row, from left to right: battery in use (blue), battery in charge (red), external power supply (green), motors on (yellow). Bottom row, from left to right: iKart on (green), iCub PC104 on (red).
- The power button: press the button to turn on the iKart motherboard (like a normal PC).
- The external power supply connector: the external power supply cable must be plugged into this connector.
- The external fault connector: the external iKart and iCub fault buttons are connected through a 2-meter-long cable to this connector. It's also possible to plug in a jumper, in order to enable the motors without connecting the external fault buttons.
- The motor switch: This switch turns on/off all the motors of iKart and iCub.

Remember: never plug/unplug the power cable connecting the iCub to the iKart if the power supply switches of the PC104/motors are on. Failing to do this will seriously damage the electronics!

Internal panel

The internal panel, accessible after removing the fiberglass back cover, contains several connectors, which should not be accessed during normal operation and are mainly used for diagnostic purposes and debug. From left to right:

- CAN connector: using this connector it is possible to directly access the iKart CAN bus.
- PS/2 connector: connector for an external keyboard.
- VGA video connector: the output of the internal video board.
- reset button: the iKart reset button.
- Two USB connectors: general purpose USB connectors.

Battery

iKart is equipped with a 48V, 20Ah lithium ion polymer battery (model ePLB P4820 from EiG). The battery is located on the back of the iKart, and can be extracted/replaced simply by disconnecting the power cable and lifting the wireless bridge placed on its top.

On the back of the battery is a button which controls the internal battery management circuitry. This button should never be pressed during normal operation. However, if a fault condition occurs (e.g. overheating, short circuit, etc.), the battery will enter protection mode and will stop supplying current (the voltage will also drop to about 20V). In this case, the battery can be restored to normal operation by following this procedure: resume_iKart_battery_from_fault.

The battery autonomy is estimated to be about two hours during normal operation (iKart+iCub) and four hours if only the iKart is used (iCub power supply cable disconnected). The battery can be recharged using its own charger, with the recharge cable plugged into the connector located on the iKart main panel. During battery recharging, the iKart can continue to operate normally if connected to the external power supply.

Battery Control Board (BCS)

It is an electronic board, located on the right side of the iKart, which monitors the status of charge of the battery. The board broadcasts information about the battery voltage, the current consumption and the estimated charge of the battery through a serial cable connected to the motherboard. If the battery charge is low, the user will first be warned; then, if the charge reaches a critical level, a shutdown procedure will be initiated in order to prevent data loss.

Wireless

iKart is equipped with a DAP-1522 wireless bridge (technical specifications can be found on the manufacturer's website), located just on top of the iKart battery. On the back of the wireless bridge, two LAN ports are used: one port is connected to the iKart motherboard, the other one is connected to the iCub PC104 (through the power cable which connects the iKart with the iCub). The wireless bridge is powered by the iKart 48V-to-5V DC/DC converter.

Hard Disk

The iKart hard disk is located under the front plate, on the left side.

Fault control board

The fault control board is plugged on the motherboard parallel connector. [TO BE COMPLETED]

Motor control boards

iKart uses two BLL motor control boards, identical to those used in the iCub to control the brushless motors. The two boards are connected on the same CAN bus and have address 1 and 2 respectively. The board with address 1, located on the left under the frontal plate, controls the motor on the back of the iKart. The other board, placed on the right side, controls the two frontal motors of the iKart. The boards are directly powered by the 48V power line.

Wheel suspensions

The three idle wheels of the iKart platform are equipped with a suspension mechanism. The preload of the suspensions can be individually tuned through two nuts located on the top of the mechanism, inside the iKart. A screw, also located on the top of the mechanism, allows setting the limit of the suspension. During the iKart assembly, the preload is tuned to take into account the weight of the battery and of the whole iCub placed on top of the platform.

CAN to USB converter

A CAN-USB2 converter is used to communicate with the motor control boards. The interface is located under the front plate, on the right side. Technical information about the CAN-USB2 converter can be found on the manufacturer website.

Internal power supply

The iKart is equipped with three internal power supply modules:

- An ATX power supply (input: 48V) which provides power to the iKart motherboard.
- A 48V-to-12V DC/DC converter, which provides power to the laser rangefinder.
- A 48V-to-5V DC/DC converter, which provides power to the wireless switch.

Laser scanner

A UTM-30LX laser rangefinder (Hokuyo Ltd) is mounted in front of the iKart. It is powered by the iKart 48V-to-12V DC/DC converter, while the data are transmitted through a separate USB cable. The rangefinder is able to detect obstacles up to 30m away, with a scan angle of 270 degrees. The maximum refresh rate is 40Hz.

Joystick receiver

The joystick receiver is a standard Xbox 360 wireless receiver. It is placed on the top of the battery and is connected to an internal USB connector. The wireless range is about 10 meters.

External power supply (read carefully!)

iKart uses the same external power supply as the iCub, although with different settings. The external power supply must in fact be configured to provide a voltage of 52.5V with a current limit of 18A. Using these settings is extremely important:

- Since the iKart battery has a nominal voltage of 52V, the external power supply voltage must always be higher than the battery voltage, in order to prevent an undesired battery discharge when the iKart power supply switch is set to battery (blue led on).
- A current limit of 18A, instead, is required in order to prevent accidental brownouts due to the high inrush current when the iKart motor switch is turned on. Since an unexpected reset during a write operation on the disk may compromise the integrity of the saved data, it is extremely important to increase the current limit of the power supply to 18A.
- Remember: never plug/unplug the power cable connecting the iCub to the iKart if the power supply switches of the PC104/motors are on. Failing to do this will seriously damage the electronics!

How to connect and reuse the iCub power supply for the iKart?

- Let's begin with the starting point: what is the configuration of the power supply?
- You should have 3 connections between the 2 power supplies (the slimmest one delivering 12V and the biggest one delivering a voltage between 24V min and 40V max):
- The main power cord connected to your iCub
- one slim black wire as a ground reference between the 2 power supplies
- one cable composed of 2 wires (1 black and 1 red, or yellow in the picture below) labelled SH, which enables the biggest power supply (see the picture below to know where it is connected and what the normal configuration of the red selector is)
- First, remove all the wiring from the back panel of the biggest Xantrex power supply: that is to say, first remove the 3 screws on the sides of the wire holder box (see the picture below) and then remove the connections on the minus and plus L-shaped supports:
- The minus support holds 2 big black wires ended by a ring connector and one little black wire which is connected to the 12V power supply so as to create a GND reference.
- The plus support holds 2 big wires with RED tape, ended with a ring.
- Attach the black wire and the red wire from the cable labelled CBL35, found inside the iKart kit box, respectively to the minus and the plus L-shaped supports, as shown in the picture below. !!!! Be careful !!!! You will need to solder a terminal on both wires. We used the ring terminal for AWG6 wires, part number 130552 from TE Connectivity, orderable from RS as code number 373-370. See the picture below.
- Last but not least, copy the settings for the switches and jumpers from the picture below; these settings make the power supply independent from the other one.

iKart Software

In this section the core modules required to run the iKart are described. A diagram describing how these modules are interconnected is presented below (click to enlarge).

iCub Interface

iKart uses the same iCubInterface application used by the iCub to communicate with the motor control boards. The system configuration is specified in the files iKart.ini and ikart_wheels.ini, which are located in the $IKART_ROOT\app\iKart\conf folder and installed by the make install command in the folder $ICUB_ROOT\app. The robot configuration is extremely simple, since a single CAN bus line is used. This results in a single Yarp device driver which is used to control the robot analogously to the way in which the iCub is controlled. The only difference between the two is that instead of having multiple robot parts such as head, left_arm, right_leg, etc., iKart has only one robot part, called wheels.

iKart Control

The iKartCtrl module is the main control module that acts as the interface between the user commands and the robot. The application consists of several threads which run concurrently, each of them responsible for a particular robot interface.

motorControlThread

This thread receives the user commands and transforms them into speed reference signals which are sent to the individual motors. The input commands can be sent to the controller through three different input ports:

- /ikart/joystick:i This is the port to which the output of the joystickCtrl module should be connected.
- /ikart/control:i On this port a user module (for example, navigation software) can send commands to the iKart.
- /ikart/aux_control:i An additional port providing the same functionality as /ikart/control:i.

While all three ports provide the same functionality, they have different (decreasing) priority. An input on the joystick:i port overrides an input on the control:i port, which in turn overrides an input on the aux_control:i port. In this way, for example, the user can always take control of the iKart through the joystick, even if another module is sending wrong commands to the iKart. The commands sent to the iKart through these ports can be expressed in two formats:

- percentage with respect to the maximum speed. This is the format used by default by the joystickCtrl module.
The bottle consists of five values, with the following meaning:
- protocol identifier (int): an int value fixed to 1.
- heading (double): the commanded linear direction of the iKart, expressed in degrees. The heading must be expressed according to the iKart reference frame, represented in the picture at the end of this section.
- linear speed (double): the commanded linear speed, expressed as a percentage of the robot maximum linear speed (0-100.0%).
- angular speed (double): the commanded angular speed, expressed as a percentage of the robot maximum angular speed (0-100.0%).
- motor scaling factor (double): a scaling factor expressed in percentage (0-100.0%) that multiplies both the linear and the angular speed.
- metric format. This is the preferred format for user modules. The bottle consists of four values, with the following meaning:
- protocol identifier (int): an int value fixed to 2.
- heading (double): the commanded linear direction of the iKart, expressed in degrees. The heading must be expressed according to the iKart reference frame, represented in the picture at the end of this section.
- linear speed (double): the commanded linear speed, expressed in m/s.
- angular speed (double): the commanded angular speed, expressed in deg/s.
odometryThread
It computes the iKart odometry, using the information taken from the motor encoders. It provides the two following yarp ports:
- /ikart/odometry:o. This port broadcasts the robot odometry. The bottle consists of six values (double), with the following meaning:
- x position [m]
- y position [m]
- orientation [deg]
- x velocity [m/s]
- y velocity [m/s]
- angular velocity [deg/s]
- Please note that wheel slippage and model inaccuracies may affect the odometry accuracy, resulting in a cumulative error. This is not an issue of the iKart platform, but an intrinsic problem of odometry computation in all mobile robots.
For this reason, odometry information must be integrated with a localization mechanism (based, for example, on the laser data). Please refer to the SLAM section for information about performing absolute localization with the iKart. For the orientation of the x and y axes, please refer to the iKart reference frame picture at the end of this section.
- /ikart/odometer:o. Information about the cumulative distance traveled by the robot is broadcast by this port. The bottle consists of two values (double), with the following meaning:
- traveled distance [m]
- traveled angle [deg]
- Both the odometry and the odometer information can be zeroed by sending the command reset_odometry to the /ikart/rpc port.
laserScannerThread
It retrieves the laser rangefinder measurements through its yarp interfaces, and broadcasts the data on the output port.
- /ikart/laser:o. Contains the laser scan measurements. The data consists of an array of 1080 doubles, corresponding to the measurements (expressed in mm) obtained from the laser during a counterclockwise scan (each measurement corresponds to an angle of 270/1080 = 0.25 degrees).
iKartCtrl RPC commands
The iKartCtrl module also provides a port /ikart/rpc to which the user can send rpc commands. The list of the available rpc commands is reported below:
- help displays the list of the available commands
- run turns on the three motors (use it to run the iKart again after pressing the fault button)
- idle turns off the three motors.
- reset_odometry sets to zero both the odometry and the odometer data.
The iKart Reference frame
Both the cartesian joystick/user commands and the iKart odometry are expressed in the iKart reference frame, which is oriented according to the following convention:
- the y axis is directed forward.
- the x axis is directed to the right.
- the heading angle is positive clockwise and negative counterclockwise.
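As an illustration of this convention, the following minimal Python sketch (not part of the iKart software; the helper name is hypothetical) converts a desired planar velocity expressed in the iKart frame into the heading/speed pair expected by the metric command format described above:

```python
import math

def metric_command(vx_right, vy_forward, wz_deg):
    """Build the 4-value metric command bottle described above.

    vx_right   : desired speed along the x axis (to the right) [m/s]
    vy_forward : desired speed along the y axis (forward) [m/s]
    wz_deg     : desired angular speed [deg/s]
    """
    # Heading is 0 deg when moving straight forward (+y) and grows
    # clockwise (toward +x, i.e. to the right), per the convention above.
    heading = math.degrees(math.atan2(vx_right, vy_forward))
    linear_speed = math.hypot(vx_right, vy_forward)
    # Protocol identifier 2 selects the metric format.
    return [2, heading, linear_speed, wz_deg]

print(metric_command(0.0, 0.5, 0.0))   # straight forward: heading 0 deg
print(metric_command(0.5, 0.0, 0.0))   # to the right: heading +90 deg
print(metric_command(-0.5, 0.0, 0.0))  # to the left: heading -90 deg
```

The resulting list could then be written to /ikart/control:i as a yarp bottle.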
Joystick Control
The joystickCtrl module reads the input from a joystick and sends it on a yarp port (by default: /joystickCtrl:o). The module takes as input a .ini file in which the user can specify the configuration of the joystick, including axis remapping, different scaling factors, deadbands, etc. A complete list of the available configuration options is reported in the module documentation. The default joystick configuration to control the iKart is specified in the file $ICUB_ROOT/app/joystickCtrl/iKart.ini. By default the .ini file assigns to buttons 0 (A) and 1 (B) the execution of the scripts ikart_motors_run.sh and ikart_motors_idle.sh. The other buttons can be assigned by the user to execute any custom .sh script.
Laser GUI
The laserScannerGUI provides a simple graphical interface to visualize the measurements performed by the frontal laser rangefinder. The module receives the measurements from the iKartCtrl module through the /ikart/laser:o yarp port. A list of all the available options (e.g. zoom, refresh rate, etc.) is displayed by pressing the 'h' key.
Battery Manager
The iKartBatteryManager module is responsible for monitoring the state of charge (SoC) of the battery. The module is automatically started by the script /etc/rc.local during the boot sequence and periodically reads the SoC of the battery from the /dev/ttyUSB0 serial port. The default update rate is 10 seconds. In order to protect the robot's sensitive components from dangerous brownouts, the module takes the following actions when the battery is low:
- The user is notified with a wall message if the battery level drops below 10%.
- If the battery reaches a critical level (below 5%), the module starts an emergency shutdown procedure, stopping the iCub and iKart motor interfaces and turning off the machine, with a two-minute advance notice.
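The two thresholds above can be summarized in a small Python sketch (an illustration only, not the actual iKartBatteryManager code; the function name is hypothetical):

```python
def battery_action(soc_percent):
    """Return the action taken for a given state of charge, following
    the thresholds documented above (10% warning, 5% critical)."""
    if soc_percent < 5:
        # Critical level: emergency shutdown of the motor interfaces
        # and of the machine (with a two-minute advance notice).
        return "shutdown"
    if soc_percent < 10:
        # Low level: notify the user with a wall message.
        return "warn"
    return "ok"

print(battery_action(50))  # ok
print(battery_action(8))   # warn
print(battery_action(3))   # shutdown
```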
The module executes whether or not the yarp server is running. If it is, information about the SoC of the battery is sent through the /ikart/battery:o port. You can also ask the module to save a timestamped logfile of the performed battery measurements by specifying the option --logToFile in the startup script (default: off).
Battery Display
The iKartBatteryDisplay is a graphical tool which displays the current SoC of the battery. The module receives the battery info from the iKartBatteryManager module through the /ikart/battery:o yarp port. Since the iKart obviously doesn't have a graphical output, this module does not run on the iKart machine. Instead, the iKartBatteryDisplay is intended to be executed by the user on the machines remotely connected to the iKart. The provided always-on-top window reminds the user of the remaining autonomy of the robot, so it's a good idea to keep an iKartBatteryDisplay instance always running on each user machine connected to the iKart.
Starting the iKart with the joystick
There are cases in which you may want to start moving the iKart just after the boot, without connecting to the robot via ssh, starting all the applications, etc. This joystick start-up is particularly useful if, for example, you turned on the iKart in a room where there is no wireless connection, and you want to move it to another room. In order to command the iKart to perform a joystick start-up, follow this procedure:
- Turn on the iKart (with the motor switch on and the fault button not pressed).
- Turn on your joystick by pressing the central joystick button.
- Wait for the beep coming from the iKart, indicating that the boot is finished.
- At this point you have 5 seconds to initiate the joystick start-up procedure by pressing any button on the joystick (or moving the sticks).
If joystick activity has been detected, the iKart will automatically initialize the following processes (and will automatically make all the required connections):
- The Yarp server.
- The iCubInterface required to control the iKart motors.
- The iKartControl module.
- The joystickControl module.
You will now be able to move the iKart around using the joystick as in normal operation.
How the joystick-startup procedure works
The script responsible for the joystick-startup procedure is ikart_start.sh, which is originally located under $IKART_ROOT/app/iKart/ikart_boot and copied to $ICUB_BIN during the installation procedure. The script executes the joystickCheck module to verify whether joystick activity is detected and, if so, initializes the yarp server and the control software. The ikart_start.sh script is automatically executed from the main boot script /etc/rc.local after the low-level driver initialization.
Stopping/Restarting the joystick-startup
An analogous script, ikart_stop.sh, is provided to stop all the modules launched by the ikart_start.sh script when controlling the iKart is no longer required. Currently, there is no way to execute this script using the joystick, so it has to be manually invoked through an ssh connection to the iKart. Please also note that the yarp server started by the ikart_start.sh script will not be stopped by the ikart_stop.sh script. Finally, remember that the joystick start-up procedure, which is particularly convenient during the boot, can also be manually invoked at any time, by launching the ikart_start.sh script from a console or through an ssh connection.
Additional/user modules
In this section only the core iKart modules have been described. A general description of the iKart navigation software can be found here. Additional user modules, providing other useful functionalities, are contained in the iKart repository. Please refer to the individual module documentation for a detailed description of their usage.
Using the iKart
Starting the robot
Before starting the iKart, verify that:
- The motor switch is on.
- The fault button is not pressed. If you do not want to use the fault button, plug the provided jumper into the fault connector. Remember that in this way you will not be able to stop the iKart and iCub in case of problems, so use the jumper with caution.
- The iCub arms are properly positioned (the iKart may not pass through a door if its arms are extended, and no checks on the robot posture are performed during navigation!)
- The joystick is turned on. Remember that in case of problems, the joystick commands always have priority over the navigation software and the other user modules.
You can now turn the main panel switch to the battery operated mode, and AFTER doing this you can disconnect the external power supply cable.
Using the gYarpManager
iKart can also be started using the gYarpManager tool. All the iKart applications (including the navigation modules, the GUIs, etc.) can be started using the corresponding .xml scripts, which have to be previously installed with the command make install_applications from the $IKART_ROOT directory (see also the installation section).
Launching from command line
This is not the recommended way to start the iKart, since there are several modules to be launched and the port connections have to be done manually. However, the procedure is described here for the sake of completeness.
$ iCubInterface --context iKart --config conf/iKart.ini
$ iKartCtrl
$ joystickCtrl --context joystickCtrl --from conf/ikart.ini --silent
$ yarp connect /joystickCtrl:o /ikart/joystick:i
Remember that all the above modules must be launched on the iKart machine (on which the yarp server must already be running).
Recharging the battery
The battery can be recharged both when the iKart is off and when the iKart is running on the external power supply.
- Recharging the battery when iKart is off.
- Connect the battery recharge cable.
- Put the power supply switch of the iKart frontal panel in the recharge position. The red led will turn on.
- Press the start button on the battery charger.
- The charger will automatically turn off when the recharge is complete.
- The recharge cable can be disconnected.
- Before turning on the iKart, remember to put the battery switch back to the external power supply position or the battery position.
- Recharging the battery during normal operation (iKart is on).
- Connect the battery recharge cable. Do not disconnect the external power supply cable.
- Put the battery switch of the main panel in the recharge position. The red led will turn on.
- Press the start button on the battery charger.
- You can see the battery state of charge using the iKartBatteryDisplay application.
- The charger will automatically turn off when the recharge is complete.
- The recharge cable can be disconnected. Do not disconnect the external power supply cable.
- The power supply switch of the iKart frontal panel can now be put back in the external power supply position.
- It is not possible to recharge the battery and at the same time use the battery as the primary power supply with the external power supply disconnected.
iKart (re)installation
Additional information related to the installation/configuration of the iKart software is reported in this section.
Firmware
The two motor control boards use a firmware version which is different from the iCub's. The firmware version is called 3.51 and the binary file can be found in $ICUB_ROOT/firmware/build/2BLL.iKart.out.S. The firmware of the boards can be upgraded using the canLoader application, as in iCub. The CAN interface must be set to socketCAN and the bus number to 0.
Drivers
Esdcan driver
iKart uses an ESD CAN-USB2 interface to communicate with the motor control boards.
The driver of the CAN-USB2 interface is contained in the socketcan linux package (see the socketcan documentation for further details). You should check the installation of the socketcan package from the kernel module configuration menu (by launching make menuconfig from the /usr/src/linux directory). Please be sure that the following kernel modules/options are installed/enabled:
- networking support
- CAN bus subsystem support
- Raw CAN Protocol
- Broadcast Manager CAN Protocol
- CAN Device Drivers
- CAN bit-timing calculation
After compiling the kernel, you can check that the driver modules are correctly loaded by:
- typing dmesg and searching for a string similar to esd_usb2 2-1:1.0: device can0 registered.
- modprobing the modules can, can_raw, can_bcm.
In addition to the kernel module installation, the can-usb2 driver has to be initialized by the user before using it. This operation is automatically performed at the end of the iKart boot sequence, by a line contained in the /etc/rc.local script:
ip link set can0 up type can bitrate 1000000
After executing this command, the CAN interface will also be visible in the list of interfaces shown by the command ifconfig. As a final check that everything is working correctly, you can run the canLoader application:
canLoader
From the GUI, choose the socketcan interface and click on the connect button. If the list of the motor control boards appears in the window below, the CAN interface is properly configured.
Joystick driver
iKart uses the xboxdrv userspace driver. Since the driver runs in user space, no additional kernel modules are required. Instead, it is highly recommended to blacklist the xpad kernel module (which is the Ubuntu default choice), since the two drivers can conflict.
This operation is performed by editing the file /etc/modprobe.d/blacklist.conf and adding the line:
blacklist xpad
To install the xboxdrv userspace driver simply do:
sudo apt-get install xboxdrv
The xboxdrv userspace driver is automatically launched at the end of the iKart boot sequence by the following line of the /etc/rc.local file:
xboxdrv --silent
Finally, in order to help the joystickCtrl module properly recognize the joystick configuration (the number of axes, buttons, etc.), the following line has to be added to the .bashrc file:
export SDL_LINUX_JOYSTICK="'Xbox Gamepad (userspace driver)' 8 0 0"
NOTES:
- If you want to stop and restart the xboxdrv driver, remember that it must be executed with superuser privileges.
- If you want to use a different joystick type, the SDL_LINUX_JOYSTICK variable has to be changed accordingly.
Permissions
The serial ports /dev/ttyACM0 (laser scanner) and /dev/ttyUSB0 (battery control system) must have the right permissions in order to be accessed. Add the icub user to the dialout group with the following command:
usermod -a -G dialout icub
Serial port communication can be tested using the gtkterm utility.
- /dev/ttyACM0 (laser) configuration: 56800 baud, 8 bits, even parity
- /dev/ttyUSB0 (battery control system) configuration: 56800 baud, 8 bits, even parity
Start-up scripts
The main iKart start-up scripts are stored in the $IKART_ROOT/app/iKart/ikart_boot directory. In particular:
- rc.local: This is the main script that initializes the hardware drivers. It is invoked by the operating system during the boot sequence. During the iKart installation, the script must be copied to the /etc directory. All the commands included in this file are executed with super-user privileges.
- ikart_start.sh: It is invoked by the /etc/rc.local script. The script checks if joystick activity is detected and, if so, automatically starts the yarp server and the iKart motor interface. The script must be copied to the $ICUB_BIN directory.
NOTE: By default, the yarp server launched by this script is the standard yarp server, not yarpserver3. If you want to launch yarpserver3 you can uncomment the corresponding line in the ikart_start.sh script.
- ikart_stop.sh: It can be executed by the user to stop the modules previously launched by ikart_start.sh. The script must be copied to the $ICUB_BIN directory.
- .bashrc: This script configures the user environment variables. It includes the default search paths for the yarp/icub software. The script must be copied to the $HOME directory.
- ikart_motors_run.sh: a shortcut script that reactivates the iKart motors from the idle state (e.g. if the fault button has been pressed).
- ikart_motors_idle.sh: a shortcut script that puts the iKart motors into the idle state.
ADDITIONAL NOTES: The following line must be added to /etc/sudoers to allow the iKartBatteryManager module to perform the emergency shutdown:
icub ALL=(ALL) NOPASSWD: /sbin/shutdown
Yarp and iCub repositories
The yarp and iCub repositories are located under the folders /usr/local/src/robot/yarp2 and /usr/local/src/robot/iCub respectively.
- The following libraries have been previously installed (these are standard libraries also required for iCub):
- - GTK2.0 Rel. 2.14
- - GTKMM Rel. 2.14
- - QT3
- - GNU Scientific Library, GSL Rel. 1.14
- - OpenCV 2.0
- - Open Dynamics Engine: ODE Rel. 0.10
- - Simple DirectMedia Layer: SDL Rel. 1.2
- - Interior Point OPTimizer library: Ipopt Rel. 3.5.0
[TO BE COMPLETED] icub-common
- To compile the Yarp repository on the iKart, follow these steps:
cd $YARP_ROOT/build
ccmake ..
check that the following mandatory cmake options are turned on:
CREATE_GUIS
CREATE_LIB_MATH
CREATE_DEVICE_LIBRARY_MODULES
ENABLE_yarpmod_serial
ENABLE_yarpmod_serialport
ENABLE_yarpmod_laserHokuyo
Once cmake has generated the makefiles, compile the yarp repository:
make
- To compile the iCub repository on the iKart, follow these steps:
cd $ICUB_ROOT/main/build
ccmake ..
check that the following mandatory cmake options are turned on and generate the project:
ENABLE_icubmod_canmotioncontrol
ENABLE_icubmod_debugInterfaceClient
ENABLE_icubmod_socketcan
compile the repository and install the iCub applications:
make install
make install_applications
- To compile the iKart contrib repository, first add the following lines to the ~/.bashrc file:
export IKART_ROOT=/usr/local/src/robot/iCub/contrib/src/iKart
export IKART_DIR=/usr/local/src/robot/iCub/contrib/src/iKart/build
then run ccmake on the iKart contrib repository:
cd $IKART_ROOT/build
ccmake ..
compile the repository and install the iKart applications:
make install
make install_applications
finally append the contents of the file $IKART_ROOT/app/ikart/ikart_boot/rc.local at the bottom of the /etc/rc.local file.
- iKart also acts as a server for the Yarp and iCub repositories mounted by the PC104. These repositories are located in the folders:
- - /exports/pc104/ [TO BE COMPLETED]
- - /exports/pc104/ [TO BE COMPLETED]
The two repositories must be compiled from the PC104, NOT from the iKart. [TO BE COMPLETED]
Network configuration
iKart is a standard PC, so it does not require any particular configuration. The PC104, instead, must be configured to mount the repository from the iKart machine. This configuration is suggested because in this way the repository can always be accessed by the PC104 using a wired network. Wireless mounting is discouraged.
Basic configuration
The basic iKart network configuration is contained in /etc/hosts and /etc/network/interfaces. An example is reported here:
/etc/hosts
127.0.0.1 localhost
127.0.1.1 ikart
10.0.0.10 pc104
10.0.0.54 console
/etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.0.0.11
netmask 255.255.255.0
gateway 10.0.0.12
PC104 configuration
The PC104 is configured to mount the repository from the iKart.
Relevant scripts can be found in the repository. /etc/hosts is configured to have the following line:
10.0.0.11 ikart
iKart and ROS
It is possible to interface the iKart with ROS using the ikart_ros_bridge application. The application must run on a machine with both Yarp and ROS installed. We suggest installing ROS not on the iKart but on a different machine.
ROS installation
We have currently tested the ikart_ros_bridge using:
1) the electric ROS distribution on a Ubuntu 11.04-natty machine.
2) the fuerte ROS distribution on a Ubuntu 12.04-precise machine.
The packages included in other ROS distributions may have different names and some work may be required to make the ikart_ros_bridge run properly. The steps to install the suggested ROS distribution are reported below:
sudo sh -c 'echo "deb precise main" > /etc/apt/sources.list.d/ros-latest.list'
wget -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get install ros-fuerte-desktop-full
After the installation, the ~/bash_icub_env file must be edited to set the new environment variables:
source /opt/ros/fuerte/setup.bash
export ROS_ROOT=/opt/ros/fuerte/ros
export PATH=$ROS_ROOT/bin:$PATH
export PYTHONPATH=$ROS_ROOT/core/roslib/src:$PYTHONPATH
export ROS_PACKAGE_PATH=~/ros_workspace:/opt/ros/fuerte/stacks:$ROS_PACKAGE_PATH
The iKart ros bridge
The ikart_ros_bridge module connects to the output ports opened by the iKartCtrl module and publishes the corresponding topics in the ROS workspace. Vice versa, motor commands sent by a generic ROS node can be translated by the ikart_ros_bridge into the proper commands which can be executed by the iKartCtrl module.
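As an illustration of this translation (a sketch only, not the actual bridge code; the frame mapping and the function name are assumptions), a ROS-style velocity command with x forward, y left and a counterclockwise-positive yaw rate could be converted into the iKartCtrl metric format roughly as follows:

```python
import math

def twist_to_ikart(vx_forward, vy_left, wz_rad):
    """Convert a ROS-style velocity command into the 4-value metric
    bottle accepted by iKartCtrl (protocol identifier 2).

    Assumed conventions: ROS x is forward and y is left, while the iKart
    heading is 0 deg when moving forward and grows clockwise (rightward).
    """
    # The iKart "rightward" component is minus the ROS leftward component.
    vx_right = -vy_left
    heading = math.degrees(math.atan2(vx_right, vx_forward))
    linear_speed = math.hypot(vx_forward, vy_left)
    angular_speed = math.degrees(wz_rad)  # iKartCtrl expects deg/s
    return [2, heading, linear_speed, angular_speed]

print(twist_to_ikart(0.5, 0.0, 0.0))  # straight forward: heading 0 deg
print(twist_to_ikart(0.0, 0.5, 0.0))  # to the left: heading -90 deg
```

The sign convention assumed here for the angular speed (counterclockwise-positive, carried over unchanged from ROS) is also an assumption.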
Ports and topics accessed by ikart_ros_bridge
A list of the yarp ports / ros topics used by ikart_ros_bridge is reported below:
ikart_ros_bridge input ports:
- /ikart_ros_bridge/laser:i: it receives the laser scanner data from /iKartCtrl/laser:o
- /ikart_ros_bridge/odometry:i: it receives the odometry data from /iKartCtrl/odometry:o
- /ikart_ros_bridge/odometer:i: it receives the odometer data from /iKartCtrl/odometer:o
- /ikart_ros_bridge/goal:i: it receives the user goal to be sent to the ROS navigation stack.
ikart_ros_bridge output ports:
- /ikart_ros_bridge/command:o: it broadcasts the current iKart velocity commands to be sent to /iKartCtrl/command:i
- /ikart_ros_bridge/localization:o: it broadcasts the current iKart localized position.
ikart_ros_bridge subscribed topics:
- /cmd_vel: it receives the iKart velocity commands computed by the navigation stack.
- /tf: it receives the transformations related to the reference frames: <map>, <odom>.
ikart_ros_bridge published topics:
- /tf: it broadcasts the transformations related to the reference frames: <base_link>, <base_laser>, <home>.
- /ikart_ros_bridge/laser_out: it broadcasts the received laser scanner data.
- /ikart_ros_bridge/odometry_out: it broadcasts the received odometry data.
- /ikart_ros_bridge/odometer_out: it broadcasts the received odometer data.
The connections between ikart_ros_bridge and the other applications during a typical navigation demo are shown in the picture below (click to enlarge).
Compiling iKart_ros_bridge
Before compiling, you have to configure your environment: edit your .bashrc (e.g. with gedit .bashrc) and add the path to the source folder of iKartRosBridge to the ROS_PACKAGE_PATH environment variable.
For example:
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:/usr/local/src/robot/iCub/contrib/src/iKart/src/iKartRosBridge
Now you can compile the ikart_ros_bridge:
ccmake
make
[TO BE COMPLETED]
Running the ikart_ros_bridge
1. Check that the machine can reach the yarp nameserver located on the iKart:
yarp check
If there are problems, verify that:
- you can ping the iKart: ping ikart
- you are using the correct yarp namespace: yarp namespace <your_namespace_name>
If you still cannot find the nameserver, maybe a wrong default nameserver address has been saved in the configuration of your machine. You can force Yarp to find the nameserver at the specific iKart ip address and save this configuration using the command:
yarp conf <ikart_ip_address> 10000 --write
2. Start the ROS name server:
roscore
3. Start the core iKart applications (joystickCtrl, iKartCtrl, etc.). Further information is reported in the iKart user manual.
4. Start the ikart_ros_bridge application:
ikart_ros_bridge
Reference frames used by ikart_ros_bridge
The ikart_ros_bridge application uses the following reference frames under the /tf topic:
- <base_link> internal use only.
- <ikart_root> the mobile base local reference system. gotoRel <x> <y> <angle> commands (see [here]) are expressed in this reference system.
- <base_laser> the frontal laser scanner reference system. It is linked to <ikart_root>.
- <robot_root> the standard iCub root reference frame (definition here). It is linked to <ikart_root>.
- <map> It is the fixed world reference frame. It is broadcast by the ROS map server. The origin of this reference frame is constant and saved in the map file.
- <home> The user can change the location of the <home> reference frame using the set home RPC command. By default <home> = <map>. If <home> is changed, the position of the mobile platform is expressed in <home>. ikart_ros_bridge broadcasts the localized position of the iKart on the /ikart_ros_bridge/localized:o yarp port.
gotoAbs <x> <y> <angle> commands (see [here]) are expressed in this reference system.
- <odom> If the map-localization is not employed, <odom> represents the fixed world reference frame. However, the origin of this reference frame is not constant: it corresponds to the point where ikartCtrl was started (or where its odometry was reset by the user). This reference frame is not published if ikart_ros_bridge is launched using the option --no_odom_tf.
- <userXX> the user can set their own reference frames by sending a set frame RPC command. The <userXX> frames are expressed in the <world> fixed reference frame.
The tree of the reference frames, as produced by the ROS view_frames utility, is shown in the picture below:
rosrun tf view_frames
In the picture below, an example screenshot taken from rviz displays some of the transformations between the above mentioned reference frames. <map> is the fixed reference frame. <ikart_root> and <robot_root> are the local reference frames.
RPC commands used by ikart_ros_bridge
The ikart_ros_bridge module provides a port /ikart_ros_bridge/rpc which can be used to send rpc commands. The list of the available rpc commands is reported below:
- help displays the list of the available commands
- set home <x> <y> <angle> sets the origin and the orientation of the <home> reference frame. The provided values must be expressed in the <map> reference frame.
- set current home sets the origin and the orientation of the <home> reference frame using the current position of the iKart.
Additionally, the following three commands are used to interact with the ROS navigation stack (more on this here):
- gotoAbs <x> <y> <angle> sets the goal for a navigation task, expressed in the <home> reference frame.
- gotoRel <x> <y> <angle> sets the goal for a navigation task, expressed in the <ikart_root> reference frame.
- stop stops the current navigation task
Running the SLAM gmapping node
The following script will start the gmapping node and the ROS visualizer rviz:
roslaunch ikart_build_map.launch
Please refer to the gmapping package documentation for further information about the configuration parameters used inside the script.
When you are satisfied with the map created with gmapping, you can save it to a file:
rosrun map_server map_saver -f map_filename
Alternatively you can run the shortcut:
$IKART_ROS_BRIDGE/launch/save_map.sh map_filename
Note: You can edit the saved map using a standard graphical editor: the color codes used for each pixel represent blocking walls (black) or free areas (light grey). By adding black lines or clearing areas on the map you can instruct the navigation package to keep the robot away from those areas or to grant access to them.
The gmapping node is used to create a map of the environment. During normal operation/navigation, instead, you may just want to localize the iKart in a previously saved map. A saved map can be loaded using the command:
rosrun map_server map_server map_filename.yaml
Alternatively you can run the shortcut:
$IKART_ROS_BRIDGE/launch/load_map.sh map_filename
Running the AMCL localization on a previously saved map
The localization task is performed by the AMCL (Adaptive Monte Carlo Localization) module. This module compensates for the cumulative error of the wheel odometry, providing an accurate estimate of the robot position with respect to a fixed reference frame. The AMCL module can be configured to estimate the robot position using:
- the laser data AND the wheel odometry.
- the laser data only.
Using AMCL with iKart wheels odometry
This is the classical way to run AMCL. The odometry information and the laser scans are obtained from the ikartRosBridge module (through the /tf topic and the /ikart_ros_bridge/laser_out topic).
All configuration parameters are included in the script:
roslaunch ikart_localize.launch
Using AMCL with estimated laser odometry
Recently we investigated the accuracy of the estimated wheel odometry. Since the wheels may have non-negligible slippage on certain surfaces, it is possible to prevent AMCL from using the wheel odometry data and to use only the laser information. To do so, it is required to run a ROS module called laser_scan_matcher, which estimates the odometry using the laser information. Currently the laser_scan_matcher is not included in the standard ros distribution, but you can follow the installation instructions provided here. The following script contains all the required configuration to run AMCL with laser_scan_matcher. Please note that the iKartRosBridge must be launched using the --no_odom_tf option, otherwise it will conflict with the odometry estimation provided by the laser_scan_matcher module.
roslaunch ikart_laseronly_localize.launch
Example scripts
The following example scripts can be used to start/stop all AMCL-related ROS modules at once:
- ros_start_all_odom.sh (wheels odometry) to start: roscore, iKartRosBridge, mapserver, amcl and rviz
- ros_start_all_no_odom.sh (laser odometry) to start: roscore, iKartRosBridge, laser_scan_matcher, mapserver, amcl and rviz
A corresponding stop script stops all the ros modules, including iKartRosBridge, mapserver, amcl and rviz.
Guess of the initial pose
The localization node must be initialized by giving the node an estimated position of the iKart on the map. This initial guess must be published on the /initialpose topic. The AMCL node will then try to localize the iKart by searching for a match between the received laser scans and the map, updating this information with the odometry data. In order to facilitate the localization process, it is important to give the node a good guess of the robot's initial position.
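For reference, the orientation part of the /initialpose message is a quaternion. The following minimal Python sketch (illustrative only; the helper name is hypothetical, and real code would publish a geometry_msgs/PoseWithCovarianceStamped through ROS) builds the pose fields from an (x, y, theta) guess expressed in the map frame:

```python
import math

def initial_pose_guess(x, y, theta_rad):
    """Turn an (x, y, theta) guess into the pose fields expected on
    /initialpose (a planar pose with a yaw-only quaternion)."""
    return {
        "position": {"x": x, "y": y, "z": 0.0},
        # Planar rotation about the z axis: qz = sin(theta/2), qw = cos(theta/2).
        "orientation": {
            "x": 0.0,
            "y": 0.0,
            "z": math.sin(theta_rad / 2.0),
            "w": math.cos(theta_rad / 2.0),
        },
    }

pose = initial_pose_guess(1.0, 2.0, math.pi / 2)  # facing 90 deg on the map
print(pose["orientation"])
```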
You can also specify the iKart initial pose graphically, using the 2D Pose Estimate tool of the ROS graphical interface rviz: click on the map and drag an arrow to select the orientation (keyboard shortcut: 'p'). After setting a reasonably good guess of the initial position, the estimated localization of the robot will automatically improve as soon as it moves around, as shown in the following sequence of pictures (the red uncertainty cloud progressively shrinks just by rotating the robot in place). For further information about the AMCL package, refer to the official documentation:

Opening the visual GUI rviz

You can use one of the following scripts (the display is optimized for the different tasks):

roslaunch rviz_build_map.launch
roslaunch rviz_navigate.launch

As reported in the rviz manual, there are several issues that affect the module, most of them related to the opening/redirection of the display and the use of 3D hardware acceleration. In particular, if rviz segfaults during startup, it is most likely that your system does not support the default rendering mode, called PBuffer. You can change it by using one of the three following modes:

export OGRE_RTT_MODE=PBuffer (default)
export OGRE_RTT_MODE=Copy
export OGRE_RTT_MODE=FBO

Additional info about this issue can be found here:

NOTE: Currently it's not possible to run rviz in VirtualBox, since 3D hardware acceleration is not supported yet.

Several modules can be used to perform autonomous navigation tasks:
- iKartNav
- iKartGoto
- iKartPathPlanner <--- currently, the preferred choice
- the ROS navigation stack

A very quick comparison between the module capabilities is shown in the following table:

All these modules can be controlled using the same interface, i.e. sending one of the following commands to their rpc port:
- "gotoAbs" <x> <y> <angle> sets as goal the coordinates <x>, <y>, <angle> expressed in the world reference frame.
<angle> is optional; if not specified, the robot will maintain the orientation assumed during the navigation path.
- "gotoRel" <x> <y> <angle> sets as goal the coordinates <x>, <y>, <angle> expressed in the mobile base reference frame. <angle> is optional; if not specified, the robot will maintain the orientation assumed during the navigation path.
- "stop"
- "pause"
- "resume"
- "quit"

In the following sections these commands are referred to as standard navigation commands.

NOTE: The orientation of the world (absolute) and the local (relative) reference frames has been previously defined here.

The navigation modules also have a standard description of their internal status, typically broadcast through a yarp port (e.g. /XXX/status:o) or accessible on rpc request. Currently, seven possible statuses are implemented (status.h):
- IDLE: the navigation module is ready to receive a new goal command.
- MOVING: the robot is currently moving to reach the commanded target. Depending on the module, it may or may not be able to receive a new goto command.
- WAITING_OBSTACLE: the robot is trying to reach the commanded target, but an obstacle is currently blocking the path. Depending on the module, a timeout event can be triggered, stopping the navigation if the obstacle is not removed after a certain amount of time.
- REACHED: the last goto command has been successfully executed (goal reached).
- ABORTED: the module detected a deadlock condition which prevents the goal from being reached (e.g. an obstacle is obstructing the path, or no path can be found).
- PAUSED: the module has paused the navigation task after a "pause" command. It can be resumed with a "resume" command.
- THINKING: the new goal has been accepted and the module is currently computing the path.

The possible status transitions are depicted in the picture below.

iKartNav is a reactive navigation module: the platform uses only its sensory information (e.g.
the odometry, the laser scanner) to locally compute the movement direction towards the goal. No a-priori knowledge of the environment (e.g. a map) is used. This is particularly useful when it is required to perform a navigation task in an unknown, unmapped environment. On the other hand, the path followed by the robot may be sub-optimal in complex environments: in this case large areas have to be explored before finding the right path to the goal.

The navigation algorithm is based on artificial potential fields. The goal represents the attractor, while the obstacles detected by the laser scanner generate a repulsive potential field. The intensity of this potential field is given by the equation:

We can define the associated repulsive force f as:

Additionally we define two unit vectors g, t as follows:
- g: the normalized direction towards the goal
- t: the unit vector normal to the repulsive force f (thus tangent to the equipotential line U = UM)

The direction of the robot v is thus computed as the weighted superimposition of the attractive and repulsive forces:

v = wg g + wf f + wt t

where the weights wg, wf, wt are set according to the following rules:
- wg = 1 - UM if t · g >= 0
- wg = 0 if t · g < 0
- wf = UM²
- wt = UM

In the above formula the attraction towards the target g is prevalent when the robot is far from obstacles, and it is overridden by f and t when close to the obstacles. Since UM = maxi{Ui} ∈ [0;1], the tangent part t is felt before the repulsive part f, so the repulsive part acts as an emergency repulsive force when the robot gets very close to the obstacle.

Additional info:
- Ros/map are not used.

How to use:
- Launch the KartNav.xml application.

iKartGoto

iKartGoto is a basic point-to-point navigation module which executes a straight trajectory from the robot's current location to the user-specified goal.
Since the module is only able to generate straight trajectories (possibly corrected to avoid an obstacle on the path), it is not suited for complex navigation tasks in structured environments. Instead, iKartGoto is intended to ensure that the robot reaches a pre-computed waypoint of a more complex path. In order to guarantee reaching the specified goal, the module internally uses a PID controller whose feedback is the current position of the mobile base (obtained either from the robot's odometry or from an external map localization module, such as AMCL).

The module also implements an obstacle detection/avoidance mechanism. The two different behaviors, corresponding to two different detection areas (A and B in the picture below), can be independently activated by the user. The obstacle avoidance behavior activates when an obstacle is detected in the lateral 'A' areas. In this case the robot will correct its trajectory, according to the repulsive potential field generated by the obstacle. Instead, if an obstacle is detected in the frontal 'B' area, corresponding to the movement direction, the robot will stop and wait until the obstacle is removed. If the obstacle is not removed by the user, a timeout event will be triggered and the navigation command will terminate.

As mentioned before, iKartGoto is used to perform simple point-to-point navigation tasks or to track a sequence of waypoints pre-computed by a path planner. In this latter case the module does not accept new goals until the current destination is reached (if the user wants to send a new goal, a stop command has to be sent first). When the current goal is reached, a REACHED status is broadcast.

Additional info:
- iKartGoto is used by the iKartPathPlanner module to track the computed waypoints.
- The navigation speed, the goal tolerance and the parameters controlling the obstacle avoidance behavior are stored in the iKartGoto.ini file.
- Laser odometry (and not wheel odometry) is suggested for this type of navigation (read again here)

How to use:
- Start ROS and the localization service with the script ros_start_all_no_odom.sh (you have to load your own map)
- Launch the iKartGoto.xml application.

iKartPathPlanner

The iKartPathPlanner module allows performing complex navigation tasks (e.g. reaching a different room on the provided map) by computing a waypoint-based path. When the module is launched, the provided map file is internally converted to a graph representation, in which each node corresponds to a square cell of fixed size (typically 5x5 cm). Obstacles (e.g. walls, corners etc.) are enlarged taking into account the diameter of the mobile base, in order to simplify the computation of the navigation path. When the user issues a new navigation command, the plan is computed by performing an A* search on the graph, and the raw set of nodes corresponding to the minimum path is returned. The path is subsequently refined by reducing the set of employed waypoints to those whose direct connection does not cross an obstacle. Depending on the complexity of the map and the position of the goal, the algorithm may take a few seconds to compute the path.

Once the global navigation plan is computed, each waypoint is sequentially sent to the local point-to-point trajectory executor (e.g. the iKartGoto module), which will pursue the local goals one after the other until the final destination is reached. The position of the mobile platform and the current path are sent through a yarp port in order to be visualized using the standard yarpview tool.

Additional info:

How to use:
- Start ROS and the localization service with the script ros_start_all_no_odom.sh (you have to load your own map)
- Launch the iKartPathPlanner.xml application.

Another possibility is to directly use the ROS navigation stack.
You can start the navigation stack by launching the following script (the AMCL node must be already running):

roslaunch ikart_navigation.launch

You can send any of the standard navigation commands to the navigation stack using the port /ikart_ros_bridge/command:i

The motor control output coming from ikart_ros_bridge has to be connected to the ikartCtrl:

yarp connect /ikart_ros_bridge/command:o /ikart/control:i

You can also send navigation commands using the 2D nav goal tool of the ROS graphical interface rviz: click on a location on the map and drag an arrow to select the orientation (keyboard shortcut: 'g').

For further information about the navigation stack, refer to the official documentation:

iKart dimensions

The picture below reports the most significant measurements related to the odometry computation and the main geometric transformations between the iKart reference frames.

Image ikart_dimensions.jpg

Important things to remember
- The external power supply settings are different from those of the iCub. The supply voltage must be set to 52.5V and the current limit to 18A.
- The iCub configuration file must be edited in order to disable the robot legs.
- When moving the iKart around, be sure that the arms are in a safe position (i.e. not fully extended) in order to avoid collisions with the surrounding environment.
- Remember to replace the wireless joystick batteries regularly.
- NEVER plug/unplug the power cable connecting the iCub to the iKart if the power supply switches of the pc104/motors are on. Failure to observe this will seriously damage the electronics!

Common customizations

Using grub

It's possible to use grub to install multiple OSes on the iKart (NB: since the iKart is not equipped with a keyboard/video, a default selection and a timeout have to be defined). Some users have pointed out that if the iKart hangs up (it may happen), the default grub behavior is to turn off the timeout, in order to let the user select which entry to boot.
This however has a drawback: the machine will wait indefinitely for user input, but the user does not know what is really happening (because there is no video) and is led to think that the machine is stuck. To fix this problem, Linux forums suggest editing the file /etc/grub.d/00_header with the code below (not tried yet):

cat << EOF
if [ \${recordfail} = 1 ]; then
  set timeout=-1
else
  set timeout=${GRUB_TIMEOUT}
fi
EOF

Support

If you have questions, or you experience difficulties, please contact the rc-hackers mailing list: robotcub-hackers@lists.sourceforge.net
- NAME
- SYNOPSIS
- DESCRIPTION
- API OVERVIEW
- CONSTRUCTOR
- METHODS
- BUGS AND LIMITATIONS
- AUTHOR
- SEE ALSO

NAME

GnuPG - Perl module interface to the GNU Privacy Guard (v1.x.x series)

SYNOPSIS

use GnuPG qw( :algo );

my $gpg = new GnuPG();

$gpg->encrypt( plaintext => "file.txt", output => "file.gpg",
               armor => 1, sign => 1, passphrase => $secret );

$gpg->decrypt( ciphertext => "file.gpg", output => "file.txt" );

$gpg->clearsign( plaintext => "file.txt", output => "file.txt.asc",
                 passphrase => $secret, armor => 1 );

$gpg->verify( signature => "file.txt.asc", file => "file.txt" );

$gpg->gen_key( name => "Joe Blow", comment => "My GnuPG key",
               passphrase => $secret );

DESCRIPTION

GnuPG is a perl interface to the GNU Privacy Guard. It uses the shared memory coprocess interface that gpg provides for its wrappers. It tries its best to map the interactive interface of gpg to a more programmatic model.

API OVERVIEW

The API is accessed through methods on a GnuPG object which is a wrapper around the gpg program. All methods take their arguments using named parameters, and errors are returned by throwing an exception (using croak). If you want to catch errors you will have to use eval.

When handed a file handle for input or output parameters on many of the functions, the API attempts to tie that handle to STDIN and STDOUT. In certain persistent environments (particularly a web environment), this will not work. This problem can be avoided by passing file names to all relevant parameters rather than a Perl file handle.

There is also a tied file handle interface which you may find more convenient for encryption and decryption. See GnuPG::Tie(3) for details.

CONSTRUCTOR

new ( [params] )

You create a new GnuPG wrapper object by invoking its new method. (How original!) The module will try to find the gpg program in your path and will croak if it can't find it. Here are the parameters that it accepts:

- gnupg_path Path to the gpg program.
- options Path to the options file for gpg. If not specified, it will use the default one (usually ~/.gnupg/options).
- homedir Path to the gpg home directory. This is the directory that contains the default options file, the public and private key rings as well as the trust database.
- trace If this variable is set to true, gpg debugging output will be sent to stderr.

Example: my $gpg = new GnuPG();

METHODS

gen_key( [params] )

This method is used to create a new gpg key pair. The method croaks if there is an error. It is a good idea to press random keys on the keyboard while running this method because it consumes a lot of entropy from the computer. Here are the parameters it accepts:

- algo This is the algorithm used to create the key. Can be DSA_ELGAMAL, DSA, RSA_RSA or RSA. It defaults to DSA_ELGAMAL. To import those constants in your name space, use the :algo tag.
- size The size of the public key. Defaults to 1024. Cannot be less than 768 bits, and keys longer than 2048 are also discouraged. (You *DO* know that your monitor may be leaking sensitive information ;-).
- valid How long the key is valid. Defaults to 0, i.e. never expires.
- name This is the only mandatory argument. This is the name that will be used to construct the user id.
- email Optional email portion of the user id.
- comment Optional comment portion of the user id.
- passphrase The passphrase that will be used to encrypt the private key. Optional but strongly recommended.

Example: $gpg->gen_key( algo => DSA_ELGAMAL, size => 1024, name => "My name" );

import_keys( [params] )

Import keys into the GnuPG private or public keyring. The method croaks if it encounters an error. It returns the number of keys imported. Parameters:

- keys Only parameter and mandatory. It can either be a filename or a reference to an array containing a list of files that will be imported.

Example: $gpg->import_keys( keys => [ qw( key.pub key.sec ) ] );

export_keys( [params] )

Exports keys from the GnuPG keyrings.
The method croaks if it encounters an error. Parameters:

- keys Optional argument that restricts the keys that will be exported. Can either be a user id or a reference to an array of user ids that specifies the keys to be exported. If left unspecified, all keys will be exported.
- secret If this argument is set to true, the secret keys rather than the public ones will be exported.
- all If this argument is set to true, all keys (even those that aren't OpenPGP compliant) will be exported.
- output This argument specifies where the keys will be exported. Can be either a file name or a reference to a file handle. If not specified, the keys will be exported to stdout.
- armor Set this parameter to true if you want the exported keys to be ASCII armored.

Example: $gpg->export_keys( armor => 1, output => "keyring.pub" );

encrypt( [params] )

This method is used to encrypt a message, using either asymmetric or symmetric cryptography. The method croaks if an error is encountered. Parameters:

- plaintext This argument specifies what to encrypt. It can be either a filename or a reference to a file handle. If left unspecified, STDIN will be encrypted.
- output This optional argument specifies where the ciphertext will be output. It can be either a file name or a reference to a file handle. If left unspecified, the ciphertext will be sent to STDOUT.
- armor If this parameter is set to true, the ciphertext will be ASCII armored.
- symmetric If this parameter is set to true, symmetric cryptography will be used to encrypt the message. You will need to provide a passphrase parameter.
- recipient If not using symmetric cryptography, you will have to provide this parameter. It should contain the userid of the intended recipient of the message. It will be used to look up the key to use to encrypt the message. The parameter can also take an array ref, if you want to encrypt the message for a group of recipients.
- sign If this parameter is set to true, the message will also be signed.
You will probably have to use the passphrase parameter to unlock the private key used to sign the message. This option is incompatible with the symmetric one.
- local-user This parameter is used to specify the private key that will be used to sign the message. If left unspecified, the default user will be used. This option only makes sense when using the sign option.
- passphrase This parameter contains either the secret passphrase for the symmetric algorithm or the passphrase that should be used to decrypt the private key.

Example: $gpg->encrypt( plaintext => "file.txt", output => "file.gpg", sign => 1, passphrase => $secret );

sign( [params] )

This method is used to create a signature for a file or stream of data. This method croaks on errors. Parameters:

- plaintext This argument specifies what to sign. It can be either a filename or a reference to a file handle. If left unspecified, the data read on STDIN will be signed.
- output This optional argument specifies where the signature will be output. It can be either a file name or a reference to a file handle. If left unspecified, the signature will be sent to STDOUT.
- armor If this parameter is set to true, the signature will be ASCII armored.
- passphrase This parameter contains the secret that should be used to decrypt the private key.
- local-user This parameter is used to specify the private key that will be used to make the signature. If left unspecified, the default user will be used.
- detach-sign If set to true, a digest of the data will be signed rather than the whole file.

Example: $gpg->sign( plaintext => "file.txt", output => "file.txt.asc", armor => 1 );

clearsign( [params] )

This method clearsigns a message. The output will contain the original message with a signature appended. It takes the same parameters as the sign method.

verify( [params] )

This method verifies a signature against the signed message. The method croaks if the signature is invalid or an error is encountered.
If the signature is valid, it returns a hash with the signature parameters. Here are the method's parameters:

- signature If the message and the signature are in the same file (i.e. a clearsigned message), this parameter can be either a file name or a reference to a file handle. If the signature doesn't follow the message, then it must be the name of the file that contains the signature.
- file This is a file name or a reference to an array of file names that contain the signed data.

When the signature is valid, here are the elements of the hash that is returned by the method:

- sigid The signature id. This can be used to protect against replay attacks.
- date The date at which the signature has been made.
- timestamp The epoch timestamp of the signature.
- keyid The key id used to make the signature.
- user The userid of the signer.
- fingerprint The fingerprint of the signature.
- trust The trust value of the public key of the signer. These are values that can be imported in your namespace with the :trust tag. They are (TRUST_UNDEFINED, TRUST_NEVER, TRUST_MARGINAL, TRUST_FULLY, TRUST_ULTIMATE).

Example: my $sig = $gpg->verify( signature => "file.txt.asc", file => "file.txt" );

decrypt( [params] )

This method decrypts an encrypted message. It croaks if there is an error while decrypting the message. If the message was signed, this method also verifies the signature. If decryption is successful, the method either returns the valid signature parameters if present, or true. Method parameters:

- ciphertext This optional parameter contains either the name of the file containing the ciphertext or a reference to a file handle containing the ciphertext. If not present, STDIN will be decrypted.
- output This optional parameter determines where the plaintext will be stored. It can be either a file name or a reference to a file handle. If left unspecified, the plaintext will be sent to STDOUT.
- symmetric This should be set to true if the message is encrypted using symmetric cryptography.
- passphrase The passphrase that should be used to decrypt the message (in the case of a message encrypted using a symmetric cipher) or the secret that will unlock the private key that should be used to decrypt the message.

Example: $gpg->decrypt( ciphertext => "file.gpg", output => "file.txt", passphrase => $secret );

BUGS AND LIMITATIONS

This module doesn't work (yet) with the v2 branch of GnuPG.

AUTHOR.

SEE ALSO

Alternative module: GnuPG::Interface

gpg(1)
Important: Please read the Qt Code of Conduct - Application falls after adding QtLocation to QML

Hi! I need to add a QML form to my QWidget app, so I decided to do it using QQuickWidget, but the application crashes(((. I have removed many parts of the code and left this:

main:

QApplication a(argc, argv);
MainWindow w;
w.show();
QQuickWidget *wq = new QQuickWidget();
wq->setResizeMode(QQuickWidget::SizeRootObjectToView);
wq->setSource(QUrl(QStringLiteral("../appp/user_windows/map/userwindowmap.qml")));
wq->show();
return a.exec();

In the constructor of MainWindow there is only ui->setupUi(this);

qml file:

import QtQuick 2.13
import QtPositioning 5.13

Rectangle {
    color: "red"
    Component.onCompleted: console.log("11111111111")
}

In the program output I saw:

qml: 11111111
The program has unexpectedly finished.

But the application works ok if I remove import QtPositioning 5.13. Help me please)))

By the way, it works fine if I call w.show() after wq->show()

So, so, so... I got this error:

plugin cannot be loaded for module "QtLocation": Cannot load library C:\Qt\Qt5.13.1\5.13.1\mingw73_32\qml\QtLocation\declarative_locationd.dll: Not enough memory resources are available to process this command.

My app was compiled for 32bit))) I have compiled for 64 and it works ok.

- SGaist Lifetime Qt Champion last edited by SGaist
Hi,
How much memory does your application currently use? What version of Qt are you using?

- vladstelmahovsky last edited by
@brmisha said in Application falls after adding QtLocation to QML:
what are the relations between your QQuickWidget and MainWindow?
I've added the AutoQuery Admin plugin to a project that we have started in .NET Core; however, I am unable to use the viewer, as I consistently get an error in the React UI (I know zero about React, unfortunately). The page halts before anything loads, with a gray background but no UI elements visible. Swagger works fine.

I have a single request that uses AutoQuery:

[Route("/query/SomethingWierdToTest", HttpMethods.Get)]
public class QuerySomethingWierdToTest : QueryDb<SomethingWierdToTest> { }

public class SomethingWierdToTest
{
    public int Id { get; set; }
}

The error is:

Uncaught (in promise) TypeError: t.attributes.filter is not a function
    at t.getAutoQueryViewer (AutoQuery.tsx:87)
    at AutoQuery.tsx:41
    at Array.forEach ()
    at new t (AutoQuery.tsx:39)
    at p._constructComponentWithoutOwner (ReactCompositeComponent.js:297)
    at p._constructComponent (ReactCompositeComponent.js:284)
    at p.mountComponent (ReactCompositeComponent.js:187)
    at Object.mountComponent (ReactReconciler.js:45)
    at p._updateRenderedComponent (ReactCompositeComponent.js:764)
    at p._performComponentUpdate (ReactCompositeComponent.js:723)

I downloaded the Admin code and attempted to run that to test as well, but am having a few issues launching any of the projects there. Is the Admin plugin up to date in .NET Core, or is it a similar story to the MiniProfiler?

AutoQuery Viewer works fine on .NET Core, e.g:

Does the Network inspector show any error responses?

No, the only error to show is in the console (the one I pasted originally). I downloaded the Northwind Core project and ran that; it works as advertised. In my project, however, as best I can tell, the line const type = this.getType(name) in:

getAutoQueryViewer(name: string) {
    const type = this.getType(name);
    return type != null && type.attributes != null ?
        type.attributes.filter(attr => attr.name === "AutoQueryViewer")[0] : null;
}

comes back "not defined" from the call to:

getType(name: string) {
    return this.props.metadata.types.filter(op => op.name === name)[0];
}

no matter what request name is being checked on the page initialization (e.g. QueryResources, QueryStuff, etc). I comment out my AutoQuery request messages one by one and the same error happens, only it moves to the next message type from those remaining... so it's like there isn't an issue with a particular AutoQuery (because I can actually query the endpoint using AutoQuery), but I am somehow misconfiguring my application such that the Admin plugin fails.

This is running localhost on IIS Express, but I am not sure where to start trying to debug this. Suggestions?

Can you provide a screenshot for the output of console.log(type, type.attributes)? If any of those are null or undefined can you also provide a screenshot of console.log(this.props.metadata) as well please.

console.log(type, type.attributes)
console.log(this.props.metadata)

I have another project that seems to work with 5.0.2 libs; I am attempting to update that to 5.0.3 (because the first project served as the base for the other one... copy-pasted code, then updated libs). That's all I can think of at the moment, but I will investigate more and get back to you with the screenshots and more details.

Going to bed now, but I think I have solved it. It appears to be the fact that I have my table classes in AutoQuery extending a base table (or even two, at times). The actual AutoQuery calls handle this fine... the Admin feature, not so much.
E.g.:

[Alias("Country")]
public class CountryTbl : SoftDeletedTable
{
    [PrimaryKey]
    [StringLength(2, 2)]
    public CountryCodeIso3166A2 Iso3166A2 { get; set; }

    [Index(Unique = true)]
    [Required]
    [StringLength(3, 3)]
    public CountryCodeIso3166A3 Iso3166A3 { get; set; }
}

public abstract class SoftDeletedTable
{
    public bool IsDeleted { get; set; }
}

public abstract class BaseTable : IHasTimeStamp
{
    [RowVersion]
    public byte[] RowVersion { get; set; }
}

So, this works:

[Route("/querytablethatisntextended")]
public class QueryCountryTbl : QueryDb<ThisWorks_CountryTableNotExtendingOtherClass> { }

[Alias("Country")]
public class ThisWorks_CountryTableNotExtendingOtherClass
{
    public CountryCodeIso3166A2 Iso3166A2 { get; set; }
    public CountryCodeIso3166A3 Iso3166A3 { get; set; }
}

but this doesn't:

[Route("/queryatablethatISextended")]
public class QueryCountryTbl : QueryDb<CountryTbl> { }

I have returned to this after finding a bit of free time. I went to strip out all the base tables from our codebase, only to find out that the above diagnosis is incorrect. In my application, even with base tables removed, the Admin UI crashes with the same React error whenever my Services are defined in a project other than the AppHost (i.e. when I have a myapp project, and myapp.services and myapp.service.messages projects).

I copy and paste a single request to query an Attachment table (the table POCO I have stripped of all attributes and noise), and the only way I can get the SS Admin UI to render at all is if I copy the (manually defined) AutoQuery service plus the AutoQuery request message into the same project as my .NET Core app is running from. And even when that renders, it is empty, with the "Please Sign In to see your available queries" message.

It's as though having my AutoQuery DTOs and/or services outside the AppHost project means they don't get wired up in time for the plugin feature to detect them, or something? Is that possible/likely?
That because they reside in a different project, they are added to the pipeline too late to be detected/compiled in an AdminUI-friendly way?

That is a small zip file with a cut-down solution with a single AutoQuery request and endpoint that shows the issue; sorry, I can't remember/see how to add an attachment here. The QueryAttachments request in the SomeDemoCustomAutoQueryEndpoint is what I am trying to view and query from the Admin UI plugin, to no avail.

The issue is because the default was changed to:

JsConfig.IncludeNullValues = true;

Commenting that out should make it work. Also you don't need to set JsConfig.EmitCamelCaseNames = true in .NET Core Apps as it's enabled by default.

Well... Now I feel somewhat stupid. What steps did you take to diagnose that, out of interest (so I can avoid rookie questions in future)? Thanks for the heads up about the defaults too.

Chrome's Web Inspector showed the error was trying to access the type.attributes property, so I had a look at the /autoquery/metadata response to see what was being returned for the attributes property and saw it was null, where it would normally contain an array if it exists, or no property at all if it didn't have any values. I then navigated to your AppHost and uncommented the JsConfig.IncludeNullValues line, which caused null values to be emitted.

Note the latest v5.0.3 that's now on MyGet should handle null values now as well. Although non-default configuration is typically not well tested, so I'd recommend avoiding changing the defaults if possible.

Same kind of issue on my side. Can't see my query in the viewer, but if I change to QueryData I see it, like:

[Route("/Query/Applications", Verbs = "GET")]
public class QueryApplications : QueryDb<Application> --> QueryData { }

I use the Safari browser. Any hints?
VB.NET Public Property on a single line

Is there any way I can put Public Properties on a single line in VB.NET like I can in C#? I get a bunch of errors every time I try to move everything to one line.

C#:

public string Stub { get { return _stub; } set { _stub = value; } }

VB.NET:

Public Property Stub() As String
    Get
        Return _stub
    End Get
    Set(ByVal value As String)
        _stub = value
    End Set
End Property

Thanks

EDIT: I should have clarified, I'm using VB 9.0.

You can use automatically implemented properties in both VB 10 and C#, both of which will be shorter than the C# you've shown:

public string Stub { get; set; }

Public Property Stub As String

For non-trivial properties it sounds like you could get away with putting everything on one line in VB - but because it's that bit more verbose, I suspect you'd end up with a really long line, harming readability...

Yes you can:

Public Property Stub() As String : Get : Return _stub : End Get : Set(ByVal value As String) : _stub = value : End Set : End Property

and you can even make it shorter and not at all readable ;-)

Public Property Stub() As String:Get:Return _stub:End Get:Set(ByVal value As String):_stub = value:End Set:End Property

Auto-Implemented Properties (Visual Basic guide, Language reference)

VB.NET program that uses property syntax:

Class Example
    Private _count As Integer
    Public Property Number() As Integer
        Get
            Return _count
        End Get
        Set(ByVal value As Integer)
            _count = value
        End Set
    End Property
End Class

Module Module1
    Sub Main()
        Dim e As Example = New Example() ' Set property.
e.Number = 1 ' Get property.): public class Propertor<T> { public T payload; private Func<Propertor<T>, T> getter; private Action<Propertor<T>, T> setter; public T this[int n = 0] { get { return getter(this); } set { setter(this, value); } } public Propertor(Func<T> ctor = null, Func<Propertor<T>, T> getter = null, Action<Propertor<T>, T> setter = null) { if (ctor != null) payload = ctor(); this.getter = getter; this.setter = setter; } private Propertor(T el, Func<Propertor<T>, T> getter = null) { this.getter = getter; } public static implicit operator T(Propertor<T> el) { return el.getter != null ? el.getter(el) : el.payload; } public override string ToString() { return payload.ToString(); } } Then in your VB program, for example: Private prop1 As New Propertor(Of String)(ctor:=Function() "prop1", getter:=Function(self) self.payload.ToUpper, setter:=Sub(self, el) self.payload = el + "a") Private prop2 As New Propertor(Of String)(ctor:=Function() "prop2", getter:=Function(self) self.payload.ToUpper, setter:=Sub(self, el) self.payload = el + "a") public Sub Main() Console.WriteLine("prop1 at start : " & prop1.ToString) Dim s1 As String = prop1 Console.WriteLine("s1 : " & s1) Dim s2 As String = prop1() Console.WriteLine("s2 : " & s2) prop1() = prop1() Console.WriteLine("prop1 reassigned : " & prop1.ToString) prop1() = prop2() Console.WriteLine("prop1 reassigned again : " & prop1.ToString) prop1() = "final test" Console.WriteLine("prop1 reassigned at end : " & prop1.ToString) end sub This results in: prop1 at start : prop1 s1 : PROP1 s2 : PROP1 prop1 reassigned : PROP1a prop1 reassigned again : PROP2a prop1 reassigned at end : final testa Property Statement, If Typeof m(i) Is Property Info Then Dim pro As Property Info = CType(m(i), public Regions)escription property, one line for the private RegionDescription field, VB needs to know that you want to set up a Property for your Class. The way you do this is type "Public Property … End Property". 
Access the code for your Class. Type a few lines of space between the End Sub of your DoMessageBox Method, and the line that reads "End Class". On a new line, type the following: Building Client/Server Applications with VB .NET: An , The way you do this is type "Public Property … End Property". Access the code for your Class. Type a few lines of space between the End Sub of your): Then in your VB program, for example: Creating Properties for your VB NET Classes, line */ /// <summary>XML comments on single line</summary> Public Property Size As Integer = -1 ' Default value, Get and Set both Public. VB.NET and C# Comparison, C# and Visual Basic .NET are the two primary languages used to program on the . C# lacks the DirectCast (mapping to a single CLR instruction), strict type conversion can In VB you'd have to define two properties instead: a read-only property all Visual Basic keywords to the default capitalised forms, e.g. "Public", "If". VB will allow you to make a public only (which means in reality it is just a field) property with a default value. C# allows you make a mixed access property, but it is impossible to set a default value (except in the constructor) - VB.net equivalent of C# Property Shorthand? covers the answer - an equivalent is available as of VB10. - Yes you can use collan (:) to right multiple lines in a single line. - @djacobson: Not quite the same, as the OP isn't actually using automatically implemented properties in the code given... - @Jon Skeet The example given doesn't do anything to the field except assign/retrieve its value... That's the only case in which there exists an equivalent one-line property syntax between C# and VB, and that's auto-properties. Is it not so? :) - @djacobson: Yes, but that's not equivalent to the code in the question - that's what I'm saying. (I referred to automatically implemented properties in my answer, too.) 
- Looks like I need to be using VB version 10.0 for this (which I didn't specify or even know I was using). Thanks though. - @Fred: Right, have edited to make the answer clearer, thanks. - Wow, looks bad but at least it's not 8 lines of code! Considering I'm using VB 9.0 this is the best option. Thanks - Please consider: MAX(Readability) > MIN(Number of Lines). I wouldn't want to maintain code like the above... - it should work in VB9 it has been there since a long time. msdn.microsoft.com/en-us/library/865x40k4%28v=VS.71%29.aspx - @Chrissie1 - then I must have another problem, because I get the error "Visual Basic 9.0 does not support auto-implemented properties" and other syntax errors (blue lines) - What version of VS are you using and against which framework are you compiling?
https://thetopsites.net/article/51555982.shtml
Introduction: This milliohm meter can be very useful for measuring small resistors and the resistance of PCB traces, motor coils, inductance coils, transformer coils, or for calculating the length of wires.

Measurement ranges:
- Scale 0m1: 0.1mOhm to 12.9999 Ohm.
- Scale 1m0: 1mOhm to 129.999 Ohm.
- Scale 10m: 10mOhm to 1299.99 Ohm.

Calibration kit video:

Step 1: Parts and Wiring Diagram
- The file WiringAndParts.pdf attached to this step shows all the parts of the milliohm meter with buying links, and how to connect them.
- The main circuit (1) contains the Arduino Nano and the rest of the electronics. This circuit has been designed by me and can't be purchased, but it is fully described in the following steps.
- The power comes from a wall adapter (D) that is connected directly to the 2.1mm power jack female socket (2).
- The 2x16 display (3) shows the current scale set and the value of the resistor under test. It is connected using the I2C bus.
- The binding posts/banana jacks (4) connect the circuit to the test clips (B).
- There are two pushbuttons (5). The black one is connected to the "SEL" connector. When this button is pressed the scale changes. The red one is connected to "CAL". When this button is pressed the meter enters HOLD MODE.
- All the parts are contained in an orange Hammond 1590B aluminum box (A).

Attachments

Step 2: Circuit Design and Schematic
- Most of the parts in the circuit can be ordered at digikey.com.
- IC4 is an LT3092 precision current source/sink. The SET pin produces a precise current of 10uA. R10 and R11 are 0.1% precision resistors arranged in parallel that form a value of 15623 ohm, which leads to a SET voltage of 0.156V. The resistance between OUT and SET programs the output current; in this case there are three resistors, R12, R13 and R14, that can be enabled or disabled by MOSFET transistors.
- Enabling R12 (1 ohm) the output current will be 156mA, enabling R13 (10 ohm) the output current will be 15.6mA, and enabling R14 (100 ohm) the output current will be 1.56mA. T1 has less than 1mOhm of RDSon, which is less than 0.1% of the value of R12. T2 and T3 have 20mOhm of RDSon, which is a very small value relative to R13 and R14.
- IC4 could have worked as source or sink, but in this case the sink mode is better because it makes it easier to have a higher gate voltage for T1, T2, and T3.
- T1, T2, and T3 are driven directly by the supply voltage through a ULN2003 (IC3). I have chosen this IC because it is very simple and it actually requires less space on the board than discrete transistors; also, there is no need for fast switching of the MOSFETs in this circuit.
- R15 is a 250mA resettable fuse that prevents the 5V rail from being destroyed if the + lead accidentally touches the circuit's GND.
- D3 and D4 are TVS diodes that protect the circuit against static discharges from the test leads.
- C3, R16, and R17 form a 30Hz filter.
- J5-1 (+), J5-2 (S+), J5-3 (S-), and J5-4 (-) are wired to the binding posts and form the Kelvin connection with the resistance under test.
- IC2 is an MCP3422A0 I2C ADC. It has an internal voltage reference of 2.048V, a selectable voltage gain of 1, 2, 4, or 8V/V and a selectable resolution of 12, 14, 16 or 18 bits. In this circuit, only channel 1 of IC2 is used, and it is connected differentially to the R under test "S+ S-". The MCP3422 is configured as 18bit, but as S+ is always going to be greater than S-, the effective resolution is 17bit.
- With T1 on, T2 off, and T3 off, the current through the resistance under test is 156mA. With this current, each 1/10 milliohm will result in a voltage of 15.6uV; this value is exactly one LSB of the ADC (IC2). In the other two scales the situation is the same, but multiplied by 10 or 100.
- The I2C bus is shared between IC2 and the external display. The external display is connected and powered using J3.
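The scale arithmetic above can be sanity-checked with a few lines of Python (a sketch using only the values quoted in the design notes):

```python
# Sanity-check the scale arithmetic described above.
# Values from the text: 10uA SET current, 15623 ohm SET resistance,
# MCP3422 reference of 2.048V with 17 effective bits.

V_SET = 10e-6 * 15623            # programming voltage on the SET pin, ~0.156V

# Output current for each scale-programming resistor (R12=1, R13=10, R14=100 ohm)
for r_prog, label in [(1.0, "0m1"), (10.0, "1m0"), (100.0, "10m")]:
    i_out = V_SET / r_prog       # current forced through the resistor under test
    lsb = 2.048 / 2**17          # one ADC step at 17 effective bits, ~15.6uV
    r_per_lsb = lsb / i_out      # resistance resolution of this scale
    r_max = 2.048 / i_out        # full-scale resistance before the ADC clips
    print(f"scale {label}: I={i_out*1e3:.2f}mA, "
          f"resolution={r_per_lsb*1e3:.4f} mOhm, max={r_max:.1f} Ohm")
```

The 0m1 scale works out to a resolution of about 0.1 mOhm and a full scale of roughly 13 Ohm, which lines up with the 12.9999 Ohm limit quoted in the introduction.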
- J4 is connected to a panel 2.1mm power jack with 12V. D1 is a TVS diode to prevent damage from static discharges. D2 is just for protection against reverse polarity. D1 and D2 are bigger than necessary, but I've used them because I already had these parts around.
- J1 connects the SCALE push button.
- J2 connects the HOLD push button.

Step 3: Circuit PCB
- I have used the free version of Eagle for both the schematic and the PCB. The PCB is a 1.6mm thick double-sided design.
- The traces between IC4 (SET), R12, T1, and IC4 (OUT) are as short as possible. IC4 (OUT) is soldered to a big copper area on both sides of the PCB, to meet the power dissipation requirements of IC4.
- One failure in the design is that the 5V regulator of the Arduino Nano gets too hot with the maximum 200mA current it provides in the 0.1mOhm scale; it works, but a separate regulator would have been better.
- I am posting the following files: Gerber files: 00004A.zip. And the BOM (Bill Of Materials) + assembly guide: BOM_Assemby.pdf.
- I ordered the PCB from PCBWay. This time I ordered the boards using the standard "ePacket" shipping method; the shipping time was longer but the price was significantly lower, $14 for 10 boards that arrived in two and a half weeks.

Attachments

Step 4: Circuit Assembling
Most of the parts are SMT on this board. They can be assembled with a regular soldering iron, fine-tip tweezers, some solder wick, and 0.02 solder.
- Sort the parts.
- Start soldering the smaller parts.
- Use solder wick to solder IC3.
- In the case of IC4, solder first the leads and then the thermal pad.
- Assemble the rest of the THT parts and cut the leads.

Step 5: Box Lid Machining
I have attached an Inkscape file with the stencil: frontPanel.svg.
- Cut the stencil.
- Cover the panel with painter's tape.
- Glue the stencil to the painter's tape. I have used a glue stick.
- Mark the position of the drills.
- Drill holes to allow the fret saw or coping saw blade to get into the internal cuts.
- Cut all the shapes.
- Trim with files.
- Remove the stencil and the painter's tape.

Attachments

Step 6: Box Body Machining
- Mark the position of the holes for the circuit spacers (screws).
- Center punch the holes.
- Drill the holes for the circuit spacers.
- Countersink the holes of the circuit spacers on the bottom of the box.
- Mark the position of the 2.1mm panel jack.
- Drill first a small hole and then a bigger one.
- Trim to size with a file.
- Soften the edges.

Step 7: Lid / Front Panel Assembly
- Cut the leads of the pushbuttons and the bolts of the banana connectors to avoid them touching the circuit.
- Install the LCD using double-sided tape.
- Build the cable-Molex connector assemblies and solder the wires to the parts of the panel.

Step 8: Final Assembly

Step 9: Arduino Code

Attachments

Step 10: Misc.
The Kelvin leads I purchased on eBay came with the nozzle springs a little bit loose and they didn't make good contact. I applied solder to fix it.

Step 11: Updates
- In order to update the code easily, I have drilled a hole facing the mini USB connector of the Arduino.
- I added a new mode for measuring uV as requested by schabanow; the Arduino code is attached to this step. To enter this mode it is necessary to press the two buttons for ~2sec. It is also necessary to short the +5V output (red) with the S+ input. To return to normal mode it is necessary to press the RUN button again. Here is a video showing how it works:

Attachments

Participated in the Arduino Contest 2017

4 People Made This Project!

Recommendations

84 Comments

7 months ago
Hi Daniel. Do you sell the PCB as a kit?? Best regards Stig.

Reply 7 months ago
Hi Stig, unfortunately not. But I still have some PCBs lying around and I can send one if you need. Cheers, Daniel.

Reply 4 weeks ago
Hi Daniel, Nice project - do you happen to have any pcb0004 available? If so, would you post me one to Malta? I would like to add an external regulator such as the LM1117-5.0 to supply the 5V.

Reply 7 months ago
Hi Daniel.
Yes, this would be good; how much for it shipped to Denmark?? Regards Stig.

Reply 7 months ago
Hi Stig. Royal Mail standard to Denmark seems to be £1.70. We can exchange details in the private chat.

Reply 7 months ago
Hi Daniel. Sounds good. I am not sure how to start the private chat?? Regards Stig.

Question 8 months ago
Hi Daniel, Thanks for this great project. Since the genuine ULN2003 in TSSOP is obsolete, is it possible to have the Eagle files modified to replace the ULN2003 with the SOIC-16 model? Thanks again

9 months ago
Hi Daniel, I like your project very much. During the download of the program I received a message: "Positif was not declared..." Introducing a line like a second one in the program: #include <LCD.h> helped download the program. I made my own board and soldered all the components. When plugging in the 12V I can see only the first line in the 2x16 LCD, like a row of black squares. The library for LiquidCrystal_I2C is the version from the "Newliquidcrystal_1.3.3.zip" file of fmalpartida. I would appreciate it if you or someone else has any idea of the problem. I checked the connections on the board many times. Thank you.

Reply 9 months ago
For everyone who experienced this problem: I checked the initial address of my I2C interface for the LCD display. It was 0x27. I replaced the value in Daniel's program with 0x27 and the milliohm meter started to display. The first measurement I made was with a 10.4 milliohm Isabellenhutte resistance (+/- 0.5%) and the meter displayed 0.0104. No calibration done yet! The meter seems to work with extreme precision.
Thank you Daniel

4 years ago
Hello, your project is great; I finally have everything in my possession. However, when I want to compile the firmware I get this error message (Arduino IDE 1.8.4). Thank you in advance.

mOhmMeter:120: error: 'POSITIVE' was not declared in this scope
LiquidCrystal_I2C lcd(0x3F, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE); // Addr, En, Rw, Rs, d4, d5, d6, d7, backlightpin, polarity (As described in the ebay link but with 0x3F address instead of 0x20)
Using library Wire version 1.0 in folder: C:\Users\Lenovo\AppData\Local\Arduino15\packages\MightyCore\hardware\avr\1.0.8\libraries\Wire
Using library LiquidCrystal_I2C-1.1.2 version 1.1.2 in folder: C:\Users\Lenovo\Documents\Arduino\libraries\LiquidCrystal_I2C-1.1.2
Using library EEPROM version 2.0 in folder: C:\Users\Lenovo\AppData\Local\Arduino15\packages\MightyCore\hardware\avr\1.0.8\libraries\EEPROM
exit status 1
'POSITIVE' was not declared in this scope

Reply 4 years ago
Hello, for information: it is enough to take the second-to-last version of the library available via the link. I have solved the problem. Thanks.

Reply 2 years ago
I installed all of the LiquidCrystal libraries but still get this error: 'POSITIVE' was not declared in this scope. When I googled it, it was shown that I need to include the lcd.h library, but I cannot find it in the Arduino libraries. Can you please help me with that?

Reply 10 months ago
Salam Mehdi, how did you fix the error "POSITIVE was not declared in this scope"? I tried many ways to update libraries and delete old libraries; nothing worked. Thanks

Reply 4 years ago
Awesome!!

Reply 4 years ago
Good evening, I have finished mine and it is really top-notch. Bravo danielrp.
Small question: is it possible to consider an update to extend the 0m1 range from 0.1mOhm to 1000 Ohm? Or maybe a v2 with more ranges ;p If so, I'll sign up for the v2 right away. Remarkable work. Any other projects planned?

Reply 4 years ago
Thank you JeremyC157! Actually I am working on a calibration kit and on a second version with some improvements, including an AutoScale function. Could that cover your range?

Reply 4 years ago
Hello, what will be the new measurement range? Thanks. Good project

Reply 4 years ago
The same; it will just have an automatic range change. The LT3092 can't give more than 200mA or less than 500uA, so other scales are not available without changing the circuit significantly. Daniel.

Reply 4 years ago
Too bad; I'm really interested in a modification that allows a scale of 0 to 2k, but I am still curious to see the auto scale. Does this require changing the current circuit, or will it be only a firmware update? Thank you again for everything

Reply 4 years ago
Yes, it will be a question of updating the FW. By modifying the hardware it is possible to reach 2K by lowering the output current.
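As a back-of-the-envelope check of that last comment (a sketch, using the ADC reference from the design notes):

```python
# The MCP3422's 2.048V reference caps the measurable voltage drop,
# so full-scale resistance is V_ref / I_test.
V_REF = 2.048

# To reach a 2 kOhm full scale, the test current would need to be:
i_needed = V_REF / 2000.0        # about 1 mA
print(f"{i_needed*1e3:.3f} mA")

# The LT3092 is quoted in the comments as usable between 0.5mA and 200mA,
# so ~1mA is attainable -- with correspondingly coarser resolution:
lsb = V_REF / 2**17              # one ADC step at 17 effective bits
print(f"{lsb / i_needed * 1e3:.2f} mOhm per LSB")
```

So a 2k range would be possible in principle, at the cost of roughly 15 mOhm resolution per ADC step.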
https://www.instructables.com/Milliohm-Meter/
I am trying to clone the source code but I get this error:

Permission denied (publickey).
fatal: The remote end hung up unexpectedly

I'm not able to clone it. Any idea how to solve this?

There's something wrong with your SSH key. Check the id_rsa and id_rsa.pub in your $HOME/.ssh directory and the id_rsa.pub configured in the Gerrit server. Run this command to know more:

ssh -vvv -p 29418 <user>@<gerrit-server>
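If the key itself is missing or was never registered, a fresh pair can be generated and the public half pasted into Gerrit's settings page. A minimal sketch (the comment/e-mail, key type, and output path are placeholders, not anything mandated by Gerrit):

```shell
# Generate a new SSH key pair in a scratch directory.
# No passphrase here purely for illustration -- use one for a real key.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N "" -C "you@example.com" -f "$keydir/id_rsa" -q

# This is the public half to paste into Gerrit (Settings -> SSH Public Keys):
cat "$keydir/id_rsa.pub"
```

Once the public key is uploaded, the `ssh -vvv` command above should show the key being offered and accepted.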
https://www.edureka.co/community/30587/hyperledger-fabric-unable-check-source-gerrit-hyperledger
Re: patch proposal - Aug 3, 2011

On 8/3/2011 9:23 PM, Wietse Venema wrote:
>> I have another scenario

That's fine with me; on my first patch I'm using "usock" as dict type,

>> tcp:host:port
>> tcp:/path/name
> Sorry, tcp:/path/name is bad user interface design. Everywhere else
> in Postfix, one has to specify the socket TYPE before the socket
> NAME (with BC compatibility for programs such as the SMTP client
> or TCP map that were designed initially for TCP sockets only).

since "unix" is already used for the unix user/group lookup table. But it has been said that there was a namespace issue with that design. Well, I guess I have to look for another third-party workaround :)

>> The reason why I wanted this feature is: by using a unix domain socket I
>> can protect my backend server from interference in a multiuser environment,
>> while a tcp server is adequate for my single administrator/user
>> environment server.
> And I have to consider the longer-term issue of keeping the system
> usable as it evolves. This means I will fight to keep the user
> interface clean.

This is very understandable to me :). I just want to connect to a unix domain socket with a similar protocol as tcp_table.

Thanks a lot for the attention, Wietse.

> Wietse
https://groups.yahoo.com/neo/groups/postfix-users/conversations/messages/279139
Litho

To get started, check out these links:

- Learn how to use Litho in your project.
- Read more about Litho in our docs.

Installation

Litho can be integrated either in Gradle or Buck projects. Read our Getting Started guide for installation instructions.

Quick start

1. Initialize SoLoader in your Application class.

public class SampleApplication extends Application {

  @Override
  public void onCreate() {
    super.onCreate();
    SoLoader.init(this, false);
  }
}

2. Create and display a component in your Activity.

Run sample

You can find more examples in our sample app.

To build and run (on an attached device/emulator) the sample app, execute

$ buck fetch sample
$ buck install -r sample

or, if you prefer Gradle,

$ ./gradlew :sample:installDebug

Contributing

For pull requests, please see our CONTRIBUTING guide. See our issues page for ideas on how to contribute or to let us know of any problems. Please also read our Coding Style and Code of Conduct before you contribute.

Getting Help

- Post on StackOverflow using the #litho tag.
- Chat with us on Gitter.
- Join our Facebook Group to stay up-to-date with announcements.
- Please open GitHub issues only if you suspect a bug in the framework or have a feature request and not for general questions.

License

Litho is BSD-licensed. We also provide an additional patent grant.
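Note: step 2 of the quick start above appears to have lost its code snippet in this copy. The example in Litho's getting-started documentation is along these lines (a sketch from memory; treat the exact builder methods such as `textSizeDip` as an assumption rather than the README's verbatim text):

```java
// Inside your Activity's onCreate(), after the setup from step 1.
final ComponentContext c = new ComponentContext(this);

// Build a simple Text component...
final Component component = Text.create(c)
    .text("Hello, World!")
    .textSizeDip(50)
    .build();

// ...and host it in a LithoView, which is a regular Android ViewGroup.
setContentView(LithoView.create(c, component));
```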
https://www.ctolib.com/facebook-litho.html
Hi guys, quite an explanation here; hope someone has the patience to read it through.

I'm building an application in Flex 4 that handles an ordering system. I have a small mySql database and I've written a few services in php to handle the database. Basically the logic goes like this: I have tables for customers, products, productGroups, orders, and orderContent.

I have no problem with the CRUD management of the products, orders and customers; it is the order submission that the customer will fill in that is giving me headaches.

What I want is to display the products in dataGrids, ordered by group, which will be populated with Flex data management via the php services, and that per se is no problem. But I also want an extra column in the datagrid that the user can fill in with the amount he wishes to order of that product. This column would in theory then bind to the db table "orderContent" via the php services. The problem is that you would need to create a new order in the database first that the data could bind to (orderContent is linked to an order in the db). I do not want to create a new order every time a user enters the page to look at the products; rather, I would like to create the order when a button is pressed and then take everything from the datagrids on the page and submit it into the database.

My idea has been to create a separate one-column datagrid, line it up next to the datagrid that contains the products, and in that datagrid the user would be able to enter the amount of that product he'd like to order.
I've created a valueObject that contains the data I would need for an order:

package valueObjects
{
    public class OrderAmount
    {
        public var productId:int;
        public var productAmount:int;
        public var productPrice:Number;

        // Constructor name must match the class name exactly in ActionScript 3.
        public function OrderAmount()
        {
        }
    }
}

I've tried to use the results from the automatically generated serviceResults that retrieve the products for the datagrid and put in a resultHandler that transfers the valueObjects; however, this does not seem to work.

Basically my question is this: am I approaching this thing completely wrong, or is there a way I can get it to work the way I planned? Would I need to create a completely new service request to get the product ids and prices to populate the one-column datagrid? I'll post some code if that would help.

Thank you if you read this far.
http://www.codingforums.com/adobe-flex/201730-flex-4-data-management-how-do-i-approach.html
How I do imaging for my PhD research

29–03–2020

I've not spoken a lot about my research of late. Being a PhD student, I'm still learning how to science; I'm always heads-down, busy fixing code, running experiments and such. Such things don't strike me as very compelling for readers, especially the technical kind, but I've had a think, and there are a few technical topics I can talk about. I'd like to start with images, or data.

Working with microscopes a lot, you tend to come across image data often. Whether that be STORM localisations, fluorescence images, or MATLAB matrix files, there is an awful lot of data best represented as a two dimensional grid of numbers, in either greyscale or RGB. Manipulating these easily, and at scale, has been really important to my work.

Image formats — tiff, fits, gif and jpg

Yeah, that's right — I use all four of these regularly and probably more besides. They each have their pros and cons when it comes to research. Some are great for quick visualisations. Others are better for processing and recording results. I've found that working in AI and biology, I've had to deal a lot with images.

tiffs

Tiff images are one of these formats that has gotten out of hand! The specification for tiff is somewhat large and mad! It's been around for a while, and is up to version 6 at time of writing. Most of the tiff images I deal with contain what biologists call Z-stacks. Essentially, these are images within the image, but they often represent the same object, imaged at different depths with a microscope.

One problem that came up at the Turing Data Study Group I attended was the support in python for 16bit tiffs. The de-facto image library Pillow (based on PIL) doesn't support them correctly. 16bit tiffs are pretty common in biology. Other python libraries are available that deal with tiffs. pyopencv is one of them, though it is a little heavy weight. libtiff and matplotlib can also deal with tiffs.
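One reason 16-bit images trip up general-purpose tools is that their values have to be rescaled before an 8-bit display can show anything sensible. A minimal, library-free sketch of that normalisation (not the author's actual code; it just illustrates the idea):

```python
def normalise_to_8bit(pixels):
    """Linearly rescale arbitrary pixel values into the 0-255 display range.

    'pixels' is a flat list of numbers, e.g. 16-bit microscope counts.
    Without a step like this, a viewer that naively truncates 16-bit
    values tends to show a black (or clipped) rectangle.
    """
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [0 for _ in pixels]
    scale = 255.0 / (hi - lo)
    return [int(round((p - lo) * scale)) for p in pixels]

# A dim 16-bit-style image: values huddled near the bottom of the range.
raw = [100, 200, 300, 400]
print(normalise_to_8bit(raw))        # -> [0, 85, 170, 255]
```

Real libraries do the same thing per-channel (and often per Z-slice), but the min/max stretch is the core of it.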
fits I’d not heard of the Flexible Image Transport System format before I started working on this AI project. Apparently, it’s used for shipping data formatted in rows, columns and tables, so basically an image with multiple layers if you will. It’s associated with NASA () and is used predominantly within the astronomy community. The main advantage is the format can work with floating point values and there is support in python through the Astropy library. This makes it a good choice to working with our microscopy data. gif, jpg & imagemagick Animated gifs are handy for many things, not just memes! I tend to export animations to both mp4 and gif format, as both are handy for posting on twitter, slack and the like. Animation has been a big help when visualising our results. jpg is quite handy when we need to compress our images down. For each run, I typically generate masses of fits files which are difficult to see altogether. Converting them to jpg significantly improves speed and management, when all I need to see is a general overview. I use imagemagick a lot! Mostly, it’s for generating various montages and comparison images. I’ll combine the input and output images of a particular experiment in three different ways: - montage of in and out data as two separate images - An overlay of in and out data, using a colour scheme to show overlap. - A single image with each input on the left and the corresponding output on the right, as a collection of pairs. 
The first is quite easy to generate:

montage $1/in_*.jpg $1/montage_in.jpg
montage $1/out_*.jpg $1/montage_out.jpg

The second one is a bit trickier and relies on generating the two images above:

convert '(' $1/montage_out.jpg -flatten -grayscale Rec709Luminance ')'\
  '(' $1/montage_in.jpg -flatten -grayscale Rec709Luminance ')'\
  '(' -clone 0-1 -compose luminize -composite ')'\
  -channel RGB -combine $1/diff.png

Finally, combining input and output side-by-side looks like this:

#!/bin/bash
files_in=($1/in_*.jpg)
files_out=($1/out_*.jpg)
counter=0
# Bash arrays are zero-indexed, so count from 0 up to the array length.
while [ $counter -lt ${#files_in[@]} ]
do
    out_file=`printf pair_%05d.jpg ${counter}`
    montage -tile 2x1 ${files_in[$counter]} ${files_out[$counter]} /tmp/$out_file
    ((counter++))
done
montage -frame 3 -geometry 128x64 /tmp/pair_* $1/pair_montage.jpg
rm /tmp/pair_*

Animation

Speaking of gifs and animation, I make use of ffmpeg quite a bit. Generating mp4 files from a series of jpgs is quite easy:

ffmpeg -r 24 -i %03d.jpg test1800.mp4

This command assumes you want a 24 frames-per-second framerate and that your jpgs all begin with three numbers (up to three leading zeroes in this case). Converting these mp4 files into gifs is also quite easy:

ffmpeg -i input.mov -s 320x240 -r 10 -pix_fmt rgb24 output.gif

Although I must admit, these animated gifs don't seem as robust as others. I've seen them not render correctly under certain circumstances. I'm probably missing something.

Image-J

Another thing I'd not heard of: image-j, beloved by microscopists and biologists! I can sort of see why. It's a program that has many useful features and several plugins. As a tool that can read and manipulate almost all image formats known to human-kind, it's definitely worth having in your arsenal. It's just a shame it's written in Java, with its slightly wonky user-interface. Still, never mind — it's got tonnes of features and options for scientific images. Definitely worth having in your toolbox.

Fiji is the name given to image-j packaged with a load of useful plugins, and is often referred to in lieu of image-j.

Graphs & heatmaps

I've never had a lot of luck understanding how to plot in python. Matplotlib is fine I suppose, but I find it rather complicated. ggplot exists for python but in the end, I decided on pandas and seaborn, mostly because the images rendered in seaborn are really pretty (I occasionally use matplotlib too, as there are plenty of examples out there). Generating line plots and heatmaps are my main things at the moment. You can get a reasonable line plot (of, say, your training and test errors from your neural net) like this:

import math

import matplotlib
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
step = []
error = []
test_error = []

# args comes from an argparse parser elsewhere in the script
with open(args.savedir + "/stats_train.txt", "r") as f:
    for line in f.readlines():
        tokens = line.split(",")
        step.append(int(math.floor(float(tokens[0]))))
        error.append(float(tokens[2]))

with open(args.savedir + "/stats_test.txt", "r") as f:
    for line in f.readlines():
        tokens = line.split(",")
        test_error.append(float(tokens[2]))

# plot the error thus far
axes.set_title("Error rate")
axes.plot(step, error, color='C0')
axes.plot(step, test_error, color='red')
axes.set_xlabel("Step (one batch per step)")
axes.set_ylabel("Error")
fig.tight_layout()
fig.savefig(args.savedir + '/error_train_graph.png')
plt.close(fig)

At the moment, I really only use Pandas to create heatmaps. At some point, when I get around to doing more involved stats, I might use Pandas a little more.
I create a dataframe and then pass it to seaborn like so:

import matplotlib
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt

# A method on my experiment class; self.fc2_activations holds the
# activations recorded for each rotation.
def plot_heatmap_activations(self, filename):
    dframe_active = pd.DataFrame()
    for idx, actives in enumerate(self.fc2_activations):
        active = np.array(actives.flatten(), dtype=np.float)
        dframe_active["rot" + str(idx).zfill(3)] = active
    sns.heatmap(dframe_active)
    plt.savefig(filename)
    plt.close()

Lots of images? Enter Rust

Image-J is a little heavy weight, if what you want to do is skim over a huge number of images quickly in order to get a sense of what you are dealing with. I looked around for the right command line image viewer. Programs such as feh seemed like they might work, but when you are dealing with things like tiffs that can have tiny and huge values and multiple stacks, such programs just give you a black rectangle. Using a visual file browser like nautilus tends to fail too, as you end up waiting for the program to generate thumbnails for hundreds of thousands of images.

In the end I wrote a small program called fex (for Fits EXplorer) that looks inside a directory for FITS and TIFF files and renders them out using some normalisation, averaging over the stacks and providing a little information. It's faster and more reliable than any other methods I've tried, though it's a bit more work. It's not really ready for prime-time use by anyone else, but it's up there so folks can get a sense of what writing rust programs with gtk and libtiff is like.

Visualisation

Visualise early and visualise often! It's something I need to do more often. If I can't see the data or what the network is doing, I don't really know where to go next. When working on an A.I. project most folks leap to tensorboard. It's quite a nice library this one, but it has one fatal flaw — it generates massive log files!
Now this may have changed in more recent versions (I've not really played with TensorFlow 2), but unless you do a fair bit of tweaking, you'll be running out of disk space. I also have an additional criterion — sharing experiments quickly and easily with folks in different places. This means publishing to the web. With these requirements in mind, I settled on a Flask and WebGL based approach, running on my own web server. Flask is pretty straightforward to use and provides some nice logic for doing simple things, like ordering experiments by date. WebGL is used to visualise how our network moves the points around during training — invaluable in spotting some of the failure modes. It's important to make your visualisation routine as easy as a one-button press, I reckon. If it becomes a faff, I suspect it won't enter your regular routine. To that end, I have several evaluation Python programs, all tied together in a single bash script. This script can be added to and changed as required. Once all the visualisation files are generated, the script uploads them to the server automatically, and they're there ready to view.
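The "ordering experiments by date" logic that the Flask index page relies on can be sketched with the standard library alone. This is a minimal sketch under assumed conventions — the directory layout, and using modification time as the experiment date, are my assumptions, not necessarily the author's actual setup:

```python
import os
import datetime

def list_experiments(root):
    """Return experiment directory names under root, newest first.

    Assumes each experiment lives in its own sub-directory whose
    modification time reflects when it was last updated (a hypothetical
    layout; a real server might parse dates out of directory names
    instead).
    """
    entries = []
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            mtime = os.path.getmtime(path)
            entries.append((name, datetime.datetime.fromtimestamp(mtime)))
    # newest experiments first, as an index page would usually show them
    entries.sort(key=lambda e: e[1], reverse=True)
    return [name for name, _ in entries]
```

A Flask view would then just render this list; the sorting itself needs nothing beyond `os`.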
https://benjamin-computer.medium.com/how-i-do-imaging-for-my-phd-research-4d9486c63935
29 November 2011 06:55 [Source: ICIS news]

SINGAPORE (ICIS)--Indian Oil has tendered to sell 35,000 tonnes of naphtha for loading from Chennai on 18-20 December, traders said on Tuesday. The tender closes on 30 November, with offers to stay valid until 1 December, they added. The producer is planning to build a second cracker at Paradeep on the east coast of India. Indian Oil is aiming for a 2015 start-up, although work has not begun on the project. The company plans to build derivative plants together with the cracker. Indian Oil operates an 857,000 tonne/year naphtha cracker at Panipat and the plant is running at 65% capacity this
http://www.icis.com/Articles/2011/11/29/9512226/indian-oil-offers-35000-tonnes-naphtha-for-18-20-december.html
PXTL has so far been tested up to Python 2.5a2. Python 1.6 or later is required for proper treatment of Unicode in XML. For details of the PXTL language itself, please see the accompanying language documentation.

The 'pxtl' directory is a pure-Python package which can be dropped into any directory on the Python path.

Using pxtl.processFile means you only want the final output and don't care which implementation is used. Normally this will be the optimised implementation. If you want to specify a particular implementation (for testing, for example) you can import the 'reference' and 'optimised' sub-modules and call pxtl.reference.processFile or pxtl.optimised.processFile.

Note: when the optimised implementation is in use, the current user must have write access to the folder containing the template, so that a compiled Python bytecode file can be stored there (just like with standard Python .pyc files, but normally with the file extension .pxc instead of .pyc). See the bytecodebase argument for ways around this. If a bytecode file cannot be stored, the template will have to be recompiled every time processFile is called, which is extremely slow.

Whichever implementation you use, the arguments are the same. The first argument path is required and holds the pathname of the template file to process.

dom: a DOM implementation to use for XML work. This should normally be left unspecified, to use the embedded DOM implementation 'pxdom', which is the only suitable native-Python DOM Level 3 Core/LS implementation at the time of writing.

bytecodebase: allows the location of bytecode cache files (.pxc) generated by the optimised implementation to be changed. Typically this feature is useful when you want to run PXTL in a restricted user environment such as CGI on a web server (which on Unix-like operating systems typically runs as the unprivileged user 'nobody'), and you want to allow this user to save bytecode files without giving them access to the template folder.
Another possibility in this case is to precompile the bytecode files at install time (see pxtl.optimised.compileFile). If the bytecodebase argument is set to None, the optimised implementation will make no attempt to store bytecode files, and the pxtl.processFile function will normally prefer to use the reference implementation.

The DOM implementation passed as dom must support the DOM Level 3 Core standard. If the document uses <px:import> elements, it must also support DOM Level 3 Core and LS.

pxtl.optimised.compileFile: used to pre-compile the bytecode files used by the optimised implementation, so that they can be loaded and executed quickly in the future without having to have write access to the place where bytecode files are stored. This can be used as part of an application install procedure in the same way as the standard library function py_compile.compile (with doraise set). The files themselves are exactly the same format as standard .pyc files, so are dependent on the version of Python used as well as the version of PXTL; if either does not match, the template will be recompiled at run-time.

The required arguments to pxtl.optimised.compileFile are path and pathc, giving the pathname of the template source file to load and the compiled bytecode file to save, respectively. Further optional arguments include globals and dom, which work the same way as the processFile arguments of the same name. globals is only used as the scope for calculating the contents of the doctype attribute, the only part of a PXTL template run at compile-time, so it too is not normally needed.

Finally there's method and encoding, which can be used to override the output method (given in the px:doctype value) and the character encoding (given in the <?xml?> declaration). These are only generally required where the imported template has a different output method or encoding to the template that imports it (because in PXTL the importing template's doctype is always used in preference).
Typically this happens when an XHTML template to be imported omits px:doctype, relying on the importing template to set the method to xhtml. Otherwise they can be omitted. If the output method and encoding are wrong when the template comes to be run, it will simply be recompiled, so there is not normally a problem, except when permissions don't allow the bytecode to be written, resulting in slow execution every time.

Compiles bytecode files recursively inside the given dir, analogous to the standard library compileall function. Uses the optional argument bytecodebase as in processFile, plus globals, dom, method and encoding from compileFile, and maxlevels and quiet from the standard compileall (so it defaults to 10 levels deep).

Having compiled a template or unmarshalled its code from a bytecode file, you can run it using the standard Python exec statement. However, it expects a few internal PXTL variables to be put in its global scope before it can run, so that it knows, for example, what stream to send the output to. Use the prepareGlobals function to write these internal variables before calling exec code in mapping. The arguments are writer, baseuri, headers, dom and bytecodebase. baseuri should hold the URI of the source template, and can be omitted if PXTL import elements are not used, or if they are only used with absolute-URI src attributes. The other arguments are the same as for processFile.

Adding the argument debug=True requires standards-compliant DOM Level 3 Core and LS functionality. For this reason the embedded pxdom implementation is normally used, which works much the same as the Python standard library module xml.dom.minidom. More information on pxdom and using it in other applications is available separately. If you have another DOM 3 implementation you would rather PXTL used, pass its DOMImplementation module into the processFile function's dom argument.
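The install-time precompilation that compileFile performs is directly analogous to the standard library's py_compile, which the documentation above references. Here is a self-contained stdlib sketch of that idea (the module name and contents are made up for illustration); pxtl.optimised.compileFile does the equivalent for .pxc files:

```python
import os
import py_compile
import tempfile

# Write a tiny module, then pre-compile it at "install time" so it can
# be loaded later without needing write access to the source directory --
# the same motivation as pxtl's bytecode (.pxc) caching.
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "template_helper.py")  # hypothetical module
with open(src, "w") as f:
    f.write("GREETING = 'hello'\n")

# doraise=True turns compilation problems into exceptions, as noted
# above for py_compile.compile.
cfile = os.path.join(src_dir, "template_helper.pyc")
py_compile.compile(src, cfile=cfile, doraise=True)

print(os.path.exists(cfile))  # -> True: the bytecode file now exists
```

As with .pxc files, the resulting bytecode is tied to the Python version that produced it, so a mismatch at run-time forces recompilation.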
http://www.doxdesk.com/file/software/py/v/pxtl-1.6.html
PROBLEM LINK: Practice Contest: Division 1 Contest: Division 2 Contest: Division 3 Author: Anshu Garg Tester: Danny Mittal Editorialist: Nishank Suresh DIFFICULTY: Medium PREREQUISITES: Cycle decomposition of a permutation, Greedy algorithms PROBLEM: Alice and Bob have copies of the same permutation P. Further, Alice has a potential array V. They take turns doing the following, with Alice starting: - On Alice’s turn, she can swap P_i and P_j for any two indices i and j such that V_i > 0 and V_j > 0, after which V_i and V_j decrease by 1. - On Bob’s turn, he can swap P_i and P_j for any two indices i and j. Determine who manages to sort their permutation first, and a sequence of their moves achieving this minimum. QUICK EXPLANATION: If the cycle decomposition of the permutation consists of C disjoint cycles, Bob can sort it in N - C moves (using s-1 moves for a cycle of size s). Alice wins if and only if she can also sort the permutation in N - C moves, which requires her to also be able to sort a cycle of size s in s-1 moves. It can be shown that this is possible if and only if the sum of potentials of all vertices in a cycle of size s is at least 2s-2, and the moves can be constructed greedily. EXPLANATION: A common idea when dealing with permutations is to look at the cycle decomposition of the permutation, which is a graph constructed on N vertices with edges i \to p_i. Since p is a permutation, every vertex has exactly one out edge and exactly one in edge, which is only possible if the graph looks like a bunch of disjoint cycles. Let’s ignore the potentials for now, and concentrate on finding the minimum number of swaps needed to sort the permutation (which also happens to be the number of moves Bob needs). How many moves to sort a permutation? 
Each cycle of size s can trivially be sorted using s-1 moves - for example, if the cycle is a_1 \to a_2 \to \dots \to a_s \to a_1, it can be sorted by swapping the following pairs in order: (a_1, a_2), (a_1, a_3), \dots, (a_1, a_s). Adding this up over all cycles, we can see that a permutation with C cycles can be sorted in N - C moves. It turns out that this is also necessary, i.e, we can't do any better. The crux of the idea here is that performing any swap either decreases or increases the number of cycles by exactly 1. We start out with C cycles and the sorted array has N cycles; since each move changes the cycle count by at most 1, the minimum number of moves required is N-C. For those interested, a formal proof of this can be found at this stackexchange answer. Now let's look at Alice's case. The only way Alice can win is if she also takes exactly N-C moves to sort the permutation - of course, the earlier analysis tells us that this is only possible when she can sort each cycle of size s in s-1 moves. First, note that each swap consumes 2 potential - hence, sorting a cycle needs its vertices to have at least 2(s-1) potential in total, otherwise that cycle cannot be sorted at all. However, once a cycle has at least 2(s-1) potential in total, it turns out that it can always be sorted in s-1 moves. Here's how: For convenience, let the cycle be 1\to 2\to 3\to \dots \to s\to 1, with potentials V_1, V_2, \dots, V_s respectively. If s = 1, the cycle is already sorted and nothing needs to be done. Otherwise, pick a vertex i such that V_i is minimum. If there are multiple such i, pick one with the largest value of V_{i-1}. If there are still ties, pick any of them arbitrarily. Now swap vertices i and i-1, which updates the cycle to become 1\to 2\to 3\to \dots \to i-1\to i+1\to i+2\to \dots \to s\to 1 (i.e, a cycle of size s-1), and continue the process till we are left with a single vertex. This process clearly takes exactly s-1 moves, because at each step we set p_i = i for some index i.
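The cycle-counting argument above is easy to sanity-check in code. This is an illustrative sketch (using 0-indexed permutations, unlike the 1-indexed statement), not the contest solution:

```python
def min_swaps_to_sort(p):
    """Minimum swaps to sort a permutation p of 0..n-1.

    Decompose p into disjoint cycles; a cycle of size s needs s-1
    swaps, so the answer is n minus the number of cycles.
    """
    n = len(p)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1           # found a new cycle
            j = i
            while not seen[j]:    # walk the whole cycle once
                seen[j] = True
                j = p[j]
    return n - cycles
```

For example, `[1, 2, 0, 4, 3]` has the cycles (0 1 2) and (3 4), so `min_swaps_to_sort` returns 5 - 2 = 3, matching Bob's move count N - C.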
All that remains to be shown is that we never pick a vertex whose potential is 0. Proof We prove this by induction on the size of the cycle. Note that the input guarantees that V_i \geq 1 for all i initially. If s = 1 the result is trivial. If s = 2, the cycle consists of two vertices, each with positive potential, so we can safely swap them. Suppose that any cycle of size s such that every vertex in it has positive potential, and the total potential is at least 2s-2, can be sorted in s-1 moves. Consider any cycle of size s+1 whose vertices have positive potential, and the total potential is P, where P\geq 2s. Let i be the vertex picked by our greedy process, and consider what happens when we swap i with i-1. - Case 1: V_i = 1 Note that by the pigeonhole principle, there is at least one vertex j with V_j\geq 2 (otherwise the total potential would be s+1 < 2s, which contradicts our assumption). So, if the minimum potential is 1, there definitely exists some vertex x with potential 1 such that V_{x-1} \geq 2 (just follow the cycle backwards). By our rule of breaking ties, only such a vertex will be chosen as V_i. The total potential of the cycle formed by swapping i and i-1 is hence exactly P - 2 \geq 2s-2. Further, we have V_{i-1} \mapsto V_{i-1} - 1 \geq 2-1 = 1, so the smaller cycle we created satisfies the inductive hypothesis and hence can be sorted. - Case 2: V_i > 1. In this case, every vertex has at least potential 2. So, even after swapping i and i-1, the other s-1 untouched vertices give a total potential of at least 2\cdot (s-1) to the remaining cycle. Further, V_{i-1}\geq 2 so upon subtracting 1 from it, it remains positive. By the inductive hypothesis, once again the smaller cycle can be sorted optimally and we are done. IMPLEMENTATION DETAILS Given a cycle, we would like to find the vertex with least potential, and among these the one whose inverse has the largest potential. 
Further, swapping two adjacent elements of the cycle affects the potential of exactly one remaining element, and updates the previous/next element in the cycle of exactly two remaining elements. One way to maintain this information is to keep tuples of (V_x, V_{p^{-1}_x}, x), sorted in ascending order by first coordinate and descending order by second, in a structure which allows us to quickly add/remove elements and get the smallest element - for example, std::set in C++ or TreeSet in Java. At each step, remove the first element of this set and add the operation to swap (x, p^{-1}_x), then remove the tuples corresponding to x, p_x, and p^{-1}_x from the set. Update the potential of p^{-1}_x, update the next/previous links of p^{-1}_x and p_x respectively, and then insert them back into the set. If you still find this confusing, please refer to the code linked below. TIME COMPLEXITY: \mathcal{O}(N\log N) per test. CODE: Setter (C++) #include<bits/stdc++.h> using namespace std ; #define ll long long #define pb push_back #define all(v) v.begin(),v.end() #define sz(a) (ll)a.size() #define F first #define S second #define INF 2000000000000000000 #define popcount(x) __builtin_popcountll(x) #define pll pair<ll,ll> #define pii pair<int,int> #define ld long double const int M = 1000000007; const int MM = 998244353; template<typename T, typename U> static inline void amin(T &x, U y){ if(y<x) x=y; } template<typename T, typename U> static inline void amax(T &x, U y){ if(x<y) x=y; } #ifdef LOCAL #define debug(...) debug_out(#_VA_ARGS, __VA_ARGS_) #else #define debug(...) 
2351 #endif int runtimeTerror() { int n; cin >> n; vector<int> p(n+1),V(n+1),par(n+1); for(int i=1;i<=n;++i) { cin >> p[i]; par[p[i]] = i; } for(int i=1;i<=n;++i) { cin >> V[i]; } vector<pair<int,int>> alice,bob; auto make = [&](vector<int> &cy,long long P) { int n = cy.size(); if(n <= 1) return 1; // bob moves for(int i=1;i<cy.size();++i) { bob.push_back({cy[0],cy[i]}); } if(P < 2 * n - 2) return 0; // alice moves set<pair<pair<int,int>,int>> s; for(auto j:cy) { s.insert({{V[j],-V[par[j]]},j}); } while(!s.empty()) { auto [x,u] = *s.begin(); s.erase(s.begin()); alice.push_back({u,par[u]}); s.erase({{V[p[u]],-V[u]},p[u]}); if(p[u] == par[u]) continue; s.erase({{V[par[u]],-V[par[par[u]]]},par[u]}); p[par[u]] = p[u]; par[p[u]] = par[u]; --V[par[u]]; s.insert({{V[p[u]],-V[par[u]]},p[u]}); s.insert({{V[par[u]],-V[par[par[u]]]},par[u]}); } return 1; }; vector<bool> vis(n+1,0); int ans = 1; for(int i=1;i<=n;++i) { if(vis[i]) continue; long long P = 0; int cur = i; vector<int> cy; while(!vis[cur]) { P += V[cur]; vis[cur] = true; cy.push_back(cur); cur = p[cur]; } ans &= make(cy,P); } if(ans) { cout<<"Alice\n"; cout << alice.size() << "\n"; for(auto [j,k]:alice) cout << j << " " << k << "\n"; } else { cout<<"Bob\n"; cout << bob.size() << "\n"; for(auto [j,k]:bob) cout << j << " " << k << "\n"; } return 0; } int main() { ios_base::sync_with_stdio(0);cin.tie(0);cout.tie(0); int T; cin >> T; while(T--) runtimeTerror(); return 0; } Tester (Kotlin) import java.io.BufferedInputStream import java.util.* const val BILLION = 1000000000 fun main(omkar: Array<String>) { val jin = FastScanner() var nSum = 0 val out = StringBuilder() repeat(jin.nextInt(1000)) { val n = jin.nextInt(100000) nSum += n if (nSum > 100000) { throw IllegalArgumentException("constraint on sum n violated") } val p = IntArray(n + 1) for (j in 1..n) { p[j] = jin.nextInt(n, j == n) } if (p.toSet().size != n + 1) { throw IllegalArgumentException("p is not a permutation") } val v = IntArray(n + 1) for (j in 1..n) { 
v[j] = jin.nextInt(BILLION, j == n) } var moves = solve(n, p.clone(), v) if (moves != null) { out.appendln("Alice") } else { moves = solve(n, p.clone(), IntArray(n + 1) { 2 })!! out.appendln("Bob") } out.appendln(moves.size) for ((j, k) in moves) { out.appendln("$j $k") } } print(out) jin.assureInputDone() } fun solve(n: Int, p: IntArray, v: IntArray): List<Pair<Int, Int>>? { val q = IntArray(n + 1) for (j in 1..n) { q[p[j]] = j } val treeSet = TreeSet<Int>(compareBy({ v[it] }, { it })) fun addIf(j: Int) { if (p[j] != j && (p[p[j]] == j || v[q[j]] >= 2)) { treeSet.add(j) } } for (j in 1..n) { addIf(j) } val moves = mutableListOf<Pair<Int, Int>>() while (treeSet.isNotEmpty()) { val k = treeSet.first() val j = q[k] val l = p[k] treeSet.remove(j) treeSet.remove(k) treeSet.remove(l) p[k] = k p[j] = l q[k] = k q[l] = j v[k] = 0 v[j]-- moves.add(Pair(j, k)) addIf(j) addIf(l) } if ((1..n).all { p[it] == it }) { return moves } else { return null } } assureInputDone() { if (char != NC) { throw IllegalArgumentException("excessive input") } } fun nextInt(endsLine: Boolean): Int { var neg = false c = char if (c !in '0'..'9' && c != '-' && c != ' ' && c != '\n') { throw IllegalArgumentException("found character other than digit, negative sign, space, and newline") } if (c == '-') { neg = true c = char } var res = 0 while (c in '0'..'9') { res = (res shl 3) + (res shl 1) + (c - '0') c = char } if (endsLine) { if (c != '\n') { throw IllegalArgumentException("found character other than newline, character code = ${c.toInt()}") } } else { if (c != ' ') { throw IllegalArgumentException("found character other than space, character code = ${c.toInt()}") } } return if (neg) -res else res } fun nextInt(from: Int, to: Int, endsLine: Boolean = true): Int { val res = nextInt(endsLine) if (res !in from..to) { throw IllegalArgumentException("$res not in range $from..$to") } return res } fun nextInt(to: Int, endsLine: Boolean = true) = nextInt(1, to, endsLine) } Editorialist (C++) #include 
"bits/stdc++.h" // #pragma GCC optimize("O3,unroll-loops") // #pragma GCC target("sse,sse2,sse3,ssse3,sse4,popcnt,mmx,avx,avx2") using namespace std; using ll = long long int; mt19937_64 rng(chrono::high_resolution_clock::now().time_since_epoch().count()); int main() { ios::sync_with_stdio(0); cin.tie(0); int t; cin >> t; while (t--) { int n; cin >> n; vector<int> p(n+1), inv(n+1), v(n+1), mark(n+1); for (int i = 1; i <= n; ++i) { cin >> p[i]; inv[p[i]] = i; } for (int i = 1; i <= n; ++i) cin >> v[i]; auto pcopy = p, invcopy = inv; if (n == 1) { cout << "Alice\n0\n"; continue; } vector<array<int, 2>> ans; auto solve = [&] (auto cycle) -> bool { int N = size(cycle); if (N == 1) return true; ll sum = 0; for (int x : cycle) sum += v[x]; if (sum < 2*(N-1)) return false; auto cmp = [] (auto a, auto b) { if (a[0] != b[0]) return a[0] < b[0]; if (a[1] != b[1]) return a[1] > b[1]; return a[2] < b[2]; }; set<array<int, 3>, decltype(cmp)> active(cmp); for (int x : cycle) { active.insert({v[x], v[inv[x]], x}); } while (active.size()) { auto [v1, v2, x] = *begin(active); active.erase(begin(active)); ans.push_back({x, inv[x]}); // Now inv[x] points to p[x] and v[inv[x]] decrements by 1 active.erase({v[p[x]], v[x], p[x]}); if (p[x] == inv[x]) continue; active.erase({v[inv[x]], v[inv[inv[x]]], inv[x]}); p[inv[x]] = p[x]; inv[p[x]] = inv[x]; --v[inv[x]]; active.insert({v[p[x]], v[inv[x]], p[x]}); active.insert({v[inv[x]], v[inv[inv[x]]], inv[x]}); } return true; }; vector<vector<int>> cycles; for (int i = 1; i <= n; ++i) { if (mark[i]) continue; vector<int> cycle; int cur = i; while (!mark[cur]) { cycle.push_back(cur); mark[cur] = 1; cur = p[cur]; } cycles.push_back(cycle); } bool alice = true; for (auto cycle : cycles) { alice &= solve(cycle); } if (alice) { cout << "Alice\n" << size(ans) << '\n'; for (auto move : ans) cout << move[0] << ' ' << move[1] << '\n'; } else { for (int i = 1; i <= n; ++i) v[i] = n; swap(p, pcopy); swap(inv, invcopy); ans.clear(); for (auto cycle : 
cycles) { solve(cycle); } cout << "Bob\n" << size(ans) << '\n'; for (auto move : ans) cout << move[0] << ' ' << move[1] << '\n'; } } }
https://discuss.codechef.com/t/srtgame-editorial/95746
From: Guillaume Melquiond (gmelquio_at_[hidden])
Date: 2003-05-18 10:30:57

According to paragraph 3.7.3.1-3 of the Standard, an 'operator new' can return a null pointer or throw an exception to report a failed allocation; but it isn't allowed to adopt both behaviors at once. Unfortunately, that's exactly what 'stateless_integer_add' does for the sake of avoiding warnings, and gcc complains about it. So the line "return 0; // suppress warnings" is wrong: it doesn't do what it's supposed to do; it doesn't suppress warnings.

Here is a patch that returns another pointer so that gcc doesn't complain. I'm not sure it's the best way to choose a non-null pointer, but at least gcc doesn't complain anymore, and the compilers that needed a return statement still have it.

Index: libs/function/test/stateless_test.cpp
===================================================================
RCS file: /cvsroot/boost/boost/libs/function/test/stateless_test.cpp,v
retrieving revision 1.6
diff -u -r1.6 stateless_test.cpp
--- libs/function/test/stateless_test.cpp	30 Jan 2003 14:25:00 -0000	1.6
+++ libs/function/test/stateless_test.cpp	18 May 2003 15:25:01 -0000
@@ -24,7 +24,7 @@
   void* operator new(std::size_t, stateless_integer_add*)
   {
     throw std::runtime_error("Cannot allocate a stateless_integer_add");
-    return 0; // suppress warnings
+    return (void*)1; // suppress warnings
   }

   void operator delete(void*, stateless_integer_add*) throw()

Regards,
Guillaume

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/05/47941.php
Amitha Perera wrote:
> Brad King wrote:
>> Amitha Perera wrote:
>>> namespace vcl { using namespace std; }
>>
>> See my previous argument for why we cannot do that:
>
> Fair enough. I'd forgotten.
>
>> namespace vcl
>> {
>>   using std::swap;
>>   using std::tr1::swap;
>> }
>> #define vcl_swap vcl::swap
>
> This is what I had in mind anyway. This is "easy" to do, and only
> requires a list of names (which we already have in vcl, thanks to Fred
> S., Andrew F., Matt L.), instead of requiring the full declarations for
> each of these.
>
> I think. Maybe actually implementing this scheme is just as difficult.

It can be done for just the few names that are overloaded in both the std and std::tr1 namespaces. The goal is to define vcl_swap, not to allow people to write "vcl::", at least for now.

-Brad
https://sourceforge.net/p/vxl/mailman/message/19035264/
27 June 2011 22:38 [Source: ICIS news]

HOUSTON (ICIS)--A US producer of monopropylene glycol (MPG) plans to hold July contract prices at a rollover. Buyers could not immediately be reached for comment. Increases proposed for June failed because of buyer resistance, which was fuelled by waning feedstock propylene prices. The producer said it plans to maintain the US contract price at the current level, and will not pursue a price decrease. "We intend to hold our July prices to a rollover. We don't have any intention of giving product away," the producer said. US June contract prices for industrial-grade MPG are $1.20–1.24/lb ($2,646–2,734/tonne, €1,878–1,941/tonne), as assessed by ICIS. Major US MPG producers include Arch Chemicals, Dow Chemical, Huntsman and LyondellBasell. ($1 = €0.71)
http://www.icis.com/Articles/2011/06/27/9472980/july+rollover+set+for+us+mpg++producer.html
Comments on The View from Aristeia: "Breaking all the Eggs in C++" (Scott Meyers)

Pavlo Mur: Approach with third party tools is completely wrong and is likely to fail. No "magic wand" will help if we have 10M lines of sources with a requirement to do code review and write (and physically sign!) "formal code review reports" (this is what the FDA requires from healthcare-related projects!). The best solution for this problem is to adopt for C++ something like what "use strict" does for JavaScript. Now it is up to the developer to decide: do we need to update all the 10000 files of project sources with the "magic wand" (and write tons of those "formal code review reports"), or use this new "strict" mode only for new or refactored files! ... The new "#pragma strict" or "using strict" or whatever we call it will be different from this hell just because it is part of the C++ standard and every conforming compiler is forced to support this new feature! No more reinventing the wheel and no more trying to tie square wheels invented by my company to triangle wheels invented by third-party companies!

npl: @Scott: It's fine, it helped me think about the problem again. Thanks for your time. The reason I want only constant zero is simply to enforce that code, some simple canonical "bitcheck" that has a well-defined meaning whatever the type is. x == 0 is a good candidate because it's pretty simple, "builtin" for plain enums, widely known and used.

Scott Meyers: @npl: I don't have a solution for you, I'm...

npl: @Scott Meyers: You know, I am talking about an ideal, egg-free omelette world (sounds somehow implausible). ...

    enum EType {
        eType_Somebit  = 1 << 0,
        eType_Otherbit = 1 << 1,
    };

    bool foo(EType e)
    {
        return (e & eType_Somebit) != 0;
    }

    // easily transformed (search + replace mainly) to omelette:
    enum class EType : unsigned {
        Somebit  = 1 << 0,
        Otherbit = 1 << 1,
    };
    // define & | ~ &= |= == != operators for EType, a MACRO can do this

    bool foo(EType e)
    {
        return (e & EType::Somebit) != 0;
    }

BTW, I had written some lines to explain why there can't be a "standard" way to test for a constant 0 argument - but while writing I might have found one =)

Scott Meyers: @npl: Okay, I think I see what you mean. However, I believe that [bitmask.types]/4 is simply defining terminology, not required expressions. ("The following terms apply...") As such, I think cppreference's interpretation of that clause of the Standard is incorrect. Even if we assume that [bitmask.types]/4 requires that the expression "(X & Y)" be testable to see if it's nonzero, I don't see any requirement that the zero value be a compile-time constant. That is, I believe this would be valid:

    template<typename T>
    void setToZero(T& param) { param = 0; }

    int variable;
    setToZero(variable);

    if ((X & Y) == variable) ....

npl: Cause the interface of a bitmask type requires it: it's supposed to be identical in use / interchangeable with a pre-11 plain enum.

Vincent G.: @Greg Marr: Thank you for the explanation. But IM...

Scott Meyers: @Greg Marr: I should have remembered that; I write about it in Effective Modern C++, Item 12. Thanks for reminding me!

Greg Marr: ...it's a contextual keyword, which means it has ...

Scott Meyers: @Vincent G.: I'm not familiar with the history of the placement of "override" at the end of the function declaration, sorry.

Scott Meyers: @npl: From what I can tell, your operator== function returns whether all bits in the bitmask are 0, so I don't see why you want a binary operator to test that. Why don't you just define a function something like this?

    constexpr bool noBitsAreSet(bitmask X)
    { return X == bitmask(); }

Vincent G.: About override, why not just write:

    class ... {
        ...
        override type fun(paramlist);
    };

instead of:

    virtual type fun(paramlist) override;

npl: I have found one use for nullptr, and deprecating or removing the automatic cast from 0 to nullptr would break it. When you want to provide enums as bitmask types (17.5.2.1.3), you have to define a few operations on them. set, clear, xor and negate are easy and even documented in the standard. Now taking an enum bitmask and 2 instances X, Y, defined as

    enum class bitmask : int_type {....};
    bitmask X, Y;

you would have to support 2 additional operations (noted in the standard):

    (X & Y) == 0; (X & Y) != 0;

In other words, you need operator== and operator!=, which ideally ONLY TAKE CONSTANT 0. The solution I came up with was:

    constexpr bool operator ==(bitmask X, const decltype(nullptr))
    { return X == bitmask(); }

...

Francis Grizzly Smit: I agree it's time to break those eggs. As for the comment on Python 2to3, two points: 1) the change is happening, but slowly, and 2) the change has been much slowed by the fact that the 2to3 utility was sloppily implemented - it couldn't even do trivial changes like print args --> print(args). We can do better.

jensa: Hi Scott, I cannot agree more that it is time to ... Cheers, Jens

Juan AG: Scott, I think you are right and it will be an ... P.D. I am ready for your next book ;-)

Ben Craig: Note that /Wall is extremely noisy, as it turns on all the warnings. It is roughly equivalent to GCC's and Clang's -Weverything.

Scott Meyers: @Halbersma: Thanks for the test results with ...

Halbersma: Re: the compiler warnings on your Item 12. ...

Paul Michalik: Great idea! Except, there are already enough languages which are perfect candidates for this. So just use one of them instead of making a new one!

Anonymous: I'm inclined to agree with Nevin on default initialization not being 0, at least with regards to simple stack types like int. I don't want to look at a declaration and assume that the author intended to initialise to zero; I'd rather explicitly know. ... One could argue the other way though: that they'd rather auto-initialize to 0 so that, if that wasn't intended, one can better predict what will have happened to the data/system. If one likes that latter view, then one could say default initialization to zero is done, but the user/compiler should still not let it affect its compile-time analysis of errors - though it will likely affect runtime. But overall, so far, I'm not in favour of automatic initialization, at least of simple types like int. I want the compiler and the user to be able to assume that anything not explicitly initialized is a potential source of error. ...

Scott Meyers: @Nevin ":-)": I'm proposing changing... ...

Nevin ":-)": You are basically allowing more things to legally compile and run, so there is less checking that compilers and sanitizers can do.

Example #1: the following does not compile with warnings enabled:

    int i;
    std::cout << i << std::endl;

If the behavior were changed so that i was initialized to 0, this code would have to legally compile.

Example #2: this code compiles but is caught by the sanitizer:

    foo(int& ii) { std::cout << ii << std::endl; }

    int i;
    foo(i);

With your proposed change, sanitizers could no longer diagnose such code, as it would be perfectly legal.

So, why make the change? ... Reason #2: this is a common mistake for beginners. Beginners (as well as experts and everyone in between) ought to be using sanitizers. On the whole, this kind of change seems to mask bugs instead of preventing bugs. What am I missing?

Anonymous: Backwards compatibility is indeed necessary and de...

    // disables things that make it easy for you to shoot yourself in the foot.
    lang mode strict;

    // modern code, sensible language subset

    lang enable c_varargs;
    // some code that interfaces C
    lang disable c_varargs;

As an aside, it's also INSANE that -Wall on gcc doesn't really mean everything. This is again for backwards compatibility. No! If I mean everything I really mean it. If I'm upgrading the compiler and I want to have yesterday's -Wall, then it should be versioned: -Wall-gcc49. Otherwise what's the point of all those compiler devs spending their time and effort trying to make my life easier if it's so hard to access their efforts? ...
http://scottmeyers.blogspot.com/feeds/6811099832098555409/comments/default
CC-MAIN-2018-17
refinedweb
1,809
64.3
SYNOPSIS

ctags [...]

DESCRIPTION

By default, ctags generates a file named tags in the current directory which summarizes the locations of various objects in the C or FORTRAN source files named on the command line. All files with a .c or .h suffix are treated as C source files, while files with a .f suffix are treated as FORTRAN source files. For C source code, ctags summarizes function, macro, and typedef definitions. For FORTRAN source code, it summarizes function and subroutine definitions. See tags for a description of the format of the tags file.

You can access the tags file with the command :tag name in ex and vi, and the command :t name in more. The idea is that you tell the utility which function you want to look at, and it checks the tags file to determine which source file contains the function. ctags makes special provision for the function …

ctags uses sort to sort the file by tag name, according to the POSIX locale's collation sequence.

Options

-a
     Appends output to the existing tags file rather than overwriting the file.
-B
     Produces a tags file that searches backward from the current position to find the pattern matching the tag.
-F
     Searches for tag patterns in the forward direction (default).
-f tagfile
     Generates a file named tagfile rather than the default tags.
-w
     Suppresses warning messages.
-x
     Produces a human-readable report on the standard output. The report gives the definition name, the line number where it appears in the file, the name of the file in which it appears, and the text of that line. ctags arranges this output in columns and sorts it in order by tag name according to the current locale's collation sequence. This option does not produce a tags file.

FILES

DIAGNOSTICS

Possible exit status values are:

- 0   Successful completion.
- 1   Failure due to any of the following: …

PORTABILITY

POSIX.2. x/OPEN Portability Guide 4.0. 4.2 BSD UNIX and up. Windows 7. Windows Server 2008 R2. Windows 8. Windows Server 2012. Windows 10. Windows Server 2016.

NOTE

Recognizing a function definition in C source code can be somewhat difficult. Since ctags does not know which C preprocessor symbols are defined, there may be some misplaced function definition information if sections of code within #if...#endif are not complete blocks.

ctags invokes the sort command internally. Be sure that ctags does not access the Windows sort command instead of the sort provided with PTC MKS Toolkit; otherwise, ctags fails. Modify your path search rules if you have this problem.

AVAILABILITY
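The -x report described above is easy to post-process, since each line carries the tag name, line number, file name, and source text in fixed columns. A minimal Python sketch of parsing it; the sample lines are illustrative, not real ctags output:

```python
import re

# Two made-up report lines in the `ctags -x` column layout described above.
sample_report = """\
main               12 hello.c          int main(void)
usage               5 hello.c          static void usage(void)
"""

entries = []
for line in sample_report.splitlines():
    # tag name, line number, file name, then the rest of the line verbatim
    m = re.match(r'(\S+)\s+(\d+)\s+(\S+)\s+(.*)', line)
    if m:
        tag, lineno, filename, text = m.groups()
        entries.append((tag, int(lineno), filename, text))

print(entries[0])  # → ('main', 12, 'hello.c', 'int main(void)')
```

Because the report is sorted by tag name, a parsed list like this can be binary-searched or loaded into a dictionary for quick lookups.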
https://www.mkssoftware.com/docs/man1/ctags.1.asp
UTF-8 error in macOS version of WingIDE

Hi, I'm trying to read a UTF-8 file with this Python script:

    import csv

    with open(csv_filename) as csv_fd:
        reader = csv.reader(csv_fd, delimiter=';')
        next(reader)
        for row in reader:
            print(row)

But I get an error in WingIDE (macOS):

    File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/encodings/ascii.py", line 26, in decode
        return codecs.ascii_decode(input, self.errors)[0]
    builtins.UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 227: ordinal not in range(128)

The "Encoding" fields in "Preferences - Debugger - I/O" are set to "Unicode UTF-8". If I run this script in the Terminal there is no such error, and there is also no error in the Windows version of WingIDE.

Thanks!
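A common way to sidestep this class of error is to pass the encoding explicitly to open(), so the script no longer depends on the IDE's or locale's default codec. A minimal sketch; the data here is illustrative, not taken from the original question:

```python
import csv
import io

# Passing encoding='utf-8' to open() avoids the locale-dependent default:
#
#     with open(csv_filename, encoding='utf-8') as csv_fd:
#         reader = csv.reader(csv_fd, delimiter=';')
#
# io.StringIO stands in for the file below so the sketch is self-contained.
data = io.StringIO("name;city\nАнна;Москва\n")  # non-ASCII text, like the 0xd0 byte in the traceback
reader = csv.reader(data, delimiter=';')
next(reader)                     # skip the header, as in the original script
rows = [row for row in reader]
print(rows)  # → [['Анна', 'Москва']]
```

The default encoding chosen by open() comes from the environment, which is why the same script can work in a terminal yet fail under a debugger.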
https://ask.wingware.com/question/45/utf-8-error-in-macos-version-wingide/
In another notebook, I showed how to get issue metadata and OCRd texts from a digitised journal in Trove. It's also possible to download page images and PDFs. This notebook shows how to download all the cover images from a specified journal. With some minor modifications you could download any page, or range of pages. For example, if I click on the 'Browse issues' button for the Angry Penguins broadsheet, the link that opens includes the journal identifier, nla.obj-320790312.

    # Replace the value in the single quotes with the identifier of your chosen journal
    journal_id = 'nla.obj-748141557'

    # Where do you want to save the results?
    output_dir = 'images'

    # Let's import the libraries we need.
    import requests
    from bs4 import BeautifulSoup
    import time
    import os
    import re
    import glob
    import pandas as pd
    from tqdm import tqdm_notebook
    from requests.adapters import HTTPAdapter
    from requests.packages.urllib3.util.retry import Retry
    from slugify import slugify
    from IPython.display import display, HTML, FileLink
    import zipfile
    import io
    import shutil

    s = requests.Session()
    retries = Retry(total=5, backoff_factor=1, status_forcelist=[502, 503, 504])
    s.mount('https://', HTTPAdapter(max_retries=retries))
    s.mount('http://', HTTPAdapter(max_retries=retries))

    # Set up the data directory
    image_dir = os.path.join(output_dir, journal_id)
    os.makedirs(image_dir, exist_ok=True)

    def harvest_metadata(obj_id):
        '''
        This calls an internal API from a journal landing page to extract a list of available issues.
        '''
        start_url = '{}/browse?startIdx={}&rows=20&op=c'
        # The initial startIdx value
        start = 0
        # Number of results per page
        n = 20
        issues = []
        with tqdm_notebook(desc='Issues', leave=False) as pbar:
            # If there aren't 20 results on the page then we've reached the end,
            # so continue harvesting until that happens.
            while n == 20:
                # Get the browse page
                response = s.get(start_url.format(obj_id, start), timeout=60)
                # Beautifulsoup turns the HTML into an easily navigable structure
                soup = BeautifulSoup(response.text, 'lxml')
                # Find all the divs containing issue details and loop through them
                details = soup.find_all(class_='l-item-info')
                for detail in details:
                    issue = {}
                    # Get the issue id
                    issue['id'] = detail.dt.a.string
                    rows = detail.find_all('dd')
                    try:
                        issue['title'] = rows[0].p.string.strip()
                    except (AttributeError, IndexError):
                        issue['title'] = 'title'
                    try:
                        # Get the issue details
                        issue['details'] = rows[2].p.string.strip()
                    except (AttributeError, IndexError):
                        issue['details'] = 'issue'
                    # Get the number of pages
                    try:
                        issue['pages'] = int(re.search(r'^(\d+)', detail.find('a', class_="browse-child").text, flags=re.MULTILINE).group(1))
                    except AttributeError:
                        issue['pages'] = 0
                    issues.append(issue)
                    # print(issue)
                time.sleep(0.2)
                # Increment the startIdx
                start += n
                # Set n to the number of results on the current page
                n = len(details)
                pbar.update(n)
        return issues

    def save_page(issues, output_dir, page_num=1):
        '''
        Downloads the specified page from a list of journal issues.
        If you want to download a range of pages you can set the `lastPage`
        parameter to your end point. But beware the images are pretty large.
        '''
        # Loop through the issue metadata
        for issue in tqdm_notebook(issues):
            # print(issue['id'])
            id = issue['id']
            # Check to see if the page of this issue has already been downloaded
            if not os.path.exists(os.path.join(image_dir, '{}-{}.jpg'.format(id, page_num))):
                # Change lastPage to download a range of pages
                url = '{0}/download?downloadOption=zip&firstPage={1}&lastPage={1}'.format(id, page_num - 1)
                # Get the file
                r = s.get(url, timeout=60)
                # The image is in a zip, so we need to extract the contents into the output directory
                z = zipfile.ZipFile(io.BytesIO(r.content))
                z.extractall(image_dir)
                time.sleep(0.5)

    issues = harvest_metadata(journal_id)

Convert the list of issues to a Pandas dataframe and have a look inside.

    df = pd.DataFrame(issues)
    df.head()

Save the data to a CSV file.

    df.to_csv('{}/issues.csv'.format(image_dir), index=False)

    save_page(issues, image_dir, 1)

    shutil.make_archive(image_dir, 'zip', image_dir)
    display(HTML('<b>Download results</b>'))
    display(FileLink('{}.zip'.format(image_dir)))

Created by Tim Sherratt. Work on this notebook was supported by the Humanities, Arts and Social Sciences (HASS) Data Enhanced Virtual Lab.
https://nbviewer.jupyter.org/github/GLAM-Workbench/trove-journals/blob/master/Get-page-images-from-a-Trove-journal.ipynb
Python Programming, news on the Voidspace Python Projects and all things techie.

IronPython in Action: New Chapters and WPF

The last few days I've been off work trying to get a chapter on Windows Presentation Foundation finished. I haven't got as much of the writing done as I would have liked, but I have finished the research and completed two of the three examples. WPF is great for creating funky user interfaces, and comes with some great controls. Despite the emphasis on XAML it is also easy to use from code. Here's a screenshot of the second example:

More importantly, two more chapters of IronPython in Action are available in the Manning Early Access Program. I'm pretty proud of these chapters. Chapter 7 is on testing with IronPython (including functionally testing a GUI application) and chapter 8 is about deeper aspects of Python and interacting with the .NET framework. The information in section 8.4 is vital to any non-trivial interaction with .NET, so I'm glad it has gone live:

- Agile Testing – where Dynamic Typing Shines
- The unittest Module
- Creating a TestCase
- SetUp and TearDown
- Test Suites with Multiple Modules
- Testing MultiDoc
- Using Mock Objects
- Modifying Live Objects: The Art of the Monkey Patch
- Functional Testing
- Interacting with the GUI Thread
- An AsyncExecutor for Asynchronous Interactions
- The Functional Test: Making MultiDoc Dance
- Getting Deeper into IronPython: Metaprogramming, Protocols and More
- Protocols instead of Interfaces
- A Myriad of Magic Methods
- Operator Overloading
- Iteration
- Generators
- Equality and Inequality
- Dynamic Attribute Access
- Attribute Access through Builtin Functions
- Attribute Access through Magic Methods
- Proxying Attribute Access
- Metaprogramming
- Introduction to Metaclasses
- Uses of Metaclasses
- A Profiling Metaclass
- IronPython and the CLR
- .NET Arrays
- Overloaded Methods
- 'out', 'ref', 'params' and Pointer Parameters
- Value Types
- Interfaces
- Attributes

The one concrete thing
I will report about WPF and IronPython is how to create Image controls using images from the filesystem, because I wasted so much time trying to get this to work. You need to use a bizarre relative URI format to specify the location of the image file. The code looks like this:

    from System import Uri, UriKind
    from System.Windows.Controls import Image
    from System.Windows.Media.Imaging import BitmapImage

    image = Image()
    bi = BitmapImage()
    bi.BeginInit()
    bi.UriSource = Uri("pack://siteoforigin:,,,/imagedirectory/image.jpg", UriKind.RelativeOrAbsolute)
    bi.EndInit()
    image.Source = bi

You don't need this 'site of origin' magic when using compiled XAML, but you do when using the XamlReader or creating images from code. You can read all the gory details here:

WPF FlowDocuments are cool by the way: very high-level document viewing controls for very little effort, but another new markup to learn (a subset of XAML) in order to use them.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2007-12-14 18:03:55 | | Categories: IronPython, Writing

PyCon 2008: IronPython and Resolver Talks!

Well, the accepted talk list for PyCon 2008 is up! The good news is that only one of my talks got accepted (preparing for just one talk is a lot less scary). My talk is Python in your Browser with IronPython & Silverlight, the second one on the list!

Silverlight is a new browser plugin from Microsoft. It is intended for media streaming, games and rich internet applications. It is cross-platform and cross-browser and comes with a rich programmer's API. Through the Dynamic Language Runtime, Silverlight is fully programmable with IronPython, meaning that at last client-side web applications can be written fully in Python. This talk will explore some of the things that you can do with IronPython in the browser. This includes making web apps run faster, writing 'rich' applications (or games), and embedding a Python interpreter into web pages for tutorials and documentation.

It is a good year for IronPython talks (and so it should be).
As well as my talk there are:

- IronPython: The Road Ahead (48 - Jim Hugunin)
- End-user computing without tears using Resolver, an IronPython spreadsheet (65 - Giles Thomas - the Resolver boss!)
- Using .NET Libraries in CPython (103 - Mr. Feihong Hsu - a talk on Python.NET which definitely deserves more attention and is related to IronPython)

There is also a talk by another Resolver developer (not on IronPython though):

- Getting started with test-driven development (5 - Jonathan Hartley)

It should be a great conference.

Sigh. I was installing IronPython Studio whilst writing this entry. The .NET 3.5 Setup told me I needed to close just about every application that was running before it could continue, including closing .NET 3.5 Setup itself. Somehow I didn't think that would work! Good old Microsoft.

Oh, and I just discovered iSolitaire, a solitaire game for the iPhone. Fantastic.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2007-12-12 12:25:11 | | Categories: Python, IronPython, Work

Resolver Hacks Update

Last week the Resolver Beta was released. Since then we've had several hundred (300-400 currently) downloads, which is great. The version released was 1.0 beta 4 [1]. This includes quite a few major changes since I last updated you about progress with Resolver. One of the major changes was that we made the API for working with spreadsheet objects simpler from user code [2]. This was a lot of work, but I think it was worth it. The very worst side effect was that it broke most of my examples on the Resolver Hacks website. Today I have finally got around to updating it and putting new screenshots in (Resolver got prettier):

There are around thirty pages of articles and examples to get you going with Resolver.
Some of the most useful ones are: - Cash Balance - an Example to Get You Started - Exotic & Custom Data Types - Fetching Data From the Web - Charting with Zedgraph - Bugs, Features & Futures - A Look at the Future for Resolver Some of the features that are new in Resolver in the last few months include: - Undo and redo for all spreadsheet actions (that took a long time) - Unpacking Arrays into Selection (an array or cellrange in a cell can be unpacked into a selected area with Ctrl-Shift-Enter - this took even longer than Undo and Redo!) - Cell references and ranges can be inserted into formulae using the cursor keys - Pasting 'patterns' into selection when selection is a multiple of clipboard contents - Drop down lists of items in cells - Performance improvements for large spreadsheets (several very nice optimisations here - but there is still plenty to work on) - View by type (date, number, string etc) and origin (constant, formula or set from user code) - a basic auditing mode to see what types cells contain and where the values come from - Exception tracebacks are printed to the output view (including exceptions raised in cells and worksheet formulae - fiddly to get right but very useful) - The directory of the current file is added to 'sys.path' - Pretty (!?!) coloring of the different user code sections in the code box (I didn't specify this user story!) Plus of course lots of other minor features and bug fixes, almost too numerous to mention. Even since the beta we have added several new features (once a couple of big changes were finished a few weeks ago we really got on a roll). Hopefully these will be available in beta 5 later this week. If you have already downloaded Resolver you should get notified when the update is available. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-12-11 00:06:17 | | Categories: Website, Work Resolver Sponsoring PyCon Resolver is sponsoring PyCon 2008! 
This is great news as it means that there is at least half a chance that Resolver will pay for me to go. You can see our logo up there on the PyCon 2008 Homepage. We're below google but I think we're still gold sponsors. Looks like there are some benefits to commercial software development... The PyCon 2008 Program Committee promised they would let us know today about the talk proposals. The proposal system is still in shutdown, so I guess they're still working on it. Tough job, and I'm not holding my breath, but I really want to know which of the Resolver guy's talks got accepted... Oh - Ars Technica have just released part 5 of their awesome series on the Amiga: A history of the Amiga, part 5: postlaunch blues. The Amiga was a truly beautiful piece of technological architecture and this series captures the spirit. Unfortunately some of my ebay items had non-paying buyers, so I've relisted. - Smartphone: Nokia N70 Mobile Phone. Boxed & Unlocked - Brand New Microsoft Black & Orange Laptop Case/Bag - Apple Mac OS X Tiger 10.4.8 - Unused Original Disks - Dell Axim X5 PocketPC PDA with TOMTOM GPS + Extras Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-12-10 22:50:49 | | Categories: Work, Computers, Python Resolver and the TechCrunchies Resolver is obviously a great startup, with an awesome application (I hope you've tried it). You can show your appreciation (or just help us out) by nominating 'Resolver Systems' in the Crunchies! The TechCrunch Crunchies are awards for the most innovative technical, creative and business accomplishments. We'd love to see Resolver nominated for the Best international start-up, but there are lots of other appropriate categories too. We're on a weekly release cycle now by the way, so there should be the next beta available around Wednesday. We've just reached a stage where we have about eight updates waiting to be checked in, so there should be some nice new features and defect fixes. 
Oh, and whilst we're not on the subject... if you're advertising tech jobs this month, there are some great discounts available on the Hidden Network Jobs Board: - 100OFF0712 - $100 discount ($199 listing), usable once per client - 200OFF0712 - $200 discount ($99 listing) , usable once per client Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2007-12-09 18:27:47 | | London Geeks, Scoble and Resolver So, Friday night I had a good time chatting to Robert Scoble, Dave Sifry and Dave Winer... Actually it's true. They're all en-route to Le Web 3 conference and stopped off in London. A bunch of London geeks got together at a geek dinner organised by Hugh MacLeod. Giles Thomas and I meant a load of interesting people (including the alpha geeks), like Mike butcher from TechCrunch UK and Nick Falstead the founder of fav.or.it. The split was about fifty-fifty between developer geeks of various sorts and those involved in social networking (not sure what I think of people who earn a living showing companies how to leverage social networks and blogs), including a guy with a bike shop round the corner who has a popular blog on cycling. The photowalk afterwards was cool, Dave Siffry (founder of Technorati) told us about his next big idea, but kinda long - we ended up in Picadilly around half one. Here are the two most important pictures from the photowalk: and: You can see more from: Dave Sifry and Tim Watt Yesterday Scoble interviewed three of us from Resolver. It hasn't gone up yet, but he mentioned it so he can't have thought we were too bad [1]. As we just released our beta this would be great publicity. As this blog entry has two embarrassing pictures, it might as well have a third: This is from a few years ago. Our Church did several nightclub events called Destiny and this was some us doing publicity in Northampton town centre. I think it was before I was married - I was certainly less fuzzy back then. Like this post? Digg it or Del.icio.us it. 
Posted by Fuzzyman on 2007-12-09 17:31:54 | | Mock Update - Version 0.3.1 My Mocking and Patching library has been updated again, this time with a couple of suggestions from Kevin Dangoor. Mock is a module for unit testing. It is particularly useful for mocking and patching dependencies to reduce test complexity. Changes in 0.3.1: - patch maintains the name of decorated functions for compatibility with nose test autodiscovery. - Tests decorated with patch that use the two argument form (implicit mock creation) will receive the mock(s) passed in as extra arguments. The best way to explain these changes is to show how they work. Suppose you want to unit test create_reader method of SomeClass in the following code: class SomeClass(object): def create_reader(self, filename): handle = open(filename, r) self.reader = DataReader(handle) We want to test that create_reader sets a reader attribute with a DataReader instantiated with a filehandle from the filename passed in. We can test this very easily with the help of the patch decorator to patch out the dependencies: from someclass import SomeClass class SomeClassTest(TestCase): @patch('__builtin__', 'open') @patch('datareader', 'DataReader') def test_create_reader(self, mockOpen, mockDataReader): mockDataReader.return_value = sentinel.Reader mockOpen.return_value = sentinel.Handle") The way we used to do this kind of testing at Resolver would look something like the following (using the Mock class but not the patch decorator). Because the dependencies need to be patched at the module level the patching needs to be undone afterwards - which means we need to store the original value and do the patching in a try-finally block. 
    from mock import Mock, sentinel

    class SomeClassTest(TestCase):

        def test_create_reader(self):
            import datareader
            import __builtin__
            oldDataReader = datareader.DataReader
            oldOpen = __builtin__.open
            mockDataReader = Mock()
            mockOpen = Mock()
            mockDataReader.return_value = sentinel.Reader
            mockOpen.return_value = sentinel.Handle
            datareader.DataReader = mockDataReader
            __builtin__.open = mockOpen
            try:
                ...
            finally:
                datareader.DataReader = oldDataReader
                __builtin__.open = oldOpen

Now that's a lot of code.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2007-12-09 16:10:23 | | Categories: Python, Projects, General Programming

This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
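As a side note for readers coming to this much later: this patching idea eventually landed in the standard library as unittest.mock. A minimal sketch of an equivalent test using the modern API; the DataReader and create_reader names here are illustrative stand-ins, and only open() is patched for brevity:

```python
from unittest import mock

# Illustrative stand-ins for the post's DataReader and create_reader;
# these names are hypothetical, not part of the mock library.
class DataReader:
    def __init__(self, handle):
        self.handle = handle

def create_reader(filename):
    handle = open(filename, 'r')   # patched out below, so no file is touched
    return DataReader(handle)

# patch() as a context manager replaces builtins.open for the duration
# of the block and restores it afterwards - no try-finally needed.
with mock.patch('builtins.open') as mock_open:
    mock_open.return_value = mock.sentinel.Handle
    reader = create_reader('data.txt')

assert reader.handle is mock.sentinel.Handle
mock_open.assert_called_once_with('data.txt', 'r')
```

The context manager (or the decorator form) does the save-patch-restore dance from the long example automatically, which is exactly the boilerplate the patch decorator above was designed to remove.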
http://www.voidspace.org.uk/python/weblog/arch_d7_2007_12_08.shtml
- 15 Jul, 2014 1 commit
- 14 Jul, 2014 1 commit
- 10 Jul, 2014 1 commit
- Stephen Morris authored
- 03 Jul, 2014 1 commit
- 02 Jul, 2014 6 commits
- 01 Jul, 2014 5 commits

Modified D2CfgMgr.configPermutations test to verify that parsing error messages contain position information. Fixed format of one error message in D2CfgMgr that didn't match. Hopefully, Marcin is happy now. Surrounded position info in error messages with parens. Replaced use of vector with map for tracking position values. General cleanup. Fixed Mac OS-X/gcc 4.2.1 issued warning on empty bodied while-loops in new asiolink::IntervalTimer unit tests.

- cosmetic changes in configure.ac
- comment in src/lib/testutils/testdata/Makefile.am updated

- 30 Jun, 2014 3 commits

D2 unit test IOSignalTest::hammer was scaled back to accommodate sluggish VMs. IOSignalTest::mixedSignals was failing intermittently on a NetBSD VM and was rewritten to test the quantity of signals received rather than the order. The unit tests IntervalTimerTest.intervalModeTest and IntervalTimerTest.timerReuseTest were failing under the NetBSD VM. They have been restructured to be less susceptible to timing.

- Francis Dupont authored

- 27 Jun, 2014 3 commits

There is a new function used in keactrl tests which waits for the server to shut down, rather than checking whether it is already down. This is to avoid a race condition where the process may not have completely terminated when we check that it has.

- 26 Jun, 2014 12 commits

Modified DCfgContextBase::getParam() variants to return the parameter's Element::Position. This makes it available during parsing. Modified TSIGKeyInfoParser::build to validate parameters and use position in error messages. D2CfgMgr::buildParams now validates all of the top level params prior to calling the D2Params constructor. This allows element position info to be included in error logging.

As part of merging 3407, D2's shell script tests were revamped to match work done under 3422.

- 25 Jun, 2014 7 commits

Added missing parameter and method commentary. Extended unit tests as suggested. Moved onreceipt_handler_ from SignalSet to an anonymous namespace.
https://gitlab.isc.org/isc-projects/kea/-/commits/852fb7c13332d024cdab8e594336b1cfadba757c/src
sunset (community library)

Summary

Allows calculation of sunrise, sunset, and moon phase.

Calculates Sunrise and Sunset, along with the Civil, Nautical, and Astronomical times for Sunrise/Sunset. It can also tell you the moon phase for a given time. Since then, I have updated it a bit to do some more work. It will calculate the Moon position generically. Since version 1.1.0, it will also calculate other sunrise/sunset times depending on your needs.

- Can accurately calculate Standard Sunrise and Sunset
- Can accurately calculate Nautical Sunrise and Sunset
- Can accurately calculate Civil Sunrise and Sunset
- Can accurately calculate Astronomical Sunrise and Sunset

New Github Pages

Find Doxygen documentation at

Version 1.1.1 IMPORTANT changes

I have migrated to an all lower case file name structure. Starting with master and 1.1.1, you must use

    #include <sunset.h>

instead of SunSet.h as in the previous versions. This change was originally caused by the changes to the Particle build system, where I use this library extensively. They forced me to name the file the same as the package name, which resulted in a mixed upper and lower case name. Now it's all lower case, probably the way I should have started it.

I've also changed the google test enable variable, though I'm not sure many used that. I've updated the readme below to reflect the change.

License

This is governed by the GPL2 license. See the License terms in the LICENSE file. Use it as you want, make changes as you want, but please contribute back in accordance with the GPL.

Building

Building for any cmake target

The builder requires CMake 3.0.0 or newer, which should be available in most Linux distributions.

    mkdir build
    cd build
    cmake ..
    make
    make DESTDIR=<some path> install

Note that by default, the installer will attempt to install to /usr/local/include and /usr/local/lib, or the equivalent for Windows.
Building with Google Test for library verification

You can use google test by doing the following:

    mkdir build
    cd build
    cmake -DBUILD_TESTS=ON ..
    make
    ./sunset-test

Supported targets

This should work on any platform that supports C++ 14 and later. There is a hard requirement on 32 bit systems at a minimum, due to needing full 32 bit integers for some of the work. I have used this library on the following systems successfully, and test it on a Raspberry Pi. It really does require a 32 bit system at a minimum, and due to the math needs, a 32 bit processor that has native floating point support is best. This does mean that the original Arduino and other similar 8 bit micros cannot use this library correctly.

Tested against directly

- Ubuntu 20.04 g++
- Particle Photon (latest ParticleOS release 1.5.2 works with Sunset 1.1.0)

Used with historically

- Raspberry Pi
- Omega Onion
- Teensy with GPS
- SAMD targets using PIO/VSCode

I have used the following build systems with this library as well:

- Raspberry Pi command line
- Onion cross compiled using KDevelop and Eclipse
- Arduino IDE (must be for 32 bit micros)
- VS Code for Particle

I don't use PlatformIO for much but some compile time testing. I can't help much with that platform. See notes below for the ESP devices, ESP32 and ESP8266.

Testing

I primarily use google test to validate the code running under Linux. This is done with the cmake config test above. I also run a small ino on a Particle Photon to prove that it works against a micro as well. Test results can be found for the latest release on the release page.

Details

To use SunSet with best results, make sure your clock is accurate to within a second if possible. Note that you also need an accurate timezone, as the calculation is done in UTC and the timezone is then applied before the value is returned. If your results seem off by some set number of hours, a bad or missing timezone is probably why.
- You need an accurate position, both latitude and longitude, which the library needs in order to provide accurate timing. Note that it does rely on positive or negative longitude: if you are at -100 longitude but put in 100, you will get invalid results.
- To get accurate results for your location, you need both the Latitude and Longitude, AND a local timezone.
- All math is done without a timezone (timezone = 0). Therefore, to make sure you get accurate results for your location, you must set a local timezone for the LAT and LON you are using. You can tell you made a mistake when the result you get for sunrise is negative.
- Prior to calculating sunrise or sunset, you must update the current date for the library, including the required timezone. The library doesn't track the date, so calling it every day without changing the date means you will always get the calculation for the last accurate date you gave it. If you do not set the date, it defaults to midnight, January 1st of year 0 in the Gregorian calendar.
- Since all calculations are done in UTC, it is possible to know what time sunrise is in your location without a timezone. Call calcSunriseUTC for this detail.
- This isn't very useful in the long run, so the UTC functions will be deprecated. The new civil, astro, and nautical APIs do not include the UTC analog. This is by design.
- The library returns a double that indicates how many minutes past midnight, relative to the set date, that sunrise or sunset will happen. If the sun will rise at 6am local to the set location and date, then you will get a return value of 360.0. Decimal points indicate fractions of a minute.
- Note that the library may return 359.89 for a 6 AM result. Doubles don't map to times very well, so the actual return value IS correct, but it should be rounded up if so desired to match other calculators.
- The library may return NaN where there is no real sunrise or sunset value (above the Arctic Circle in summer, for example). Checking with std::isnan() is critical if you are in, or calculating for, a location that may not have a valid sunrise/sunset. If you get NaN, then either the sun is always up or always down.
- This library does some pretty intensive math, so devices without an FPU will run slower because of it. As of version 1.1.3, this library does work on the ESP8266, but that is not an indication that it will run on all non-FPU devices.
- This library has a hard requirement on 32 bit precision for the device you are using. 8 and 16 bit micros are not supported.

The example below gives some hints for using the library; it's pretty simple. Every time you need the calculation, call for it. I wouldn't suggest caching the value unless you can handle changes in date, so the calculation stays correct relative to the date you need. SunSet is C++; no C implementation is provided. It is compiled using C++14, and any code using it should use at least C++14 as well. Newer C++ versions work too.

Releases

- 1.1.7 Maintenance release with documentation updates and a fix for a broken test
- 1.1.6 Fixing issues with library version numbering
- 1.1.5 Bug fixes
- Issue #26 - Code quality issue in function calcGeomMeanLongSun?
- Issue #28 - Add option to override cmake build settings via variables
- Issue #29 - Fix warning for platforms that cannot build shared objects
- Issue #31 - Member functions that should be const aren't
- Issue #32 - Expose calcAbsSunset style interface, so custom offsets can be used
- Issue #33 - Remove unnecessary define statements
- Issue #34 - Fix missing precision cast in calcJD
- Issue #37 - Typo in examples/esp/example.ino
- 1.1.4 Making this work for Arduino and the library manager via Include Library
- 1.1.3 Performance improvements to enable the ESP8266 to function better. Thank you to.
- 1.1.2 Bumping the library.properties license field to be correct. This forced a new release number so it would work with build systems.
- 1.1.1 Changes to support case insensitive file systems.
- 1.1.0 New capabilities. Added civil, nautical, and astronomical sunrise and sunset.
- New APIs for the new functionality. See the code for details.
- Begin to deprecate UTC functions. These will not be removed until later, if ever. They are not tested as well.
- Migrate timezone to be a double for fractional timezones. IST, for example, works correctly now.
- 1.0.11 Fixes related to making SAMD targets compile. SAMD doesn't like std::chrono, it seems.
- 1.0.10 Fixed a bug in a header file; it should build for all platforms now.
- 1.0.9: Revert some imported changes which broke the system.
- 1.0.8: Fix installation path issue and update README to include installation instructions.
- 1.0.7: Allows for use of positive or negative longitude values. Thank you to.

Moon Phases

This library also allows you to calculate the moon phase for the current day as an integer value. This means it's not perfectly accurate, but it's pretty close. To use it, call moonPhase() with an integer value that is the number of seconds since the January 1, 1970 epoch. It will do some simple math and return an integer value that represents the current phase of the moon, from 0 to 29.
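The 0-29 moon phase index can be reproduced with a generic synodic-month approximation. To be clear, this is NOT the library's exact algorithm (its reference epoch and rounding may differ), and the function name is made up; it only illustrates the idea of folding epoch seconds onto a 29.53-day lunar cycle:

```cpp
#include <cassert>
#include <cmath>

// Approximate a moon phase index (0-29) from Unix epoch seconds.
// 0 = new moon, roughly 15 = full. Generic approximation for illustration;
// the SunSet library's own moonPhase() may differ slightly.
int approxMoonPhase(long long epochSeconds) {
    const double kSynodicDays = 29.53058867;    // mean length of a lunation
    const long long kRefNewMoon = 947182440LL;  // 2000-01-06 18:14 UTC, a known new moon
    double days = (epochSeconds - kRefNewMoon) / 86400.0;
    double phase = std::fmod(days, kSynodicDays);
    if (phase < 0.0)
        phase += kSynodicDays;                  // handle times before the reference epoch
    return static_cast<int>(phase) % 30;        // fold the 29.53... tail back into 0-29
}
```

The trailing `% 30` mirrors the README's note that the fractional cycle length can occasionally push a naive truncation to 30.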
In this case, 0 is new, 29 is also new, and 15 is full. The code handles times that would cause the calculation to return 30, to avoid some limits confusion (there aren't 30 days in the lunar cycle, but it's close enough that some time values would otherwise cause it to return 30 anyway).

- Moon phase is calculated from the current time, not just the current date. As a result, if you don't populate the system time on the device you are using, the return value will be -6, which is clearly incorrect. To use the moon phase, make sure to keep the system clock accurate, OR pass the time elapsed since the Unix epoch in seconds as a positive value. Either will work, but using the system clock is less confusing.

Examples

This example is for an .ino file. Create a global object, initialize it, and use it in loop().

#include <ctime>
#include <sunset.h>

#define TIMEZONE -5
#define LATITUDE 40.0000
#define LONGITUDE -89.0000

// Note that LONGITUDE can be positive or negative; using a negative
// longitude has no impact, as it's the same either way.
// The code compensates for a negative value.

SunSet sun;

void setup() {
    // Set your clock here to get accurate time and date
    // Next, tell SunSet where we are
    sun.setPosition(LATITUDE, LONGITUDE, TIMEZONE);
}

void loop() {
    // You should always set the date to be accurate
    sun.setCurrentDate(year(), month(), day());
    // If you have daylight savings time, make sure you set the timezone appropriately as well
    sun.setTZOffset(TIMEZONE);
    double sunrise = sun.calcSunrise();
    double sunset = sun.calcSunset();
    int moonphase = sun.moonPhase(std::time(nullptr));
}

This example is for the Raspberry Pi using C++:

#include <ctime>
#include <sunset.h>

#define ONE_HOUR (60 * 60)
#define TIMEZONE -5
#define LATITUDE 40.0000
#define LONGITUDE -89.0000
// This location is near Chicago, Illinois USA

int main(int argc, char *argv[])
{
    SunSet sun;
    auto rightnow = std::time(nullptr);
    struct tm *tad = std::localtime(&rightnow);

    sun.setPosition(LATITUDE, LONGITUDE, tad->tm_gmtoff / ONE_HOUR);
    sun.setCurrentDate(tad->tm_year + 1900, tad->tm_mon + 1, tad->tm_mday);

    double sunrise = sun.calcSunrise();
    double sunset = sun.calcSunset();
    double civilSunrise = sun.calcCivilSunrise();
    double nauticalSunrise = sun.calcNauticalSunrise();
    double astroSunrise = sun.calcAstronomicalSunrise();
    int moonphase = sun.moonPhase(static_cast<int>(rightnow));
    return 0;
}

Notes

- This is a general purpose calculator, so you could calculate when sunrise was on the day Shakespeare died. Hence some of the design decisions.
- Date values are absolute, not zero based, and should not be abbreviated (e.g. don't use 15 for 2015 or 0 for January).
- This library has a hard requirement on a 32 bit micro with native hard float. Soft float micros do work, but may have issues. The math is pretty intensive.
- It is important to remember you MUST have an accurate date and time. The calculations are time sensitive, and if you aren't accurate, the results will be obviously wrong.
Note that the library does not use hours, minutes, or seconds, just the date, so syncing time frequently won't help; just make sure it's accurate at midnight so you can set the date before calling the calc functions. Knowing when to update the timezone for daylight savings time, if applicable, is also pretty important.

- It can be used as a general purpose library on any Linux machine, as well as on an Arduino or Particle Photon. You just need to compile it into your RPI or Beagle project using cmake 3.0 or later.
- UTC is not the UTC sunrise time; it is the time in Greenwich when the sun would rise at the location specified to the library. It's weird, but it allows for some flexibility when doing calculations, depending on how you keep track of time in your system. The UTC specific calls are being deprecated starting with 1.1.0.
- The Civil, Nautical, and Astronomical values open up lots of new uses for the library. They were added as a convenience, but hopefully will prove useful. These functions do not have UTC equivalents.
- I do not build or test on a Windows target, as I don't have a Windows machine to do so. I do test this on a Mac, but only lightly and not on every release right now.

ESP Devices

The popular ESP devices have some inconsistencies. While it is possible to run on the ESP8266, which has no FPU but is 32 bit, the math is slow, and if you are doing time constrained activities, there is no guarantee that this library will work for you. Testing shows it works well enough, but use it at your own risk. Using this library with an ESP8266 is not considered a valid or tested combination, though it may work for you. I will not attempt to support any issues raised against the 8266 that can't be duplicated on an ESP32. The ESP32 also has some FPU issues, though testing confirms it works very well and does not slow the system in any measurable way.
The conclusions in the links seem to indicate that a lot of the math used by this library may be slow on the ESP8266 processors. However, slow in this case is still milliseconds, so it may not matter on the 8266 at all. Your mileage may vary.

Links

You can find the original math in C code at . I got the moon work from Ben Daglish at .

Thank you to

The following contributors have helped me identify issues and add features. The individuals are listed in no particular order.
If you're using an appender layout like this:

<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%date %level [%thread] %type.%method - %message%n" />
</layout>

the conversion pattern is:

%date %level [%thread] %type.%method - %message%n

which is the default one for the fileLogAppender shipping with EPiServer (note that type & method logging is slow, but immensely useful when you need it). A log line could look like this:

2009-12-09 17:27:04,655 INFO [6] Microsoft.Samples.Runtime.Remoting.Channels.Pipe.PipeConnection.Write - 18.3.1 Scheduler info: 2780> Write string Content-Type

The whole type name is included. In most cases, you just need the name of the class and method, which will save you some logging space (more on that later) and also make your log easier to read. The %type pattern supports the syntax %type{n}, where <n> is the number of class/namespace elements to include (from the right). Changing the pattern to:

%date %level [%thread] %type{1}.%method - %message%n

gives a log line like:

2009-12-09 17:27:04,655 INFO [6] PipeConnection.Write - 18.3.1 Scheduler info: 2780> Write string Content-Type

Read more about this in the log4net SDK documentation on the PatternLayout class.
Red Hat Bugzilla – Bug 233128: opencv-python package bug on x86_64

Last modified: 2007-11-30 17:11:59 EST

Description of problem:
Putting _cv.so and _highgui.so under /usr/lib64/ while the rest of opencv-python is under /usr/lib doesn't work.

Version-Release number of selected component (if applicable):
opencv-python-1.0.0-1.fc6

How reproducible: always

Steps to Reproduce:
1. yum -y install opencv-python
2. python /usr/share/opencv/samples/python/convexhull.py

Actual results:
OpenCV Python version of convexhull
Traceback (most recent call last):
  File "/usr/share/opencv/samples/python/convexhull.py", line 6, in ?
    from opencv import cv
  File "/usr/lib/python2.4/site-packages/opencv/__init__.py", line 55, in ?
    from cv import *
  File "/usr/lib/python2.4/site-packages/opencv/cv.py", line 5, in ?
    import _cv
ImportError: No module named _cv

Expected results:
"OpenCV Python version of convexhull" on stdout and a graphical window showing a convex hull.

Additional info:
cp -a /usr/lib64/python2.4/site-packages/opencv/* /usr/lib/python2.4/site-packages/opencv/
and the test case will work.

I believe the Fedora Python packaging guidelines address this "some packages don't like different 'site_lib'/'site_arch'" problem?

Ville, Toshio - any insights about what may be going wrong here? Moving lib64 compiled binaries to /usr/lib/* doesn't sound right to me. Unfortunately I don't have access to x86_64 systems and can't test this issue, nor am I a Python specialist.

(In reply to comment #2)
> "All of the module" or all "python file"?

opencv-python consists of 2 parts: a "module" and a set of "sample applications". Are you saying both must be moved under %_libdir? As the next step, I'll be trying to move all of the module under %_libdir, but keep the applications under /usr/share.

I hope to have fixed this issue in *-1.0.0-3, which should be available for fc6 and fc7 with the next spin of packages.
#include <Libs/MRML/Core/vtkMRMLTextNode.h>

Definition at line 21 of file vtkMRMLTextNode.h.
Definition at line 26 of file vtkMRMLTextNode.h.
Definition at line 29 of file vtkMRMLTextNode.h.
Definition at line 86 of file vtkMRMLTextNode.h.

Create a storage node for this node type. If it returns nullptr, it means the node can be stored in the scene (in XML), without using a storage node.
Reimplemented from vtkMRMLStorableNode.

MRMLNode methods.
Implements vtkMRMLStorableNode.

Determines the most appropriate storage node class for the provided file name and node content.
Reimplemented from vtkMRMLStorableNode.

Get node XML tag name (like Volume, Model).
Implements vtkMRMLStorableNode.

Definition at line 52 of file vtkMRMLTextNode.h.

Set node attributes.
Reimplemented from vtkMRMLStorableNode.

Set encoding of the text. For character encoding, please refer to IANA Character Sets (). Default is VTK_ENCODING_US_ASCII.

Force the use of a storage node, regardless of text length. By default, a storage node will only be used for nodes that have been read from file (drag and drop), or for nodes that have text longer than 250 characters. This option should also be enabled for nodes with highly structured text (such as XML) that would not be good to have in the MRML.

Set text node contents and encoding. If the encoding is not specified, then it will not be changed from the current value.

Copy node content (excludes basic data, such as name and node references).

Write this node's information to a MRML file in XML format.
Reimplemented from vtkMRMLStorableNode.

Definition at line 99 of file vtkMRMLTextNode.h.
Definition at line 100 of file vtkMRMLTextNode.h.
Definition at line 98 of file vtkMRMLTextNode.h.