text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
The Qt namespace contains miscellaneous identifiers used throughout the Qt library. More... #include <Qt> The Qt namespace contains miscellaneous identifiers used throughout the Qt: This enum type defines what happens to the aspect ratio when scaling an rectangle. See also QSize::scale() and QImage::scaled(). Background mode: See also QBrush..: The ItemFlags type is a typedef for QFlags<ItemFlag>. It stores an OR combination of ItemFlag values. The key names used by Qt. See also QKeyEvent::key(). This enum describes the modifier keys.. See also KeyboardModifier and MouseButton. This enum type describes the different mouse buttons. The MouseButtons type is a typedef for QFlags<MouseButton>. It stores an OR combination of MouseButton values. See also KeyboardModifier and Modifier... and QAbstractItemDelegate::elidedText()...
http://doc.trolltech.com/4.0/qt.html
crawl-001
refinedweb
121
54.69
Chatlog 2010-09-14 From SPARQL Working Group See original RRSAgent log and preview nicely formatted version. Please justify/explain all edits to this page, in your "edit summary" text. 13:59:18 <RRSAgent> RRSAgent has joined #sparql 13:59:18 <RRSAgent> logging to 13:59:33 <Zakim> Zakim has joined #sparql 13:59:57 <kasei> Zakim, this is sparql 14:00:02 <AxelPolleres> trackbot, start meeting 14:00:04 <trackbot> RRSAgent, make logs world 14:00:05 <Zakim> ok, kasei; that matches SW_(SPARQL)10:00AM 14:00:06 <trackbot> Zakim, this will be 77277 14:00:07 <trackbot> Meeting: SPARQL Working Group Teleconference 14:00:07 <trackbot> Date: 14 September 2010 14:00:11 <Zakim> + +33.4.92.38.aaaa 14:00:15 <Zakim> ok, trackbot; I see SW_(SPARQL)10:00AM scheduled to start now 14:00:18 <AxelPolleres> chair: Axel Polleres 14:00:24 <kasei> Zakim, who is on the phone? 14:00:26 <OlivierCorby> Zakim, aaaa is me 14:00:33 <Zakim> I notice SW_(SPARQL)10:00AM has restarted 14:00:33 <AxelPolleres> agenda: 14:00:35 <Zakim> On the phone I see AxelPolleres, kasei, +33.4.92.38.aaaa 14:00:39 <Zakim> +OlivierCorby; got it 14:00:41 <Zakim> +??P18 14:01:00 <kasei> scribenick: kasei 14:01:00 <AxelPolleres> scribe: Greg Williams 14:01:07 <ivan> zakim, dial ivan-voip 14:01:07 <Zakim> ok, ivan; the call is being made 14:01:09 <Zakim> +Ivan 14:01:12 <AndyS> zakim, ??P18 is me 14:01:12 <Zakim> +AndyS; got it 14:01:26 <AxelPolleres> Zakim, who is on the phone? 14:01:32 <Zakim> On the phone I see AxelPolleres, kasei, OlivierCorby, AndyS, Ivan 14:01:37 <Zakim> + +1.603.897.aabb 14:01:48 <MattPerry> zakim, aabb is me 14:02:09 <Zakim> +MattPerry; got it 14:02:53 <Souri> Souri has joined #sparql 14:03:02 <kasei> topic: admin 14:03:10 <AxelPolleres> PROPOSED: Approve minutes at 14:03:26 <kasei> AxelPolleres: minutes from last time? 14:03:40 <AxelPolleres> RESOLVED: Approve minutes at 14:04:22 <Zakim> +??P42 14:04:22 <kasei> ... next meeting. Both chairs gone next week. 14:04:28 <Zakim> + +1.617.245.aacc 14:04:28 <kasei> ... need to find a chair. 14:04:30 <Zakim> + +1.603.897.aadd 14:04:36 <Zakim> -??P42 14:04:37 <AlexPassant> Zakim, ??P42 is me 14:04:41 <kasei> ivan: sandro may be around, but I am not sure. 14:04:44 <MattPerry> I could scribe next time 14:04:46 <AxelPolleres> ACTION: Axel to clarify chairing next time 14:04:47 <trackbot> Created ACTION-309 - Clarify chairing next time [on Axel Polleres - due 2010-09-21]. 14:05:13 <Zakim> +[IPcaller] 14:05:13 <kasei> topic: status of documents 14:05:39 <Zakim> I already had ??P42 as Guest P42 26632, AlexPassant 14:05:44 <kasei> AxelPolleres: which documents are ready for publication. what issues remain? 14:05:47 <ivan> q+ 14:05:50 <kasei> ... want to assign reviewers. 14:05:55 <kasei> ... start with query document. 14:06:16 <LeeF> Reviews by end of this call? :-) 14:06:20 <kasei> ivan: good to know the times involved in reviewing 14:06:41 <Zakim> -[IPcaller] 14:06:47 <ivan> ack ivan 14:06:49 <Zakim> +??P33 14:06:50 <LeeF> zakim, aacc is me <kasei> subtopic: Query document 14:07:00 <kasei> AxelPolleres: Query document. Andy sent a list of which sections are ready for review. 14:07:06 <Zakim> +LeeF; got it 14:07:09 <kasei> AndyS: sent list of sections that had changed from 1.0 14:07:22 <AndyS> 14:07:58 <ivan> q+ 14:08:02 <kasei> AxelPolleres: which sections should we assign for review? Can we review some sections now? 14:08:25 <kasei> ... are there stable parts reviewable now? 14:08:39 <kasei> AndyS: thought we were aiming for working draft publication out. If so, then the whole document is reviewable. 14:08:43 <Zakim> + +44.186.528.aaee 14:08:52 <bglimm> Zakim, +44.186.528.aaee is me 14:08:52 <Zakim> +bglimm; got it 14:09:13 <kasei> AxelPolleres: with query, we probably need another WD publication. will anything happen before we publish/review? 14:09:30 <ivan> ack ivan 14:09:44 <kasei> ivan: I thought we were talking about LC reviews. 14:09:50 <bglimm> Zakim, mute me 14:09:50 <Zakim> bglimm should now be muted 14:09:54 <kasei> ... now I hear something different. what are we talking about? 14:10:21 <AxelPolleres> 14:10:29 <AxelPolleres> 14:10:56 <kasei> AxelPolleres: I understood in our meeting on Aug 24 that we suggested the way forward is to have editors put missing pieces in, go with one more round of WD 14:11:16 <kasei> ... we'll incorporate reviews from outside. 14:11:23 <kasei> ... try to get the documents out, and in parallel get LC ready. 14:11:29 <kasei> ... is that just my understanding? 14:11:58 <kasei> ivan: that's fine if it's the plan. If we are looking for LC reviewers, then this should happen only when the whole document is ready. 14:12:08 <kasei> ... reviewing just one part of a document should not happen. 14:13:01 <kasei> AxelPolleres: idea was to have missing pieces in [by now?], but that hasn't happened yet. 14:13:13 <kasei> ... could have WG reviewers for the parts that are done. 14:13:28 <kasei> ... want an indication of what parts are finished, what's unstable. 14:13:46 <kasei> ... the list AndyS sent around is what has changed, not what is ready. 14:13:57 <kasei> AndyS: indication to people what sections are relevant. 14:14:05 <kasei> AxelPolleres: can you provide a list of what's ready/stable? 14:15:00 <kasei> ... for going forward, we can assign reviewers to go over it in the next 1/2 weeks? 14:15:27 <kasei> AndyS: isn't clear to me if the reviews right now are for LC or just WD? 14:15:47 <kasei> AxelPolleres: can we get 2 people to review document in current state? 14:15:52 <kasei> I can 14:15:55 <MattPerry> I can too 14:16:08 <kasei> AxelPolleres: Greg and Matt will review. 14:16:39 <kasei> AxelPolleres: looking for having the documents ready to publish. we should also think about the date we want to publish. 14:16:53 <kasei> ... is one week possible? 14:16:59 <MattPerry> 2 week is better 14:17:03 <kasei> I can probably do 1, but 2 better. 14:17:12 <kasei> ... let's discuss in 2 weeks. 14:17:30 <kasei> ... we'll try for 2 weeks for all documents. 14:17:55 <AxelPolleres> ACTION: greg to review queryt for WD publication 14:17:55 <trackbot> Created ACTION-310 - Review queryt for WD publication [on Gregory Williams - due 2010-09-21]. 14:18:10 <AxelPolleres> ACTION: matt to review query for WD publication 14:18:10 <trackbot> Sorry, couldn't find user - matt <kasei> subtopic: Update document 14:18:26 <kasei> AxelPolleres: Update document. 14:18:49 <kasei> AlexPassant: the formal model is not complete yet. 14:18:55 <kasei> ... review could start by Monday next week. 14:19:10 <kasei> ... formal model hasn't been reviewed yet. maybe 2 reviewers plus one just for the model. 14:19:34 <kasei> AxelPolleres: is the document ready from your point of view? are all issues fixed? 14:19:51 <kasei> AlexPassant: no, not yet. issues will be ready by Monday. will be able to say on monday if ready for LC or if more time is needed. 14:20:15 <kasei> AxelPolleres: we should have at least some reviews to get out a WD. 14:20:30 <kasei> ... AlexPassant can inform reviewers when it is ready for review. 14:20:48 <kasei> ... volunteers for reviewing update? 14:20:51 <AxelPolleres> ACTION: axel to review update 14:20:51 <trackbot> Could not create new action (failed to parse response from server) - please contact sysreq with the details of what happened. 14:20:51 <trackbot> Could not create new action (unparseable data in server response: local variable 'd' referenced before assignment) - please contact sysreq with the details of what happened. 14:21:23 <AxelPolleres> ACTION: AndyS to review update formal model 14:21:23 <trackbot> Created ACTION-312 - Review update formal model [on Andy Seaborne - due 2010-09-21]. 14:21:36 <AxelPolleres> ACTION: axel to review Update 14:21:36 <trackbot> Created ACTION-313 - Review Update [on Axel Polleres - due 2010-09-21]. 14:21:57 <AndyS> (assumes Alex has had time to add more formal model) 14:22:00 <AxelPolleres> ACTION: alex to inform andy/axel for review readyness 14:22:00 <trackbot> Created ACTION-314 - Inform andy/axel for review readyness [on Alexandre Passant - due 2010-09-21]. <kasei> subtopic: Entailment document 14:22:12 <kasei> AxelPolleres: Entailment document 14:22:15 <bglimm> Zakim, unmute me 14:22:20 <Zakim> bglimm should no longer be muted 14:22:28 <kasei> bglimm: the document is ready for review. 14:22:36 <ivan> q+ 14:22:40 <kasei> ... want to add example for RDF-based semantics, but that's an informative section so doesn't require waiting. 14:23:07 <kasei> AxelPolleres: made progress with sandro on issues around RIF. 14:23:26 <kasei> ... progress, but will not have it in final shape for this round of WD. 14:23:42 <kasei> ... should mark these points in the document, and will have to review again before LC. 14:23:58 <kasei> ... can we assign reviewers for OWL and RDF parts? 14:24:19 <kasei> ... does Chime still have open issues in the document? 14:24:31 <kasei> ivan: still have to agree on URIs for the namespace document. 14:25:04 <kasei> ... one issue is rdf import. another is what URI to use for entailment. 14:25:15 <kasei> ... not sure where we are on that. 14:25:22 <ivan> q- 14:25:25 <kasei> ... everything leads to Chime. 14:26:00 <AxelPolleres> ACTION: Birte to follow up with Chime on review readiness of entailment 14:26:01 <trackbot> Created ACTION-315 - Follow up with Chime on review readiness of entailment [on Birte Glimm - due 2010-09-21]. 14:26:23 <OlivierCorby> I can review 14:26:28 <kasei> AxelPolleres: does it make sense to look for entailment reviewers, to be ready whenever issues are dealt with? 14:26:48 <AxelPolleres> ACTION: Olivier to review entailment whenever he gets ok from Chime/Birte 14:26:48 <trackbot> Created ACTION-316 - Review entailment whenever he gets ok from Chime/Birte [on Olivier Corby - due 2010-09-21]. 14:26:57 <kasei> ... Olivier can review. Anybody else? 14:27:14 <kasei> ... Zakim can pick a victim. 14:28:00 <kasei> AxelPolleres: need reviewers from the WG. 14:28:03 <AxelPolleres> Zakim, pick a victim 14:28:03 <Zakim> Not knowing who is chairing or who scribed recently, I propose AndyS 14:28:04 <kasei> ... we'll try Zakim. 14:28:29 <AxelPolleres> Zakim, pick a victim 14:28:29 <Zakim> Not knowing who is chairing or who scribed recently, I propose bglimm 14:28:39 <AxelPolleres> Zakim, pick a victim 14:28:39 <Zakim> Not knowing who is chairing or who scribed recently, I propose AndyS 14:29:07 <AndyS> Zakim, pick Axel 14:29:07 <Zakim> I don't understand 'pick Axel', AndyS 14:29:13 <kasei> (Zakim is being uncooperative.) 14:29:19 <kasei> AxelPolleres: LeeF, can you review? 14:29:34 <kasei> LeeF: yes, but don't feel qualified for some of it. 14:29:41 <AxelPolleres> ACTION: Lee to review entailment 14:29:41 <trackbot> Created ACTION-317 - Review entailment [on Lee Feigenbaum - due 2010-09-21]. <kasei> subtopic: HTTP Update document 14:29:57 <kasei> AxelPolleres: HTTP Update 14:30:03 <kasei> ... Chime isn't around, so can't make a decision today. <kasei> subtopic: Service Description document 14:30:07 <kasei> AxelPolleres: Service Description 14:30:24 <kasei> ... any volunteers? 14:30:24 <AlexPassant> I volunteer for SD 14:30:36 <AxelPolleres> ACTION: alex to review SD 14:30:36 <trackbot> Created ACTION-318 - Review SD [on Alexandre Passant - due 2010-09-21]. 14:31:43 <kasei> AxelPolleres: is it ready for review? 14:32:02 <bglimm> I can review 14:32:10 <kasei> kasei: yes, content is all reviewable. open issues for adding cross-links to other documents (e.g. protocol) 14:32:12 <AxelPolleres> ACTION: birte to review SD 14:32:12 <trackbot> Created ACTION-319 - Review SD [on Birte Glimm - due 2010-09-21]. 14:32:19 <kasei> AxelPolleres: Alex and Birte will review SD. <kasei> subtopic: Protocol document 14:32:27 <kasei> AxelPolleres: Protocol document 14:32:33 <kasei> ... still have to figure out the status. 14:32:39 <LeeF> Uothing new, i think we shouldn't poublish it this time around 14:32:54 <kasei> ... will leave it out for this round. 14:32:58 <AxelPolleres> protocol to be left out for this round. <kasei> subtopic: Test Suite 14:33:09 <kasei> AxelPolleres: Test Suite 14:33:27 <kasei> ... question about how we publish it. 14:33:43 <kasei> ... does it make sense to put the test suite in rec-track? 14:33:52 <kasei> ... or sufficient to have it as a note? 14:34:07 <kasei> ... will check our previous discussions. 14:34:19 <kasei> AndyS: I don't see how we can have feedback on implementations unless test suite is rec-track. 14:35:10 <kasei> AndyS: is this about the test case document, or the tests themselves? 14:35:20 <kasei> AxelPolleres: we have to check whether those we've dicussed are marked as approved. 14:35:34 <kasei> ... I would publish the whole document structure, but need a separate overview document. 14:35:41 <kasei> ... only have the readme document right now. 14:35:57 <kasei> ... doesn't make sense to publish this alone. maybe we should hold back. 14:36:14 <kasei> ... on the other hand, we're losing time if we don't do it now. 14:36:32 <kasei> ... I won't be able to work on it in the next 2 weeks. 14:36:44 <kasei> ... I can promise to do it in 3 weeks, then we revisit. 14:37:12 <AxelPolleres> ACTION: Axel to bring test suite in shape for FPWD within 3 weeks 14:37:12 <trackbot> Created ACTION-320 - Bring test suite in shape for FPWD within 3 weeks [on Axel Polleres - due 2010-09-21]. <kasei> subtopic: Overview document 14:37:26 <kasei> AxelPolleres: Overview document 14:37:36 <kasei> ... can probably put it out as FPWD 14:37:42 <kasei> ... any opinions? 14:37:56 <kasei> link in the agenda doesn't work 14:38:01 <AndyS> 14:38:32 <AxelPolleres> 14:38:48 <kasei> maybe just for me 14:38:54 <kasei> q+ 14:39:23 <ivan> ack kasei 14:39:28 <LeeF> I don't think the overview document is ready to be published. 14:39:53 <kasei> kasei: more comfortable if FPWD on overview if there was content in the major sections 14:40:04 <kasei> ivan: not intended for a rec document, right? 14:40:10 <kasei> ... timing is different than for other docs. 14:40:20 <kasei> ... no urgency to publish the overview right now. 14:40:56 <AndyS> Axel: Overview is not REC track, it's a NOTE 14:41:01 <AxelPolleres> no urgency for Overview document at the moment, if we publish as note, no hurry 14:41:06 <pgearon_> pgearon_ has joined #sparql 14:41:29 <kasei> AxelPolleres: that's it for the agenda. hoping next week we can discuss LET/BIND. 14:41:42 <AxelPolleres> q? 14:41:43 <kasei> ... any other issues? 14:41:51 <kasei> AndyS: are we actually having a meeting next week? 14:42:02 <kasei> AxelPolleres: who is not around next week? 14:42:03 <ivan> I am at risk 14:42:05 <AxelPolleres> regrets for next week? 14:42:32 <AndyS> I am around. Seems to be only me. Decisions, decisions. 14:42:53 <bglimm> I am 14:43:01 <kasei> AxelPolleres: hard to have a chair next week. ok to skip next week? 14:43:19 <AxelPolleres> ivan?, axel, lee, souri 14:43:25 <kasei> ivan: propose you wait for sandro to come back and ask him if he can chair. 14:43:38 <LeeF> Steve is still on holiday next week, right? 14:43:43 <LeeF> could be a very productive meeting :) 14:44:06 <kasei> AndyS: propose chairs send out email for next week so progress can be made even without a telecon. 14:44:35 <pgearon_> pgearon_ has left #sparql 14:45:21 <ivan> zakim, drop me 14:45:21 <Zakim> Ivan is being disconnected 14:45:22 <Zakim> -Ivan 14:45:23 <Zakim> -LeeF 14:45:25 <Zakim> -??P33 14:45:26 <Zakim> - +1.603.897.aadd 14:45:27 <AxelPolleres> adjourned 14:45:27 <Zakim> -bglimm 14:45:28 <Zakim> -MattPerry 14:45:32 <Zakim> -OlivierCorby 14:45:38 <Zakim> -AndyS 14:46:04 <Zakim> -kasei 14:46:07 <AxelPolleres> axel: please try to get the promised reivews in in time, possibly earlier, such that we can decide to publish WDs in 2, latest 3 weeks. # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000270
http://www.w3.org/2009/sparql/wiki/index.php?title=Chatlog_2010-09-14&oldid=2468
CC-MAIN-2014-52
refinedweb
2,919
74.59
In SQL Server 2000 and earlier versions of SQL Server, you had one language to use in the database layer: T-SQL (Transact-SQL). T-SQL is well suited to tasks such as data storage and retrieval but it is not an all-purpose programming language. Many programming tasks were difficult or impossible to do with T-SQL. If it was possible to do these tasks in T-SQL, often the code to carry out the tasks was verbose and complex. Writing the necessary code was difficult and maintaining the code was problematic. Another result of the limitations of T-SQL for tasks other than data manipulation was that developers often turned to extended stored procedures to carry out tasks that T-SQL was poorly suited to undertake. Problems with extended stored procedures include lack of security and reliability. In SQL Server 2005, Microsoft has added support to allow you to use managed code in the database layer. Managed code is code that runs in the .NET Framework's Common Language Runtime (CLR). The support for the Common Language Runtime means that you, or developer colleagues, can use code created in Visual Basic .NET or Visual C# .NET inside SQL Server 2005. Languages such as Visual Basic.NET and C#.NET are much better suited than T-SQL to many programming tasks, such as numeric manipulation, just to name one. So, for example, if you have complex number crunching that you want to do on some data, how could you best get the job done? In SQL Server 2000, you would probably have had to use an extended stored procedure. In SQL Server 2005, you have a new, more reliable, and more secure option to use the built-in Common Language Runtime capabilities. SQL Server 2005 controls how the code runs in the CLR. If a CLR process is using too much memory or CPU cycles, SQL Server can shut the process down, which ensures that SQL Server continues to run efficiently. CLR Integration SQL Server 2005 hosts the .NET Framework 2.0 Common Language Runtime (CLR). This is the same version of the .NET Framework that Visual Studio 2005 uses. You can write code in any .NET language including Visual Basic .NET Visual C# .NET Visual C++ Other .NET language can also produce the intermediate language (IL) that the CLR supports. In Visual Studio 2005, it is the previously listed languages that are supported in terms of creating .NET projects. Most developers of CLR code that's intended to run in SQL Server 2005 use one of these three languages and create the project in Visual Studio 2005. You can possibly use a development environment other than Visual Studio 2005. However, Visual Studio 2005 provides such closely integrated support, including a SQL Server Project template, that many developers make it their first choice for creating managed code to run in SQL Server 2005. You can use one of the .NET languages to create any of the following: Procedures Triggers Functions User-defined types User-defined aggregates Visual Studio 2005 supports the following tasks for managed code intended for use in SQL Server 2005: Development Deployment Debugging Development Visual Studio 2005 has a new project type — the SQL Server Project — for development of CLR projects in SQL Server 2005. You use that to create a CLR project as in Figure 1 project as in Figure 1 The Visual Studio environment makes working with Visual C# code or Visual Basic .NET code easy. The SQL Server project has many new screens. Detailed steps of using the SQL Server project in the Visual Studio environment are beyond the scope of this article as in Figure 2. are beyond the scope of this article as in Figure 2. Visual Studio 2005 has support for many useful debugging features. You can debug seamlessly across the language boundaries between T-SQL and Visual Basic .NET or Visual C#. Equally, the type of connection to the SQL Server isn't important because both HTTP (HyperText Transfer Protocol, the protocol used on the World Wide Web) and TDS (Tabular Data Stream, the protocol used by SQL Server) are supported. Manual coding and deployment If you choose to create your .NET code manually and deploy it in the same way, you need to follow these broad steps: When writing stored procedures, functions, and triggers, the .NET class is specified as static if written in C# or specified as shared if written in Visual Basic .NET. User-defined types and user-defined aggregates are written as full classes. The developer compiles the code that he has written. This creates an assembly. After creating the assembly, you use the CREATE ASSEMBLY statement to upload the assembly into SQL Server. To create a T-SQL object corresponding to a procedure contained in an assembly, you use the CREATE PROCEDURE statement. You use the CREATE FUNCTION, CREATE TRIGGER, CREATE TYPE, and CREATE AGGREGATE statements for the same purpose for functions, triggers, types, and aggregates respectively. After creating a T-SQL object, then you can use the object in your T-SQL code in the normal way. To create a simple Visual C# example and deploy it manually to the local SQL Server instance, follow these steps: 1. Open a text editor and type the following C# code: using System; using System.Data; using Microsoft.SqlServer.Server; using System.Data.SqlTypes; public class SQLProc { [Microsoft.SqlServer.Server.SqlProcedure] public static void SQLProcTest() { SqlContext.Pipe.Send("The SQLProc example works!\n"); } } Notice the use of the System, System.data, Microsoft.SQLServer. Server, and System.Data.SqlTypes namespaces. You use these namespaces often when writing .NET code for use in SQL Server 2005. 2. Navigate to the location of the C# compiler. It's located in C:\Windows\ Microsoft.NET\Framework\v2.0.50727. At the command line, type csc /target:library C:\location of CSharp File\SQLProc.cs A dll called SQLProc.dll is created in the .NET Framework folder. More often, you would add the .NET Framework folder to your PATH environment variable. 3. Open SQL Server Management Studio and click the Database Engine Query button to create a new database engine query. Create an assembly in the desired SQL Server 2005 instance, using this code: CREATE ASSEMBLY SQLProc FROM 'c:\windows\microsoft.net\framework\v2.0.50727 \SQLProc.dll' WITH PERMISSION_SET = SAFE Notice the permission setting is SAFE, because the procedure does not need to access anything external to SQL Server. 4. Create a procedure called SQLProc by using this code: CREATE PROCEDURE SQLProc AS EXTERNAL NAME SQLProc.SQLProcTest.SQLProc 5. Try to execute the SQLProc procedure by using the following code: EXEC SQLProc--Will fail since CLR is not enabled Unless you have explicitly turned on the CLR support, attempting to run the stored procedure fails. 6. To enable the CLR, run the following code: sp_configure 'clr enabled', 1 GO RECONFIGURE GO 7. Execute the SQLProc user-defined procedure that you created earlier. EXEC SQLProc-- Now it will execute successfully Comparison with Traditional Approaches In this section, I look briefly at the potential benefits of CLR integration in SQL Server 2005 and then briefly compare using CLR with each of three traditional approaches: T-SQL Extended stored procedures Middle tier techniques Potential benefits of CLR integration The integration of the CLR offers developers the following advantages: A richer programming model: You have access to programming constructs that are absent from T-SQL. In addition, you have access to the classes of the .NET Framework and can use those classes as a basis for your code. Improved security: Compared to the extended stored procedures you might have used with SQL Server 2000 to carry out tasks not possible or convenient with T-SQL, the CLR offers improved security. User-defined types and aggregates: You can use .NET languages to create your own user-defined types and aggregates. Development in a familiar development environment: Many developers are already familiar with using one of the versions of Visual Studio before Visual Studio 2005. For such developers, creating SQL Server projects in Visual Studio 2005 is an easy step, building on what they already know. Potentially improved performance: The .NET languages potentially offer improved performance and scalability. T-SQL lacks many constructs used in more general purpose programming languages. For example, it does not have arrays, for each loops, collections, or classes. By contrast .NET languages, such as Visual Basic .NET and Visual C# .NET, has support for the preceding constructs and also has object-oriented capabilities such as inheritance, encapsulation, and polymorphism. When the purpose of the code is not simply to manipulate data, the .NET languages and the CLR can be a better choice. The Base Class Library has many classes that support useful functionality including numeric manipulation, string manipulation, file access, and cryptography. If you need to carry out complex numeric manipulation of data, it is likely that Visual Basic .NET or Visual C# .NET is a better choice than T-SQL. Similarly, if you need to carry out complex text handling, the regular expression support in Visual Basic .NET and Visual C# .NET provides much more control than, for example, the LIKE keyword in T-SQL. SQL Server 2005 doesn't support all the classes that are part of the .NET Framework 2.0. The code is intended to run inside SQL Server 2005, so some classes — for example, those for windowing — are inappropriate in that context and are not supported. The Common Language Runtime provides a safer environment for code to run in. For example, it prevents code reading memory that hasn't been written and helps avoid situations where code accesses unmanaged memory. In addition, type safety in the CLR ensures that types are manipulated only in appropriate ways. Taken together, these features of the CLR remove many causes of errors. For larger projects, the ability to organize code by using classes and namespaces allows the developer to structure the code in a way that is more easily understood. Such improved code structure allows you to easily create code and also more easily maintain the code
http://developmentsolutionsjunction.blogspot.com/2011/03/using-common-language-runtime.html
CC-MAIN-2018-05
refinedweb
1,687
57.27
Qt Creator: linking libs from multiple projects I'm a long time user of Visual Studio trying to switch to Qt Creator. I have a project(solution in VS) that consists of several sub-projects. Few libraries that depend on each other and a main app (exe). To make it simple lets consider the following file structure. This is the default layout QtCreator creates for 2 projects : @AppMain appmain.pro main.cpp AppMain-build debug AppMain.exe release AppMain.exe Makefile etc. AppLib1 applib1.pro applib1.h applib1.cpp AppLib1-build debug applib1.a release applib1.a Makefile etc.@ How do I link form the AppMain to this AppLib1? In the main.cpp I've included .h file like so: @#include "../AppLib1/applib1.h"@ The compilation fails obviously because of the missing dependencies. How do I add this? In VS this was done via "Project dependencies" dialog. I found something that looks similar in QtCreator in the "Projects" pane, "Dependencies" tab, but selecting AppLib1 from the list doesn't change anything. Linker is still showing undefined references from the lib. There is also option to add a library in the appmain.pro file editor from the context menu by selecting "Add library...", but it only allows me to select .lib files, which I don't have since I'm using minGW (library is in .a file). Can I add it somehow manually by LIBS += in the .pro file? I need it to switch between the debug and release applib1.a file, and the paths can't be absolute because this project is in version control used on several machines. I'm using RC of Qt Creator 2.1 with minGW on Windows7 x64 Any help will be great. You should add manually to the .pro file something like this: @ debug:LIBS+=debug_library_file_name release:LIBS+=release_library_file_name @ Also you should set "Dependencies" in the "Projects" pane. You may use session (menu File - Session) to save the settings and load them on next run. I am not sure what exactly "library_file_name" is on Windows, probably it is "library_file_name.a" Path in library_file_name may be relative. Also see this post: I ended up with something like this: @debug: LIBS += -L"../AppLib1-build/debug" release: LIBS += -L"../AppLib1-build/release" LIBS += -lApplib1 @ Wow, I can see now that the transition from VS will be super-confusing for me (the project is quite large), all this manual "configuration" work.. Anyway, thanks a lot blex. One question though - what exactly does checking projects in "Dependencies" tab do? It doesn't seem to matter if it's checked or not in my case. [quote author="crossblades" date="1290890172"] One question though - what exactly does checking projects in "Dependencies" tab do? It doesn't seem to matter if it's checked or not in my case.[/quote] I think it is build order [quote author="crossblades" date="1290890172"]I ended up with something like this: @debug: LIBS += -L"../AppLib1-build/debug" release: LIBS += -L"../AppLib1-build/release" LIBS += -lApplib1 @ [/quote] You also can add this to shorten the includes: @ INCLUDEPATH += ../Applib1 @ Then this include works: @ #include "applib1.h" @ [quote author="crossblades" date="1290890172"]I ended up One question though - what exactly does checking projects in "Dependencies" tab do? It doesn't seem to matter if it's checked or not in my case.[/quote] If you set it up, the build order is calculated correctly and if you modify the lib the app is relinked afterwards to pick up the changes. If you don't you might end up with an old application that is not linked against the new build of your lib. - tobias.hunger Moderators I would suggest turning both library and application into one project by adding another .pro-file using the SUBDIRS template. That is a way to wrap many .pro-files into one project. Using this combined project it is way more straight forward to set library and include pathes. If you only rarely work on the library and do not want to have it open in creator all the time then you might want to consider to properly install the library in your system... that makes working with it way easier, too. Thanks for all replies. My whole project consists of several "base" libraries and a few exe projects - something similar to Qt - some base libs are held in QtCore, QtGui etc and bunch of executables share this base (Creator, Designer, Linguist..), except they're statically linked by default. I'll look into the suggested SUBDIRS template. From Your description this looks like something that might be useful to me. Btw are there any good docs to the .pro files format? I can't seem to find any "oficial" spec, so I'm learning by going through Qt srcs, but it's hard if You don't exactly know what You're looking. At this beginning stage all of the components are changing/growing often. I usually have all or almost all of them opened at the same time. What do you mean by properly installing the libs? I don't think there is any standard way to "install" a library(.a or .lib) in windows, except for maybe adding it to the environment path, but I don't need that. You can find the complete documentation "here":. The same content is provided in Qt Assistant, btw. I think Tobias meant "installing" in the sense of installing it (a DLL probably) into some system directory. But from what you've writte this seems not to be the solution for you. Thanks! I'm really scratching my head how could I miss that.
https://forum.qt.io/topic/1959/qt-creator-linking-libs-from-multiple-projects/5
CC-MAIN-2018-43
refinedweb
939
76.72
Linux / RHEL - Writing Global Value from Cache to STDOUT on the Command Line OS Terminal Hello, Is it possible to write global output values to STDOUT, in a similar way to how csession can take routine intput, like below: (In Cache Terminal) %SYS>d ##class(%SYSTEM.License). License Server summary view of active key. Distributed license use: Current License Units Used = 1 Maximum License Units Used = 1 License Units Authorized = 200 Local license use: Current Connections = 1 Maximum Connections = 1 Current Users = 1 Maximum Users = 2 (From Linux OS Terminal) [jhipp@test-sbox ~]# csession TEST "##class(%SYSTEM.License). License Server summary view of active key. Distributed license use: Current License Units Used = 0 Maximum License Units Used = 1 License Units Authorized = 200 Local license use: Current Connections = 0 Maximum Connections = 1 Current Users = 1 Maximum Users = 2 This does not work, but I am wondering if there is a recommended solution: (Cache Terminal) %SYS>w ##class(Security.System). 8304 (Linux OS Terminal) [jhipp@test-sbox ~]# csession TEST "##class(Security.System). <INVALID ARGUMENT> [jhipp@test-sbox ~]# csession TEST "w ##class(Security.System). <INVALID ARGUMENT> A workaround (that I think is a bit clunky, which is why I am asking): [jhipp@test-sbox ~]# echo -e "w ##class(Security.System). Node: test-sbox, Instance: TEST %SYS> 8304 %SYS> I can parse out the line output with grep or awk, but that is probably not the best way to achieve this. Thanks, - James James, for this one, there is no easy answer. But first, here are some corrections FYI: So it's this: csession TEST -U %SYS '##class(Security.System).AutheEnabledGetStored("SYSTEM")' But the problem is that this expression doesn't output its value, and you can't include "write" at the beginning. You should probably describe exactly what you're trying to do in more detail, and someone here will make a suggestion. Hey Joel, Thanks for the input, single quotes is easier than using slashes to escape the double quote characters. That particular command may not be the best example to accomplish what I am looking for, but it was more of just a general question about the best way to write out global values. For instance, we have a script that checks for Users that exists in Cache, so that requires us to write out a global boolean value for Security.Users.Exists(). And similar things like that. Thanks, James Hello Alexey, I appreciate your input. This method accomplishes what I mentioned earlier as well in my workaround. I personally do not like using 'EOF' in my scripts because it can be very sensitive with indentation and whitespace. But regardless, with either method I am going to have to parse out the 5th line of output with awk if I just want to return the true value and not the entire Cache terminal output. I would consider your solution a 'workaround' as well , but you are correct this does work. I just did not know if there was a way to only return the global value to STDOUT much like cconsole can do with routine calls. Thanks, James The simplest way to interact from within bash with Caché looks like this: Output of Caché `write` and `zwrite` commands will go to STDOUT. As usual, you can redirect it wherever you want, e.g. csession TEST -U%SYS << EOF >> /home/james/mysession.log As parsing csession log can be a nasty task, I usually try to avoid it by construct: Hi James, Agree with you, parsing terminal output is not the smartest solution. I always try to avoid it using intermediate files. E.g. (from real life): Here the default DB of the namespace $NspToCheck (or $ZError code) is written to $mydir/db_temp file, than it goes to $DbQms shell variable and processed as needed. Initial answer was amended. Social networks InterSystems resources To leave a comment or answer to post please log in Please log in To leave a post please log in
https://community.intersystems.com/post/linux-rhel-writing-global-value-cache-stdout-command-line-os-terminal
CC-MAIN-2020-50
refinedweb
662
59.53
You can only use a clock... Plugin category: Mechanics (Not sure) Suggested name: HidePlugin What I want: Hello everyone! The plugin I'm looking for is one that when you... I'm still looking for this.. Anyone? Please close this thread. Thank you guys! <3 Plugin category: Fun & Economy Suggested name: CratesPlus What I want: I want a plugin like CratesReloaded. I was fine using that plugin until I... Rocoty I want it when you right click the clock it makes players hide. Then a delay of 5 seconds. Then if you right click again it makes them... Well there is not errors.. Just nothing happens :/ Hello my magic clock is not working for some reason. Code: Main Class: package me.Sean0402.MagicClock; import java.util.ArrayList; import... wptcraft has not been resolved. wptcraft thank you very much! wptcraft I currently have it so when you right click a redstone torch on is opens a gui like hppixels clock. And then you select one to choose.... @Irdemolition yes it is. Hi guys! How do I get the dye item? To go inside my inventory such as green dye for when the players is on. And the red dye when the players are... 1928i what you could do it make then spawn at a Location of your choice. then do this for instance. Mob m = (Mob)... Ok guys. You can stop posting comments now lol FerusGrim It's not copying my default config. so the "Message" string won't be there? ChipDev I know. FerusGrim when I try to edit the config it goes back to what I had it before I edited it? I'm having problems with it still.. Giving me errors when I change the code and put's it back to it's original code. package... Separate names with a comma.
https://dl.bukkit.org/search/97468260/
CC-MAIN-2020-24
refinedweb
304
87.82
25 July 2008 00:44 [Source: ICIS news] NEW DELHI (ICIS news)--?xml:namespace> With the restart, operations were back to normal at all of GNFC’s plants at its production complex at Narmadanagar in Bharuch district of Gujarat State. The company had shut seven fertilizers and chemical plants on 3 July due to a problem with an air compressor at its fuel oil-based ammonia plant. The methanol-II plant was already shut for revamp. The revamped plant has nameplate capacity of 157,750 tonnes/year. The company had resumed ammonia production on 17 July. Subsequently, it restarted its urea, methyl formate, formic acid, acetic acid, ammonium nitrate and toluene di-isocynate
http://www.icis.com/Articles/2008/07/25/9142971/indias-gnfc-restarts-revamped-methanol-plant.html
CC-MAIN-2015-11
refinedweb
113
56.35
Scrapy extension to write scraped items using Django models Project description scrapy-djangoitem is an extension that allows you to define Scrapy items using existing Django models. This utility provides a new class, named DjangoItem, that you can use as a regular Scrapy item and link it to a Django model with its django_model attribute. Start using it right away by importing it from this package: from scrapy_djangoitem import DjangoItem Installation Starting with v1.1 both Python 2.7 and Python 3.4/3.5 are supported. For Python 3 you need Scrapy v1.1 or above. Latest tested Django version is Django 1.9. Install from PyPI using: pip install scrapy-djangoitem Introduction. Usage_djangoitem import DjangoItem class PersonItem(DjangoItem): django_model = Person DjangoItem works just like Scrapy items: >>> p = PersonItem() >>> p['name'] = 'John' >>> p['age'] = '22' To obtain the Django model from the item, we call the extra method DjangoItem.save() of the DjangoItem: >>> person = p.save() >>> person.name 'John' >>> person.age '22' >>> person.id 1 The model is already saved when we call DjangoItem.save(), we can prevent this by calling it with commit=False. We can use commit=False in DjangoItem.save() method to obtain an unsaved model: >>> person = p.save(commit=False) >>> person.name 'John' >>> person.age '22' >>> person.id None As said before, we can add other fields to the item: import scrapy from scrapy_djangoitem import DjangoItem class PersonItem(DjangoItem): django_model = Person sex = scrapy.Field() >>> p = PersonItem() >>> p['name'] = 'John' >>> p['age'] = '22' >>> p['sex'] = 'M' And we can override the fields of the model with your own: class PersonItem(DjangoItem): django_model = Person name = scrapy.Field(default='No Name') This is useful to provide properties to the field, like a default or any other property that your project uses. Those additional fields won’t be taken into account when doing a DjangoItem.save(). Caveats applications (such as a web crawler), specially if the database is highly normalized and with many indices. Setup To use the Django models outside the Django application you need to set up the DJANGO_SETTINGS_MODULE environment variable and –in most cases– modify the PYTHONPATH environment variable to be able to import the settings module. There are many ways to do this depending on your use case and preferences. Below is detailed one of the simplest ways to do it. Suppose your Django project is named mysite, is located in the path /home/projects/mysite and you have created an app myapp with the model Person. That means your directory structure is something like this: /home/projects/mysite ├── manage.py ├── myapp │ ├── __init__.py │ ├── models.py │ ├── tests.py │ └── views.py └── mysite ├── __init__.py ├── settings.py ├── urls.py └── wsgi.py Then you need to add /home/projects/mysite to the PYTHONPATH environment variable and set up the environment variable DJANGO_SETTINGS_MODULE to mysite.settings. That can be done in your Scrapy’s settings file by adding the lines below: import sys sys.path.append('/home/projects/mysite') import os os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings' Notice that we modify the sys.path variable instead the PYTHONPATH environment variable as we are already within the python runtime. If everything is right, you should be able to start the scrapy shell command and import the model Person (i.e. from myapp.models import Person). Starting with Django 1.8 you also have to explicitly set up Django if using it outside a manage.py context (see Django Docs): import django django.setup() Development Test suite from the tests directory can be run using tox by running: tox …using the configuration in tox.ini. The Python interpreters used have to be installed locally on the system. Changelog v1.1.1 (2016-05-04) - Distribute as universal wheel - Fix README’s markup v1.1 (2016-05-04) - Python 3.4/3.5 support - Making tests work with Django 1.9 again v1.0 (2015-04-29) - Initial version Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/scrapy-djangoitem/
CC-MAIN-2021-17
refinedweb
669
59.6
With parsers, for example, it amounts to you have a context-free vs. a context-sensitive language. The functions hidden behind a monadic bind are effectively opaque to any sort of analysis, whereas the static structure of an applicative can be analyzed as much as you want. Ed Kmett does this in his trifecta parsing library (I think there's a couple of other libraries that also do this), but you have to use the applicative interface explicitly where possible to take advantage of the additional optimizations. This would also have benefits for other sorts of EDSLs, for the same reason. An applicative computation might for example be sparked and processed in parallel, whereas it's a lot harder (impossible) to do that if your structure isn't determined beforehand. On Sun, Sep 4, 2011 at 12:24 AM, Ivan Lazar Miljenovic < ivan.miljenovic at gmail.com> wrote: > On 4 September 2011 12:34, Daniel Peebles <pumpkingod at gmail.com> wrote: > > Hi all, > > For example, if I write in a do block: > > x <- action1 > > y <- action2 > > z <- action3 > > return (f x y z) > > that doesn't require any of the context-sensitivty that Monads give you, > and > > could be processed a lot more efficiently by a clever Applicative > instance > > (a parser, for instance). > > What advantage is there in using Applicative rather than Monad for > this? Does it _really_ lead to an efficiency increase? > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > IvanMiljenovic.wordpress.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
http://www.haskell.org/pipermail/haskell-cafe/2011-September/095123.html
CC-MAIN-2014-15
refinedweb
252
55.54
Introduction to Linked List – Explanation and Implementation Linked list is a linear data structure, linear refers to storing the elements sequentially in the form of nodes. Unlike arrays, here each element is linked using pointers rather storing them continuously. Advantages of using Linked list against Array: - Size of the block should be known in advance in Array but not in Linked List. - Insertion and Deletion are costly in array even if related info is known. Say deletion of a known element or insertion to preceding/succeeding element, here in both case linked list takes O(1) time however array causes O(n). Disadvantage: - Consumes more memory. - Random access is not possible. - Performance wise array is better due to poor locality of reference in Linked List especially spatial locality. Say we want to store elements 10, 20 and 30 in the linked list data structure. In the image above 3 nodes are used, each having 2 parts- - Data part – It is for storing element data. - Link part – This part has a reference which holds the address of the next node. Each node is pointing to next node and next is to next. The last one contains Null in its reference section. Head is used to point the starting node using which linked list is accessed. Thus, Linked list can be visualized as a chain of nodes where every node point to the next node Declaring, Initializing and Implementation in C language: Declaration – - For Data, int data; - For Link/Next, it is going to point a structure of the same type. struct node_type *link; Thus the declaration would be, typedef struct node_type{ int data; struct node_type *link; }node; typedef node *list; Initialization and implementing Linked list in C – // C program to traverse a link list #include<stdio.h> #include<stdlib.h> //DECLARATION struct node { int data; struct node *next; }; // This function prints contents of linked list from given node void printLinkedList(struct node *list) { while (list) { printf(" %d ", list->data); list = list->next; // accessing and assigning the next element of list } } //)); //ASSIGNINMENT first->data = 10; //assign data in first node first->next = second; // Link first node with second second->data = 20; //assign data to second node second->next = third; third->data = 30; //assign data to third node third->next = NULL; head = first; printLinkedList(head); return 0; } Output:- 10 20 30 Declaring, Initializing and Implementing Linked List in Java : package com.codingeek.datastructure.linkedlist; public class SinglyLinkedListTraversal { //Main to initiate the program. public static void main(String[] args) { //Initializing linked list. LinkedList<Integer> linkedList = new LinkedList<Integer>(); linkedList.add(10); linkedList.add(20); linkedList.add(30); //Traversing and printing data of linked list. linkedList.traverseAndPrint(); } } /* * This class holds the references and performs all operation on link list. */ class LinkedList<T> { private Node<T> head; private Node<T> tail; // Adds a single node at the end of link list. public void add(T element) { Node<T> node = new Node<T>(element); if (head == null) { head = node; tail = node; } else { tail.setNext(node); tail = node; } } // Traverses current state of linked list and print data members. public void traverseAndPrint() { Node<T> current = head; while (current != null) { System.out.print(" " + current.getData()); current = current.getNext(); } } } /* * Object of this class is a single node in linked list. */ class Node<T> { T data; Node<T> next; public Node(T element) { data = element; next = null; } public T getData() { return data; } public Node<T> getNext() { return next; } public void setNext(Node<T> next) { this.next = next; } } Output:- 10 20 30 Note:- Above is the basic representation of linked list data structure in Java but Java also provides it’s implementation – java.util.LinkedList. We will always use this in real world programming solutions. Few points: - A Linked list can be of type Singly Linked list, Doubly linked list or Circular Linked list. - It can be implemented as queue or stack. - Applications of Linked list - Memory management in OS, Symbol table management - MRU/LRU(Most/Least recently used) - Tree, Graph - Stack and Queue - Hashing etc. and many Knowledge is most useful when liberated and shared. Share this to motivate us to keep writing such online tutorials for free and do comment if anything is missing or wrong or you need any kind of help. Keep Learning… Happy Learning.. 🙂
https://www.codingeek.com/data-structure/introduction-to-linked-list-explanation-and-implementation/
CC-MAIN-2018-26
refinedweb
706
53.71
basically that is my question hahahahah I create a function and test both, apparently is the same thing. would like to know if it really is the same or is just my lack of knowledge on the subject. My code: import sys #3 moon weight program #using sys.stdin.readline() mini-program using prompt def moon_weight_sys(): #print("What is your weight on earth? ") #weight_earth = int(sys.stdin.readline()) weight_earth = int(input("What is your weight on earth? ")) #print("How munch do you gain in weight each year? ") #weight_increased = int(sys.stdin.readline()) weight_increased = int(input("How munch do you gain in weight each year? ")) #print("In how many years? ") #years_num = int(sys.stdin.readline()) + 1 years_num = int(input("In how many years? ")) + 1 moon_weight_now = weight_earth * 0.165 for y in range(1, years_num): print(“Year {}: My weight on the moon is {}.”.format(y, moon_weight_now)) weight_earth = weight_earth + weight_increased moon_weight_now = weight_earth * 0.165 moon_weight_sys()
https://forum.learncodethehardway.com/t/whats-the-difference-between-input-and-sys-stdin-readline/1474
CC-MAIN-2022-40
refinedweb
152
77.74
#include <types.h> #include <types.h> Inheritance diagram for UL: [private] Prevent default construction. [inline] Construct a UL from a sequence of bytes. Construct a UL as a copy of another UL. Copy constructor. Construct a UL from an end-swapped UUID. Fast compare a UL based on testing most-likely to fail bytes first. We use an unrolled loop with modified order for best efficiency DRAGONS: There may be a slightly faster way that will prevent pipeline stalling, but this is fast enough! Fast compare a UL based on testing most-likely to fail bytes first *IGNORING THE VERSION NUMBER*. Set a UL from a UUID, does end swapping. Produce a human-readable string in one of the "standard" formats. Reimplemented from Identifier< 16 >.
http://freemxf.org/mxflib-docs/mxflib-1.0.0-docs/classmxflib_1_1_u_l.html
CC-MAIN-2018-05
refinedweb
125
68.97
07 October 2009 16:50 [Source: ICIS news] By Nigel Davis ?xml:namespace> Polyethylene (PE) and polypropylene (PP) demand dropped in 2008 and is expected to fall in 2009 between 5% and 10% from those year-ago levels, LyondellBasell believes. Asia continues to be a bright spot and over the longer term the company still sees relatively healthy But gone are the days when the market was expanding at 8%, 9% or 10%. “We have very moderate growth rates in our plans for polymers,” de Vries said in an interview with ICIS news on the sidelines of the 43rd annual European Petrochemical Association (EPCA) meeting in This has been a year of turmoil for LyondellBasell and other producers in the industry. But the The third quarter started relatively well, helped along by exports from the “The business environment is not going to get better for a company like us,” de Vries said, looking to next year. “With more capacity coming on stream in the The sector will get a better idea of where real demand is during this quarter, he believes. Product pipelines have emptied and filled again since the fourth quarter of 2008. LyondellBasell used to run its polymer businesses with at least 35 days of PP and PE stocks. It has run for the past six months and more with stocks down to 20 days. This is not the easiest of places to be but illustrates the fact that during this downturn suppliers and customers have been forced to do things differently. The trend is apparent across other chemicals markets. There just isn’t the working capital available to run businesses in the way they were being run. Companies of every sort are doing all they can now to preserve cash. Firms are going into the fourth quarter with low stock levels and more used to running product lines tightly. Cashflow management is important through myriad product chains. This control will not be relaxed until some supply/demand tightness returns, and in polyolefins particularly, that is likely to be some time off. The output from large, new, low-cost production facilities in the Stronger margins will only be made when a more favourable supply/demand balance for producers is achieved. The threat primarily is to commodity grade business, but new plants are coming on stream that ultimately have the potential to make higher added-value polymers such as pipe-grade plastics. It is difficult to imagine operators not aiming to penetrate such markets when they have the ability to do so. Producers established in Europe and They are also having to deal differently with customers who might only be prepared to accept modest step-wide price increases given the parlous state of their own downstream demand. The arrival of material from new plants in the The true nature of the impact of the capacity-driven trough for the business, however, particularly in the European market, has yet to be realised.
http://www.icis.com/Articles/2009/10/07/9253567/insight-polymer-makers-are-in-a-new-place.html
CC-MAIN-2014-52
refinedweb
492
56.69
I finally took a little time to get my head around POSIX process groups and sessions. Fair warning: if you don’t know what a PID is in the context of a POSIX process — or, indeed, you think “POSIX” sounds like some type of screw head - then you probably don’t need to bother reading this post. Right, assuming there’s anybody left… For awhile now I’ve had a sort of peripheral awareness of some additional attributes of processes about which I’ve never really bothered too much. The most important attributes, with which most people reading this will likely be familiar, are the PID1, PPID2, UID3 and GID4. There are a few wrinkles like real and effective IDs, but they’re beyond the scope of this post. If you run your favourite process listing command with enough detail (such as ps -eF for example) then you’ll see most of these shown. However, there are a couple of extra attributes that I’ve never looked at in much detail, which are process group ID (PGID) and session ID (SID). Today I decided that ignorance wasn’t bliss at all, in fact it was a blasted pain, so I’ve looked up what they mean. It turns out that they’re quite simple and, potentially, quite useful. So, here goes. A process group is more or less exactly what it sounds like — a way to group processes together. This is useful because it’s possible to direct a signal to a process group instead of a specific process. This can be done with the killpg() system call which takes a PGID as a parameter and has the effect of sending the specified signal to every process within that group. You can specify a PGID of 0 to specify the group in which the calling process is found, and actually a standard kill() call with a PID of 0 does the same thing. The group in which a process is located defaults to the group of the process which created it, but it can be changed with the setpgid() call. Indeed, this is what the shell does when it executes pipelines of commands - each pipeline is put into its own process group, separate from the shell’s group. If any of those commands fork their own children then they’ll also be added to the same group, unless they actively change it. Note that a “pipeline” in this context also applies to the degenerate case of a single command (a pipeline of one!). Conventionally the PGID of a group is the same as the PID of the first process placed in that group, which is referred to as the process group leader. This is an important concept if you want to change your session, but to explain that I’ll have to explain what a session is. The session is another level of grouping — i.e. a session contains one or more process groups. Sessions are generally tied to a controlling terminal5. For example, all process groups created by a particular shell will have the same session ID, which will generally be the PID of the shell process — as an aside, this is a quick way to locate all the commands created by a particular shell process. One important aspect of a session is that when moving a process between process groups, both groups must be members of the same session or the operation fails. A process can be moved to a new session using the setsid() system call. This will create a new process group and place the calling process into it, and then create a new session and place the new process group within that. There are restrictions on which processes may do this, however — see below. Note that the new session will have no controlling terminal, so this system call offers a helpful way for processes to detach from their controlling terminal when they daemonise. Each session has a foreground process group, which is effectively the currently executing command. This is the group to which a signal will be sent if generated by the terminal (e.g. SIGINT in response to CTRL-C or SIGTSTOP in response to CTRL-Z). Also, only processes within the foreground group can read from the terminal. Just as a process group has a leader so does a session have a session leader process, which is often a process group leader as well. Both process group and session leaders have various restrictions on them: session leaders can’t be moved between process groups and process group leaders can’t be moved to a new session with setsid(). The session leader is also the process to receive a SIGHUP if the controlling terminal for the session is closed6. Given all this, we can see how it fits into the “standard” process for daemonising: fork()and terminate the parent — this ensures the new process is an orphan (adopted by init) and also returns control to the calling shell. setsid()to create a new process group and session — we can only do this after the fork()above because otherwise we’d be a process group leader. This has detached us from the controlling terminal, which is exactly what daemons should do. fork()a second time — I believe this is simply so we’re not longer a session leader and can never re-acquire a controlling terminal. There may be additional, more subtle, reasons of which I’m unaware. chdir("/")or some other directory on which the daemon relies — this is to avoid the daemon keeping a directory active which would prevent it being unmounted. If there’s some directory the daemon requires then it actually may be preferable for it to stay active to prevent accidental unmounting. umask(0)just to clear any permissions mask we may have inherited. close()standard file descriptors 0, 1and 2, which are standard input, output and error respectively. Since we’re detached from our terminal it’s not clear where they’ve been directed to anyway. Note that some daemons determine the highest possible file descriptor using sysconf()with _SC_OPEN_MAXand call close()on them all (ignoring errors) just in case the parent had any other open files — this may be overkill if you’re confident in the behaviour of your calling process, but if you’re at all uncertain it’s the safest course, to avoid wasting file descriptors (of which there’s a finite number available). open()three times for each of the file descriptors, redirecting them to somewhere sensible. This could be /dev/nullor /dev/console, or perhaps a log file you’ve already opened. Some code assumes file descriptors will be allocated sequentially so they just assume that the next three open()calls will get descriptors 0- 2, but to be doubly sure you can use dup2()— in that case, however, you should have opened the replacement descriptor before the previous step, otherwise you could have a clash. A detailed description of all these steps is outside the scope of this post, but I wanted to reproduce the full procedure here for context — you can find more details all over the web. Let’s see some illustrations of process groups and sessions. Note that the ps invocations I used below are quite Linux-specific, but you should be able to tailor them to your particular Unix variant with a bit of squinting at the man page. First, we run a simple ps to show the relevant IDs: $ ps -Ho pid,ppid,pgid,tpgid,sess,args PID PPID PGID TPGID SESS COMMAND 1684 3057 1684 59829 1684 /bin/bash 59829 1684 59829 59829 1684 ps -Ho pid,ppid,pgid,tpgid,sess,args Here we can see the bash shell has PID 1684 and this matches the SID of both itself and the ps command which was executing. The PPID of the ps matches the PID of bash as one would expect and the ps process has been assigned a new PGID which matches its own PID, so it is the process group leader. The TPGID field indicates the foreground process group within the session, in this case the PGID of ps since that’s the currently executing command in the session. Second, we’ll add an additional pipeline of commands into the mix: $ cat | sed 's/hello/goodbye/' & [1] 17391 [1]+ Stopped cat | sed 's/hello/goodbye/' $ ps -Ho pid,ppid,pgid,tpgid,sess,args PID PPID PGID TPGID SESS COMMAND 1684 3057 1684 17401 1684 /bin/bash 17390 1684 17390 17401 1684 cat 17391 1684 17390 17401 1684 sed s/hello/goodbye/ 17401 1684 17401 17401 1684 ps -Ho pid,ppid,pgid,tpgid,sess,args Note: you can ignore the “stopped” message, this is a result of cat trying to read from its standard input and failing because it’s in the background. Only the foreground process group can read from the terminal, a process in any other group which tries will be sent SIGTSTP and hence be suspended. So, we can see that both cat and sed have been placed into the same PGID by the shell here, which is different to the PGID of ps. The TPGID of all the entries is still the same as the PGID of ps because ps is again the currently executing command for all groups within the session. Since I’ve used the same shell process as in the previous example, the SID is the same. Now we can see an example of signals being set to the foreground process group (and not just a single process) by executing the following Python script7: import signal import os import time # Initialise do_exit to False, On CTRL-C (SIGINT), set it to True. do_exit = False def handle_signal(signum, stack): global do_exit do_exit = True # Install signal handler. signal.signal(signal.SIGINT, handle_signal) # Fork into two processes to illustrate both receiving a signal. child_pid = os.fork() if child_pid == 0: print "Child is waiting..." else: print "Parent is waiting..." # Loop until the SIGINT handler sets do_exit to True. while not do_exit: time.sleep(0.1) # Print appropriate message and exit. if child_pid == 0: print "Child has caught signal." else: print "Parent has caught signal." Execute this script and then, once parent and child are waiting, hit CTRL-C. You should see the following output, potentially with parent and child messages swapped over in either or both cases: $ python signal-catcher.py Child is waiting... Parent is waiting... Parent has caught signal. Child has caught signal. This clearly shows both processes receiving the SIGINT as a result of CTRL-C. For comparison, if we only send the signal to the child process: $ python signal-catcher.py & [1] 33635 Child is waiting... Parent is waiting... $ ps -Ho pid,ppid,pgid,tpgid,sess,args PID PPID PGID TPGID SESS COMMAND 1684 3057 1684 33680 1684 /bin/bash 33635 1684 33635 33680 1684 python signal-catcher.py 33640 33635 33635 33680 1684 python signal-catcher.py 33680 1684 33680 33680 1684 ps -Ho pid,ppid,pgid,tpgid,sess,args $ kill -INT 33640 Child has caught signal. $ ps -Ho pid,ppid,pgid,tpgid,sess,args PID PPID PGID TPGID SESS COMMAND 1684 3057 1684 33744 1684 /bin/bash 33635 1684 33635 33744 1684 python signal-catcher.py 33640 33635 33635 33744 1684 [python] <defunct> 33744 1684 33744 33744 1684 ps -Ho pid,ppid,pgid,tpgid,sess,args $ kill -INT 33635 Parent has caught signal. [1]+ Done python signal-catcher.py Since the command was executed in the background the output gets interleaved with the shell prompt, so I’ve tidied that up for clarity in the output above. The pertinent details are shown unchanged, however — in particular, you can see the child process (only) receives the signal and terminates, remaining only as a defunct zombie process until its parent reaps its return code with something like wait(). Since our little Python script never reaps this return code, the child process’ descriptor will linger as long as the parent remains alive. We can see that the PGID of the child python process is the same as the parent, as expected. This example also shows clearly the difference between signalling the process group, as in the first example, and signalling a single process, as shown here. Finally, for completeness, let’s see the same example but signalling the parent process first and then the child: $ python signal-catcher.py & [1] 49149 Parent is waiting... Child is waiting... $ ps -Ho pid,ppid,pgid,tpgid,sess,args PID PPID PGID TPGID SESS COMMAND 1684 3057 1684 50394 1684 /bin/bash 49149 1684 49149 50394 1684 python signal-catcher.py 49154 49149 49149 50394 1684 python signal-catcher.py 50394 1684 50394 50394 1684 ps -Ho pid,ppid,pgid,tpgid,sess,args $ kill -INT 49149 Parent has caught signal. [1]+ Done python signal-catcher.py $ ps -Ho pid,ppid,pgid,tpgid,sess,args PID PPID PGID TPGID SESS COMMAND 1684 3057 1684 51192 1684 /bin/bash 51192 1684 51192 51192 1684 ps -Ho pid,ppid,pgid,tpgid,sess,args 49154 1 49149 51192 1684 python signal-catcher.py $ kill -INT 49154 Child has caught signal. This example shows broadly the same principles, but there are a couple of interesting points to note. Firstly, once the parent is dead the shell indicates that the job is “done” — it doesn’t monitor the children of commands that it executes, just when the command itself is completed. Secondly, after the parent has terminated note how the PPID of the child is set to 1. This is because orphaned processes are automatically adopted by the init process (the root of all processes on the system). If this didn’t happen then they would always remain around as defunct zombies after terminating since there’s no parent process to reap their return code. The init process is implemented such that it calls wait() on all of its children to reap their return codes. Note how even though it’s been adopted, it still shares the same session and is still attached to the same terminal, so ps still displays it without need for the -e (or -A) option. Hopefully that’s cleared things up for someone. Well, it’s definitely cleared things up for me — I should try explaining things to myself more often. Process ID, a unique identifier for a process. ↩ Parent process ID, the PID of the process which created this one. ↩ User ID, the user as which the process is executing. ↩ Group ID, the group as which the process is executing. ↩ Although it’s quite possible for a session to have no controlling terminal — this typically the case with daemon processes, for example. ↩ In reality, of course, the situation is a little more complicated and there are circumstances that SIGHUP is not set, such as the terminal having the CLOCAL flag set. You can find the gory details in the man pages. ↩ It’s pretty grotty as far as code quality is concerned, but it’s purely for illustrative purposes. ↩
https://www.andy-pearce.com/blog/posts/2013/Aug/process-groups-and-sessions/
CC-MAIN-2021-49
refinedweb
2,505
68.3
Mathcomp Because C++ doesn't have math-like chained comparisons. Observed Behaviour C/C++ sucks like this: int p= 121; // warning: comparison of boolean constant with arithmetic constant (39) is always true. // Not what we want! if (-17 < p < 39) { cout<< "foo"; } else { cout<< "bar"; } cout<< endl; In the above the order of evaluation is: (-17 < p) → boolwith the end result true. (true < 39) → boolwith the end result true(both because of integer promotion and because of bool comparison, so we're double struck here). Expected Behaviour To be able to write a chained comparison the way it's used in MATH. What we can do: int p= 121; if (something?() < -17 < p < 39) { cout<< "foo"; } else { cout<< "bar"; } cout<< endl; So here mathcomp provides that something. Usage #include "mathcomp/mathcomp.hpp" // ... in code using mathcomp::mathcomp; int p= 121; if (mathcomp< -17 <= p < 39) { cout<< "foo"; } else { cout<< "bar"; } Mathcomp supports left-to-right ordered chained comparisons that means operators < , <= and ==. Note the use of operator< at the beginning to activate chaining comparison. Operators < , <= , == , << can be used to activate chaining. License mathcomp is licensed under the LGPL aka GNU Lesser Public License. License verbatim is provided in /doc/tip/LICENSE.txt. Visit also for license details and for a rundown.
http://chiselapp.com/user/lmachucab/repository/mathcomp/index
CC-MAIN-2019-51
refinedweb
211
66.64
Text::Xslate - Scalable template engine for Perl5 This document describes Text::Xslate version 3.3.4. (easy but slow) my $template = q{ <h1><: $title :></h1> <ul> : for $books -> $book { <li><: $book.title :></li> : } # for </ul> }; print $tx->render_string($template, \%vars); Xslate is developed, which is the best amazingly high scores in instance_reuse condition (i.e. for persistent applications). There are also benchmarks in benchmark/ directory in the Xslate distribution. Xslate employs the smart escaping strategy, where a template engine escapes all the HTML metacharacters in template expressions unless users mark values as raw. That is, the output is unlikely to prone to XSS. Xslate supports the template cascading, which allows you to extend templates with block modifiers. It is like a traditional template inclusion, but is more powerful. This mechanism is also called as template inheritance. Xslate is ready to enhance. You can add functions and methods to the template engine and even add a new syntax via extending the parser. Creates a new Xslate template engine with options. You can reuse this instance for multiple calls to render(). Possible options are: path => \@path // ['.'] Specifies the include paths, which may be directory names or virtual paths, i.e. HASH references which contain $file_name => $content pairs. Note that if you use taint mode ( -T), you have to give absolute paths to path and cache_dir. Otherwise you'll get errors because they depend on the current working directory which might not be secure. will be used. You should specify this option for productions to avoid conflicts of template names. function => \%functions Specifies a function map which contains name-coderef pairs. A function f may be called as f($arg) or $arg | f in templates. Note that these registered functions have to return a text string, not a binary string unless you want to handle bytes in whole templates. Make sure what you want to use returns either a text string or a binary string. For example, some methods of Time::Piece might return a binary string which is encoded in UTF-8, so you'll option, and also can invoke any object methods in templates, Xslate doesn't require specific namespaces for plugins. html_builder_module => [$module => ?\@import_args, ...] Imports functions from $module, wrapping each function with html_builder(). input_layer => $perliolayers // ':utf8' Specifies PerlIO layers to open template files. cascade and include or xml, smart escaping is applied to template expressions. That is, they are interpolated via the html_escape filter. If $type is text smartemeta will be applied to. If you give undef, the line code style is disabled. This option is passed to the parser via the compiler. tag_start => $str // $parser_defined_str Specify the token to start inline code as a string, which quotemeta will be applied to. This option is passed to the parser via the compiler. tag_end => $str // $parser_defined_str Specify the token to end inline code as a string, which quotemeta will be applied to. This option is passed to the parser via the compiler. header => \@template_files Specify the header template files, which are inserted to the head of each template. This option is passed to the compiler. footer => \@template_files Specify the footer template files, which are inserted to the foot of each template. This option is passed to the compiler. warn_handler => \&cb Specify the callback &cb which is called on warnings. die_handler => \&cb Specify the callback &cb which is called on fatal errors. pre_process_handler => \&cb Specify the callback &cb which is called after templates are loaded from the disk in order to pre-process template. For example: # Remove withespace from templates my $tx = Text::Xslate->new( pre_process_handler => sub { my $text = shift; $text=~s/\s+//g; return $text; } ); The first argument is the template text string, which can be both text strings and byte strings. This filter is applied only to files, not a string template for render_string. Renders a template file with given variables, and returns the result. \%vars is optional. Note that $file may be cached according to the cache level.. Loads $file into memory for following render(). Compiles and saves it as disk caches if needed. Returns the current Xslate engine while executing. Otherwise returns undef. This method is significant when it is called by template functions and methods. Returns the current variable table, namely the second argument of render() while executing. Otherwise returns undef. Returns the current file name while executing. Otherwise returns undef. This method is significant when it is called by template functions and methods. Returns the current line number while executing. Otherwise returns undef. This method is significant when it is called by template functions and methods. Adds the argument into the output buffer. This method is available on executing. Checks whether the syntax of $file is valid or invalid as Xslate. If it detects the invalid factor, this method throws the exception. automatically be applied to all template expressions. This function is available in templates as the html filter, but you're better off using unmark_raw to ensure that expressions are html-escaped. uri_escape($str :Str) :Str Escapes URI unsafe characters in $str, and returns it. This function is available in templates as the uri filter. html_builder { block } | \&function :CodeRef Wraps a block or &function with mark_raw so that the new subroutine will return a raw string. This function is used to tell the xslate engine that &function is an HTML builder that returns HTML sources. For example: sub some_html_builder { my @args = @_; my $html; # build HTML ... return $html; } my $tx = Text::Xslate->new( function => { some_html_builder => html_builder(\&some_html_builder), }, ); See also Text::Xslate::Manual::Cookbook.. There are multiple template syntaxes available in Xslate. Kolon is the default syntax, using <: ... :> inline code and : ... line code, which is explained in Text::Xslate::Syntax::Kolon. Metakolon is the same as Kolon except for using [% ... %] inline code and %% ... line code, instead of <: ... :> and : .... TTerse is a syntax that is a subset of Template-Toolkit 2 (and partially TT3), which is explained in Text::Xslate::Syntax::TTerse. There's HTML::Template compatible layers in CPAN. Text::Xslate::Syntax::HTMLTemplate is a syntax for HTML::Template. HTML::Template::Parser is a converter from HTML::Template to Text::Xslate. There are common notes in Xslate. Note that nil (i.e. undef in Perl) handling is different from Perl's. Basically it does nothing, but verbose => 2 will produce warnings on it. Prints nothing. Returns nil. That is, nil.foo.bar.baz produces nil. Returns nil. That is, nil.foo().bar().baz() produces nil. Dealt as an empty array. $var == nil returns true if and only if $var is nil. Perl 5.8.1 or later. If you have a C compiler, the XS backend will be used. Otherwise the pure Perl backend will be used. <: [ $foo->bar @list ] :>. WEB: PROJECT HOME: REPOSITORY: Please make a file on. Patches are always welcome. =head1. Fuji, Goro (gfx) <gfuji@cpan.org>. Makamaka Hannyaharamitu (makamaka) (Text::Xslate::PP) Maki, Daisuke (lestrrat) (Text::Xslate::Runner) This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~gfuji/Text-Xslate/lib/Text/Xslate.pm
CC-MAIN-2016-50
refinedweb
1,166
68.97
1. Description With the introduction of flash 8 "flash.display.BitmapData" class thousands of new application can now be made using Flash.. In this tutorial we will focus our attention to the BitmapData.gePixel() method in order to transform a portion of flash movie into a JPEG (created with PHP/GD) BitmapData has different ways to get pixel color informations: - getPixel(x:Number, y:Number) : Number Returns an integer representing an RGB pixel value from a BitmapData object at a specific point (x, y). - getPixel32(x:Number, y:Number) : Number Returns an ARGB color value that contains alpha channel data as well as RGB data. - getColorBoundsRect(mask:Number, color:Number, [findColor:Boolean]) : Rectangle Determines a rectangular region that fully encloses all pixels of a given color within the bitmap image. What we will use in this tutorial is getPixel(). Here a basic example on how it can work: import flash.display.* var bmp:BitmapData = new BitmapData(this._width, this._height, false) bmp.draw(this); this.onMouseMove = function(){ var pColor:Number = bmp.getPixel(_xmouse, _ymouse) var hexColor:String = pColor.toString(16).toUpperCase() while(hexColor.length < 6){ hexColor = "0" + hexColor } var r = Number("0x" + hexColor.substr(0,2)) var g = Number("0x" + hexColor.substr(2,2)) var b = Number("0x" + hexColor.substr(4,2)) testo.text = "0x" + hexColor + ", {r:" + r + ", g:" + g + ", b:" + b + "}" } Remember to import the flash.display.BitmapData class. Create a new bitmapdata instance and assign to it the same dimensions as the current Stage. Then using draw() we're making an exact copy of the _root movieclip into the bitmapdata object. Using: bmp.getPixel( _xmouse, _ymouse).toString(16).toUpperCase() we will have the hexadecimal color value of the pixel at those coordinates. 2. Advanced Example Ok, we can make a copy of everything in a flash movie using this method.. and so we can also send all the pixel color values to an external application in order to recreate a JPEG of the copied movieclip. The bigger problem is that for a movie of 550x400 whe should collect 220000 colors values, and converting the value into an hexadecimal string this means that the final string to send will be 1320000 chars long. This means a lot of data! Another problem is that we can't collect all the pixel color values in a single for loop, otherwise the flash player will dead suddenly! I made this example ( 500x210 ), with a .flv video included. You can also draw some lines above the .swf using mouse
http://www.sephiroth.it/tutorials/flashPHP/print_screen/
CC-MAIN-2014-41
refinedweb
416
50.23
HorseMount(object, int, int, int) Action - mount a horse. void HorseMount( object oHorse, int nAnimate = TRUE, int bInstant = FALSE, int nState = 0 ); Parameters oHorse The horse to be mounted by the caller. nAnimate If TRUE, the full mounting animation sequence is used. bInstant If TRUE, the caller jumps instantly to the mounted position; otherwise, the caller moves to the horse before mounting instantly. Has no effect if full animation is specified. nState An internal variable used by the function itself, that need not be modified. Description This function will cause the calling object to attempt to mount the horse. Remarks This is an Action which must be added to the caller's queue. As with all Actions, it is not executed immediately. The function doesn't require the horse to be assigned or owned prior to mounting. If the caller is unable to mount the horse for any reason, nothing happens. A floating text error message is displayed if the caller is a PC. If you unintentionally command a rider with no appropriate model to mount, strange things will happen, so you may want to check for this in your script using HorseGetCanBeMounted. In cutscenes, time must be allowed for the animation to complete. The default is (6.0 + HORSE_MOUNT_DURATION) because there's a 6 second timeout on getting into position before the animation. It's wise to allow another second or two for action queue lag etc. Really strange bugs can occur if your script doesn't wait long enough for mounting to complete before pressing on with the action, because this function manages fragile AI switches. If you over-ride the defaults, the formula is (6.0 + (HORSE_MOUNT_DURATION + fX3_MOUNT_DELAY) * fX3_MOUNT_MULTIPLE). Requirements #include "x3_inc_horse" Version 1.69 Example // This example actions the PC to mount their assigned horse. #include "x3_inc_horse" void main() { object oPC = GetPCSpeaker(); object oHorse = HorseGetMyHorse(oPC); if(GetIsObjectValid(oHorse)) { AssignCommand(oPC, HorseMount(oHorse)); } } See Also author: Proleric, editor: Mistress
http://palmergames.com/Lexicon/Lexicon_1_69/function.HorseMount.html
CC-MAIN-2014-52
refinedweb
323
57.87
Supreme Court Judgments Subscribe AHMADI, A.M. (J) AHMADI, A.M. (J) PUNCHHI, M.M. CITATION: 1994 SCC Supl. (2) 45 ACT: HEADNOTE: ORDER 1.The appellants were promoted under Rule 6(1) of the Haryana Service of Engineers, Class 11, Public Works Department (Irrigation Branch) Rules, 1970 (hereinafter called 'the Rules'). Since they belonged to the Haryana Public Works Department (Irrigation Branch) they were governed by source 4 of the said rules. Rule 7(3)(ii) is the other relevant rule which we must notice. It lays down the qualifications and says that no person shall be appointed from source 4 under Rule 6(1) unless he possesses the educational qualification set out therein and has the required experience. It further provides that he will have to pass the departmental examination within three years of such promotion otherwise he will be reverted to his original post and his seniority will be determined from the date of his passing the examination. The State contends that the appellants failed to pass the examination within three years as required by the said provision and, therefore, they were liable to be reverted. But it must be realised that ordinarily every year examinations were held twice and, therefore, the appellants would have had six chances to clear the examination within the period of three years. The appellants contend that in the year 1980 the examination ordinarily to be held in the month of November, was not held and it was held as late as August 1982 which examination the appellants successfully cleared. The word 'ordinarily' would indicate that it was not compulsory on the part of the State to hold the examination twice in a year but it must be realised that the appellants have passed the examination in August 1982 whereas they were reverted in October 1982 i.e. after they had cleared the examination. In that view of the matter there was no question of reverting them since they had qualified for promotion to the next higher post even on the terms of Rule 6(1), source 4, read with Rule 7(3)(ii) of the rules. Under the orders of the Court their reversion was stayed. It is an admitted position that they are continuing to serve in the promotion post. We are, therefore, of the opinion since they had passed the examination in August 1982 and since the rules do not say that if they do not clear the examination within three years they will not be entitled to promotion for all times even if they clear the examination subsequently, they became ripe for promotion on clearing the examination held in August 1982 and, therefore, there was no need to revert them and in any case no such need now survives. It is another matter that under Rule 7(3)(ii) the question of seniority may have to be fixed in accordance with that rule but that is not an issue before us. 2. In the result the appeal is allowed accordingly with no order as to costs. 48 Advocates who appeared in this case : G. Ramaswamy, Senior Advocate (E.M.S. Anam and George Poonthothan, Advocates, with him) for the Appellants; V.R. Reddy, Additional Solicitor General, A.S. Nambiar, Senior Advocate (M.A. Firoz, Advocate, with them) for the Respondents. The Judgment of the Court was delivered by R.M. SAHAI, J.- These are four appeals directed against judgment and order of the High Court of Kerala. The appellants are owners or proprietors of hotels and restaurants who were granted FL-3 licences under Rule 13(3) of the Kerala Excise Rules in October 1992 for the year 1992-93. Their licences were cancelled soon thereafter as in November 1992 the Government had taken a policy decision to cancel all Foreign Liquor (Hotel/Restaurant) Licences under Rule 13(3) of the Kerala Foreign Liquor Rules, 1974 to hotels/restaurants/tourist homes during the financial year 1992-93. They challenged the orders in the High Court by way of writ petitions. The petitions were dismissed on February 1, 1993. Two special leave petitions were filed against this order. One was numbered as 2310-17 of 1993 and the other as 3391 of 1993. Some other petitions came up for hearing before the High Court on March 4, 1993 which were decided on March 10, 1993. This order was challenged by Special Leave Petition (Civil) No. 4152 of 1993. In Special Leave Petition Nos. 2310-17 of 1993 and 3391 of 1993 a Bench of this Court on March 1, 1993 passed the following order: "Issue notice both on special leave petitions as well as on petitions for stay. Mr John Joseph on behalf of Mr P.K. Pillai accepts notice on behalf of Respondent 6. Dasti service is permitted additionally. There will be an interim stay which will enure only up to March 31, 1993 in respect of FL-3 licence for the year 1992-93 and the stay will not enure beyond that period. It is open to the petitioners to approach the concerned authorities for renewal of the licence, if they are so entitled and the concerned authorities thereupon shall dispose of the application in accordance with law and on merits." On March 2, 1993 it is alleged that a statement was made on behalf of the State to the Press that the licence of the appellants shall not be renewed. However, since on March 1, 1993 this Court had permitted the appellants to approach the concerned authorities and yet a statement had been issued on behalf of the State Government the appellants approached the High Court, once again, for issue of direction to opposite parties to renew the licences of the appellants for the years 1993-94. This petition was disposed of on March 30, 1993 directing the respondents to dispose of the applications for renewal filed by the appellants as directed by this Court in accordance with law and on merits. In pursuance of this order applications filed by the appellants for renewal of their licence for 1993-94 appears to have been forwarded by the Excise Commissioner to the Board of Revenue which in its turn returned it with instructions to dispose them of in the light of G.O. No. 179/92/TD dated November 9, 1992. On May 24, 1993 the Excise Commissioner rejected the applications for renewal in the light of G.O. dated November 9, 1992 as directed by the Board. This order has been challenged by a separate Special Leave Petition (C) No. 5808 of 1993 in which notice was issued on May 13, 1993. 49 2.Lengthy arguments were advanced by learned counsel for both the sides. One of the questions that was raised was if the appellants have a fundamental right to carry on trade in liquor. This question has been referred to a Constitution Bench by a Bench of three Judges of this Court in Civil Appeal Nos. 4708-12 of 1989. The Civil Appeal Nos. 6043-50 of 1993 arising out of SLP (C) Nos. 2310-17 of 1993; Civil Appeal No. 6051 of 1993 arising out of SLP (C) No. 3391 of 1993; and Civil Appeal No. 6052 of 1993 arising out of SLP (C) No. 4152 of 1993 are therefore directed to be tagged with Civil Appeal Nos. 4708-12 of 1989. 3.The appeal arising out Special Leave Petition (C) No. 5808 of 1993 is however confined to the short question if the opposite parties committed any error of law in rejecting the application filed by appellants for renewal of licence for 1993-94. Two basic attacks were made on the correctness of the order dated May 24, 1993. One, that the policy of the Government is not in consonance with practice. It was claimed that even though the State claimed implementation of directive principles of the Constitution it had liberalised import of arrack from outside the State. It was claimed that this unmistakenly demonstrates that the State was not interested in enforcing the policy of prohibition but only denying the right to carry on business to the appellants for extraneous reasons. The other ground was that the renewal of 381 licences who were similarly situated as the appellants was contrary both to the policy decision of Government and directive principles of the Constitution. It was also urged that the State being in contempt as it not only made statement to the press which was in direct conflict with the order issued by this Court but even rejected the applications filed by the appellants without examining them on merits was not liable to be heard. The State defended both its policy decision and the order. 4.Although we do not propose to decide if any statement was made on behalf of the State Government and it purported to interfere with the courts of justice as sufficient material has not been placed on record but we consider it necessary to record our disapproval of the nature of affidavit filed by the Secretary (Excise) on such an important issue. Paragraph 11 of the counter affidavit is reproduced below : "I submit that the allegation in Para 5 of Special Leave Petition No. 5808 of 1993 that 'the Government have made its mind clear, on the very next day of the order of this Hon'ble Court which was prominently flashed in all Malayalam newspapers in headline news, by the Hon'ble Chief Minister of the State making a statement to the Press that the licences of the petitioners will in no case be renewed for the year 1993-94', is a vague allegation. Since no paper report has been produced, the deponent is not in a position to verify the veracity of the allegation. However, I deny the imputation that the Government had a closed mind." It has been repeatedly emphasised by this Court that averments in the affidavit should be clear and specific. To our dismay it is not only vague but highly unsatisfactory. An officer of such high stature has not cared to discharge his duty with responsibility. He did not come out clearly if the statement was made or not. A very flimsy pretext was advanced that the appellants did not produce newspaper reports. Even this much is not stated that no newspaper published in Malayalam carried such statement. We are constrained to observe that such affidavits instead of assisting in resolving the issues complicate them. It is 50 capable of creating reasonable apprehension in the mind of an ordinary citizen, that the opposite party did not decide their applications on objective considerations but on invisible yet apparent pressure from extraneous source. We stop here and say no more as in our opinion it is not necessary, for purpose of deciding this appeal. 5.The rules do not appear to make any distinction between renewal of a licence and its grant. We find some merit in the submission of the learned Additional Solicitor General that renewal or fresh grant normally is not dealt with by the same yardstick, yet we do not consider it necessary to pronounce on it as validity of the G.O. issued on November 9, 1992 is subject-matter of challenge in other appeals which we have directed to be heard along with other appeals pending before Constitution Bench. As stated earlier we are concerned in this appeal only with correctness of the order dated May 24, 1993. The opposite parties have rejected the applications filed by the petitioners on the ground that the State Government having taken a policy decision on November 9, 1992 not to issue licences the appellants were not entitled to claim renewal. The order was attempted to be justified by the learned Additional Solicitor General as according to him the appellants formed a separate class inasmuch as they were issued licences in 1992-93 and, therefore, they could not claim to be in the same group as other licensees who were operating from before. According to him since there were two groups or class of persons, one, who were operating from before and the other who were granted licences in the year 1992, the opposite party did not commit any error of law in rejecting the applications of appellants or acted discriminately in renewing the licences of others. We again do not propose to decide this issue in detail or examine it extensively as the validity of the G.O. has been referred to the Constitution Bench. Suffice it to say that the classification which can be sustained must have a reasonable nexus with objective sought to be achieved by the impugned action. The reason for not renewing the licence of the appellants was the prohibition policy that the State is envisaging to enforce. We may agree that this is a valid ground for reducing the number of licensees in the State. We may also agree that such steps can be taken in stages and not at one stroke, but the facts are otherwise. As stated earlier the consumption of liquor has gone up. The volume of imported arrack has been enhanced. Therefore except for the appellants who are 21 in number the State could not point out any circumstance which could establish that the policy of prohibition was being enforced or implemented in the State. True, that some public- interested persons are agitating but the validity of the State action has to be judged on positive steps taken by the State for enforcing the policy. But in the affidavits filed by the State no material has been brought on record to show that any concrete step has been taken in this regard. Moreover the appellants are hoteliers who were granted licence for promoting tourism. No figure has been furnished about traffic in these hotels. The agitation must be against consumption of liquor. How is the State curtailing it by permitting import of arrack has not been explained. In fact it is not disputed in the affidavit filed by the Excise Secretary that import was permitted under new Abkari policy adopted from April 1, 1993 as the State presumed that contractors were purchasing spirit clandestinely and such clandestine imports were adversely affecting State revenue. The affidavit asserts that it "was to get over the above problem in a logical manner that Government 51 desired to make a realistic assumption of consumption". So on the one hand the Government is taking the realistic view by permitting import of arrack which is consumed more by common man and its quota in 1992-93 was one crore bulk litres and on the other cancelling licence of 21 persons in the entire State of Kerala who were granted licence for promoting tourism as it would help in achieving the prohibition policy. We do not comment any further on it. The appellants who were granted licence in 1992-93 and those who are granted licence and are operating from before are hoteliers and are required under rules to conform to two star hotel standard. Both are required to promote tourism. In all respects their licences are same. Further the State does not appear to follow a consistent and uniform policy. In June 1992 it announced its intention not to issue any licence, 'afresh' from September 18, 1991 but it did not adhere to it and within a month it issued another order in February 1992 deciding to grant the privilege of selling liquor for promotion of tourism. In November 1992 it decided to cancel all licences issued in current year. If the licences issued in 1993-94 to licensees operating from before and to the appellants were issued afresh as the rules do not make distinction between renewal and fresh grant then all licensees were on same footing and the attempt to pick and choose the appellants, in our opinion, was contrary to rules without any valid justification. 6.For these reasons appeals arising out of Special Leave Petition Nos. 2310-17, 3391 and 4152 of 1993 are directed to be tagged with Civil Appeal Nos. 4708-12 of 1989. 7.Civil Appeal No. 6042 of 1993 arising out of SLP (Civil) No. 5808 of 1993 is allowed. The respondents are restrained from interfering in the carrying on of appellants as FL-3 licensees subject to complying with other conditions and payment of annual rental proportionately till their application for grant of licence are decided on merits as directed by this Court on March 1, 1993 without adverting to order dated November 9, 1992 or till the policy decision is enforced uniformally. Parties have to bear their own costs. Back
http://www.advocatekhoj.com/library/judgments/index.php?go=1993/october/58.php
CC-MAIN-2018-43
refinedweb
2,772
58.82
Statistics › Distributions › Continuous › Angle › CDF Evaluates the Angle distribution CDF.Controller: CodeCogs Contents This component evaluates the CDF of the Angle distribution with given arguments. x is an angle between 0 and , corresponding to the angle made in an n dimensional space, between a fixed line passing through the origin, and an arbitrary line that also passes through the origin, which is specified by choosing any point on the n dimensional sphere with uniform probability. The Angle CDF is defined by the following formula Example 1 - In the given example, the CDF is evaluated using a fixed value for n, equal to 4, while the first argument x takes values from 0 up to 2 with a step equal to 0.2. The maximum number of precision digits, implicitly set to 17, may be changed through the <em> PRECISION </em> define. #include <codecogs/stats/dists/continuous/angle/cdf.h> #include <iostream> #include <iomanip> #define PRECISION 17 int main() { std::cout << "The values of the Angle CDF with n = 4 and"; std::cout << std::endl; std::cout << "x = {0, 0.2, 0.4, ..., 1.8, 2} are" << std::endl; std::cout << std::endl; for (double x = 0; x < 2.1; x += 0.2) { std::cout << std::setprecision(2); std::cout << "x = " << std::setw(3) << x << " : "; std::cout << std::setprecision(PRECISION); std::cout << Stats::Dists::Continuous::Angle::CDF(x, 4); std::cout << std::endl; } return 0; } Output The values of the Angle CDF with n = 4 and x = {0, 0.2, 0.4, ..., 1.8, 2} are x = 0 : 0 x = 0.2 : 0.001684123127684655 x = 0.4 : 0.013153186649778228 x = 0.6 : 0.042647304023738355 x = 0.8 : 0.095560829038801032 x = 1 : 0.1735907059637424 x = 1.2 : 0.27446855935925973 x = 1.4 : 0.39231882068278456 x = 1.6 : 0.51858635136931963 x = 1.8 : 0.64338711110041569 x = 2 : 0.75706863044011896 References - John Burkardt's library of statistical C++ routines, Parameters Returns - the value of the Angle CDF evaluated with the given arguments Authors - Lucian Bentea (September 2005) Source Code Source code is available when you agree to a GP Licence or buy a Commercial Licence. Not a member, then Register with CodeCogs. Already a Member, then Login.
http://www.codecogs.com/library/statistics/distributions/continuous/angle/cdf.php
CC-MAIN-2018-43
refinedweb
363
57.47
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Creating module problem Hi In Odoo 8 I am trying to build a module. I have this: __init__.py: from . import vrijeme_racun __openerp__.py: { 'name': 'Vrijeme Racun', 'version': '1.0', 'category': 'Tools', 'summary': 'Vrijeme, Racun', 'description': """ Vrijeme za racune ================================ Dodaje vrijeme na racun. """, 'depends' : ['base'], 'data' : [], 'images': [], 'demo': [], 'installable' : True, 'application': True, } vrijeme_racun.pj: from openerp import models, fields, api class override_date_invoice(models.Model): _inherit = 'account.invoice' date_invoice = fields.Datetime(string = 'Invoice Date', readonly = True, states = {'draft': [('readonly', False)]}, index=True,help=’Keep empty to use the current date’,copy = False) When trying to install I get error: File "/var/lib/odoo/.local/share/Odoo/addons/8.0/vrijeme_racun/__init__.py", line 1, in <module> from . import vrijeme_racun ImportError: cannot import name vrijeme_racun I am new at this so if you can help.Thanks in __init__ try: import vrijeme_racun (instead from . import ...) check file name : vrijeme_racun.pj should be vrijeme_racun.py (this might be a typo here) also.. in file class should be account_invoice , not override_date_invoice in __openerp__.py : application: False (cause this is just a modification not a real app module!) and a suggestion... leave date_invoice as is, and simply add new field date_validate : fields.datetime (HR lokalizacija trazi jos dosta toga ..) it ads datetime, but time is all zeroes :) ? well then you need to override one more method in account_invoice class in order to write current time of validating the invoice or add new method to confirm_invoice workflow...
https://www.odoo.com/forum/help-1/question/creating-module-problem-87835
CC-MAIN-2017-09
refinedweb
269
53.47
In Python, we can use the numpy.where() function to select elements from a numpy array, based on a condition. Not only that, but we can perform some operations on those elements if the condition is satisfied. Let’s look at how we can use this function, using some illustrative examples! Table of Contents Syntax of Python numpy.where() This function accepts a numpy-like array (ex. a NumPy array of integers/booleans). It returns a new numpy array, after filtering based on a condition, which is a numpy-like array of boolean values. For example, condition can take the value of array([[True, True, True]]), which is a numpy-like boolean array. (By default, NumPy only supports numeric values, but we can cast them to bool also) For example, if condition is array([[True, True, False]]), and our array is a = ndarray([[1, 2, 3]]), on applying a condition to array ( a[:, condition]), we will get the array ndarray([[1 2]]). import numpy as np a = np.arange(10) print(a[a <= 2]) # Will only capture elements <= 2 and ignore others Output array([0 1 2]) NOTE: The same condition condition can also be represented as a <= 2. This is the recommended format for the condition array, as it is very tedious writing it as a boolean array But what if we want to preserve the dimension of the result, and not lose out on elements from our original array? We can use numpy.where() for this. numpy.where(condition [, x, y]) We have two more parameters x and y. What are those? Basically, what this says is that if condition holds true for some element in our array, the new array will choose elements from x. Otherwise, if it’s false, elements from y will be taken. With that, our final output array will be an array with elements from x wherever condition = True, and elements from y whenever condition = False. Note that although x and y are optional, if you specify x, you MUST also specify y. This is because, in this case, the output array shape must be the same as the input array. NOTE: The same logic applies for both single and multi-dimensional arrays too. In both cases, we filter based on the condition. Also remember that the shapes of x, y and condition are broadcasted together. Now, let us look at some examples, to understand this function properly. Using Python numpy.where() Suppose we want to take only positive elements from a numpy array and set all negative elements to 0, let’s write the code using numpy.where(). 1. Replace Elements with numpy.where() We’ll use a 2 dimensional random array here, and only output the positive elements. import numpy as np # Random initialization of a (2D array) a = np.random.randn(2, 3) print(a) # b will be all elements of a whenever the condition holds true (i.e only positive elements) # Otherwise, set it as 0 b = np.where(a > 0, a, 0) print(b) Possible Output [[-1.06455975 0.94589166 -1.94987123] [-1.72083344 -0.69813711 1.05448464]] [[0. 0.94589166 0. ] [0. 0. 1.05448464]] As you can see, only the positive elements are now retained! 2. Using numpy.where() with only a condition There may be some confusion regarding the above code, as some of you may think that the more intuitive way would be to simply write the condition like this: import random import numpy as np a = np.random.randn(2, 3) b = np.where(a > 0) print(b) If you now try running the above code, with this change, you’ll get an output like this: (array([0, 1]), array([2, 1])) If you observe closely, b is now a tuple of numpy arrays. And each array is the location of a positive element. What does this mean? Whenever we provide just a condition, this function is actually equivalent to np.asarray.nonzero(). In our example, np.asarray(a > 0) will return a boolean-like array after applying the condition, and np.nonzero(arr_like) will return the indices of the non-zero elements of arr_like. (Refer to this link) So, we’ll now look at a simpler example, that shows us how flexible we can be with numpy! import numpy as np a = np.arange(10) b = np.where(a < 5, a, a * 10) print(a) print(b) Ouptut [0 1 2 3 4 5 6 7 8 9] [ 0 1 2 3 4 50 60 70 80 90] Here, the condition is a < 5, which will be the numpy-like array [True True True True True False False False False False], x is the array a, and y is the array a * 10. So, we choose from an only if a < 5, and from a * 10, if a > 5. So, this transforms all elements >= 5, by multiplication with 10. This is what we get indeed! Broadcasting with numpy.where() If we provide all of condition, x, and y arrays, numpy will broadcast them together. import numpy as np a = np.arange(12).reshape(3, 4) b = np.arange(4).reshape(1, 4) print(a) print(b) # Broadcasts (a < 5, a, and b * 10) # of shape (3, 4), (3, 4) and (1, 4) c = np.where(a < 5, a, b * 10) print(c) Output [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[0 1 2 3]] [[ 0 1 2 3] [ 4 10 20 30] [ 0 10 20 30]] Again, here, the output is selected based on the condition, so all elements, but here, b is broadcasted to the shape of a. (One of its dimensions has only one element, so there will be no errors during broadcasting) So, b will now become [[0 1 2 3] [0 1 2 3] [0 1 2 3]], and now, we can select elements even from this broadcasted array. So the shape of the output is the same as the shape of a. Conclusion In this article, we learned about how we can use the Python numpy.where() function to select arrays based on another condition array. References - SciPy Documentation on Python numpy.where() function
https://www.journaldev.com/37898/python-numpy-where
CC-MAIN-2021-04
refinedweb
1,032
72.87
Watchdog not watchdogging? Last night the smart alarm clock I'm working on froze, and thus didn't wake me up. I quickly added a watchdog this morning (was on the to-do list). Then the radar motion sensor had a wire issue, and got disconnected and reconnected. The device froze again. No problem, I thought, the watchdog is probably already doing its thing. But it stayed frozen. Are there limits to the watchdog function of the Arduino Nano? If you use the standard bootloader from Arduino the WD is dsabled, flash with optiboot - alowhum Plugin Developer last edited by alowhum Ah! So that's it! I'll look into it. Thanks for the tip! Is there any way to activate it without resorting to a different bootloader? - mtiutiu Hardware Contributor last edited by mtiutiu It's not portable but for the AVR architecture you can use: void before() { wdt_disable(); wdt_enable(WDTO_8S); // 8s timeout } void loop() { wdt_reset(); } Other watchdog options for the AVR architecture here. Be aware that some of the MySensors functions (sleep and according to below link also send) will change the watchdog settings, so you'll have to set back the watchdog after using these functions. Enhancement request: EDIT: no, that's not what issue 1160 is about. 1160 is about setting up the watchdog so that sketch developers don't need to add custom code to enable the watchdog. Sorry for confusing everyone. - alowhum Plugin Developer last edited by alowhum @mfalkvidd Could you elaborate a bit on what MySensors does to the watchdog? Does it disable it? The above seems to imply I have to re-enable it after every send command? That sounds almost unbelievable. On github a quick search for "wdt_disable" in the code only reveals disabling it when sleep() is called? There I can understand its presence. The node mentioned in the first post already had an AVR watchdog enabled in the manner described. It doesn't use sleep. Still, it didn't seem to reboot. So maybe it's true, and I just haven't found the code in the search? @alowhum I don't have easy access to the code atm, but, as I read it, the change request @mfalkvidd refers to suggests to add a wdt reset to the send() function. It doesn't say the current send() fiddles with the wdt. I've done some digging. Sending I_DEBUG message to the node will call hwCPUFrequency which does things to the watchdog, but it looks like the watchdog settings are saved and then restored afterwards. Sleep uses the watchdog to wake up. When the node is sleeping on timer, it needs to use the watchdog because all other clocks are stopped when sleeping so there is no other way to keep time. I guess it would be possible to save and restore the watchdog settings in hwPowerDown, just like in hwCPUFrequency?That should allow the user to set their own watchdog which is a good start. EDIT: MySensors already saves and restores the watchdog during sleep. Reference link. I guess it would be even better if it was possible to enable the watchdog when not sleeping, using a define (like suggested in the linked github issue). But I am not sure endless reboots (which is a risk with the watchdog enabled) is desirable, so that feature might need to be off by default. I also don't know if it is possible to detect the reset reason and handle boot differently (which might be desirable). All this is for AVR only. I am not sure about the other platforms supported by MySensors. And thanks Yveaux for clarifying he send() behavior. I was very confused by that statement in the github issue. I guess resetting the watchdog in wait() and any other long-running function would make sense. But why would send() take so long time that the watchdog might be tripped? @mfalkvidd I suppose the best place to kick the watchdog in the MySensors stack would be the doYield(), as it also keeps the esp8266 watchdog alive. @yveaux seems like it is supposed to kick the watchdog already: @remark Internally it will call yield, kick the watchdog and update led states. calls hwWatchdogReset which, on avr, resets the watchdog What about a hardware watchdog such as this 555 based watchdog. A hardware based watchdog seems a bit more reliable. @dbemowsk said in Watchdog not watchdogging?: What about a hardware watchdog such as this 555 based watchdog. A hardware based watchdog seems a bit more reliable. the watchdog in AT328 is reliable, do you have documentation behind this comment that it's not? @bjacobse I have just seen things in forums in the past like this where some users explain about the possibility of the microcontroller with the software based watchdog timer hanging to the point where the MC's internal watchdog hangs too. That was my basis for the comment. This link is misleading which causes misunderstanding The ATmega328p have watchdog timer build in, which if enabled uses an internal RC oscillation, and this will for sure reset the MCU, if the timer isn't reset by the user program before time-out. it works rock stable... Something that can cause unreliability is that brown-out detecting of power voltage, will also cause a shutdown, and if the designer isn't aware of stable power voltage, then you get unstable MCU system... Below snippet is from above ATmega328 spec: Please note there are 3 Operating modes: – Interrupt – System Reset – Interrupt and System Reset. [EDIT] below removed as it was not correct, comment from mfalkvidd is correct, that mysensors is using WDT for wakup-call @bjacobse no that is not the reason. In sleep mode, MySensors uses the watchdog to wake up every 8 seconds to increment a counter and immediately go to sleep. This is the most power efficient way to sleep for a set time. Because no other clocks are active in sleep mode, it is impossible to keep time without using the watchdog. I have now checked the code for sleep, and the watchdog settings are saved and restored: So it seems like it should be possible for sketch developers to define their own watchdog, which will be active att all times except: - briefly after receiving an I_DEBUG message - during sleep() @alowhum would you mind posting your sketch? @mfalkvidd This just shows how clever/smart the mysensors code are developed This is an old issue related to bootloader problems on some nano/mega boards. Try this See this project, there have solution for hardware based watch dog, can be easily transform to be usable for Arduino. @tiana Thanks, but I'm trying to avoid adding more hardware as I am creating nodes that I want beginners to be able to make easily. @mfalkvidd Here's the sketch you requested. Work in progress.. /* * * GENTLE ALARM CLOCK * * This is a smart alarm clock: * - It takes into account your sleep cycle and tries to wake you up at an opportune moment in that cycle. * It does this in the 30 minutes before the alarm time you set. So if you set 8AM, it will find the best moment in the 7:30AM till 8AM window. * - Additionally, it has wake-up light functinality to slowly wake you up. If that doesn't do it, it will sound an audio alarm. * * Hardware: * - Arduino Nano * - Arduino Nano wireless expansion board * - NRF24 radio (to connect to MySensors controller) * - OLED screen * - Rotary encoder knob (KY-040) * - Motion sensor (radar type works well, but you can also use PIR) * - Bright LED. It should support setting varying brightness levels. * - Buzzer (could also trigger an MP3 player, or a voice recorder - have fun!) * * * How it works: * - It gets the clock time from the controller. * - You can set the alarm time (and enable or disable the alarm) with the rotary knob. * * - Each minute it detects and sends the total seconds of movement during the previous minute. * You can use a motion sensor of your choice for this. Point it at your sleepingplace in the bed. * * * SETTINGS */ byte motionThreshold = 8; // If the total motion count over the past 5 minutes is this or higher, then the alarm will start ringing. // Do you want encrypted communication? If you do, then all devices in your network need to use the same password. //#define MY_ENCRYPTION_SIMPLE_PASSWD "changeme" /* * * * * * * TO-DO * - Decide on a measurement value to send to the controller. "custom" would make sense, but may not be as widely supported by controllers. * - Toggle the alarm on-off status from the controller. This allows the workday logic to move to the controller. * - Turn off the screen (or show less data on it) during the night. It lower screen brightness somehow. * - Fix bug where audio alarm can stay on despite rotating the knob to turn it off. * - Check if internal clock is somewhat accurate. * - Make rotating knob work better. Maybe use a library.. * - Add snooze? * * NICE-TO-HAVE * - Calculate the work days (mon-fri) on-device, and offer a toggle to only sound the alarm on those days. * - Maybe add another separate button for enabling/disabling the alarm. * - make the minute loop counter work by adding up 60 loops of a second. But might make the clock very imprecise. * - Sensitivity dial in controller */ // // SETTINGS // // and max power can cause issues on cheap Chinese NRF24 radios. // security . // Enable MySensors debug output to the serial monitor, so you can check if the radio is working ok. //#define MY_DEBUG // MySensors devices form a mesh network by passing along messages for each other. Do you want this node to also be a repeater? #define MY_REPEATER_FEATURE // Add or remove the two slashes at the beginning of this line to select if you want this sensor to act as a repeater for other sensors. If this node is on battery power, you probably shouldn't enable this. #define ONE_SECOND 1000 // How many milliseconds does a second last? #define LOOPDURATION 60000 // The main loop runs every x milliseconds. It's like a minute counter on a clock. //#define MEASUREMENT_INTERVAL 5 // After a number of loops we start again. // LIBRARIES (in the Arduino IDE go to Sketch -> Include Library -> Manage Libraries to add these if you don't have them installed yet.) #include <MySensors.h> // MySensors library #define HAS_DISPLAY // Remove this line if you are not using an OLED screen on the node. #ifdef HAS_DISPLAY #define INCLUDE_SCROLLING 0 #define OLED_I2C_ADDRESS 0x3C #include <SSD1306Ascii.h> // Simple drivers for the OLED screen. #include <SSD1306AsciiAvrI2c.h> SSD1306AsciiAvrI2c oled; #endif // Clock variables byte hours = 0; byte minutes = 0; uint32_t unixTime = 0; // The lines below may be useful for a future feature. // leap year calulator expects year argument as years offset from 1970 //#define LEAP_YEAR(Y) ( ((1970+(Y))>0) && !((1970+(Y))%4) && ( ((1970+(Y))%100) || !((1970+(Y))%400) ) ) //static const uint8_t monthDays[]={31,28,31,30,31,30,31,31,30,31,30,31}; // API starts months from 1, this array starts from 0 // Alarm variables #define SADEH_MOTION_THRESHOLD 2 // How many movements a minute will we consider as enough to officially count as 'light sleep' boolean alarmSet = false; // Has the user enabled the alarm? boolean alarmSearching = false; // Set to true if we are in the 30 minutes before the deadline. We are waiting for the best exact moment to wake up the user now. boolean alarmRinging = false; // Wake up! Set to true if the opportune moment has been found. Light (and later audio) should be on now. byte alarmHours = 0; byte alarmMinutes = 0; byte displayHours = 0; byte displayMinutes = 0; // Speaker #define SPEAKER_PIN 7 // The pin where the speaker is connected. // Rotary encoder knob #define ROTARY_CLK_PIN A0 // Connected to CLK on the KY-040 rotary encoder #define ROTARY_DT_PIN A1 // Connected to DT on the KY-040 rotary encoder #define ROTARY_SWITCH_PIN A2 // Connected to SW on the KY-040 rotary encoder int previousRotaryValue; // The previous value read from the rotary encoder, to compare against. boolean rotarySwitchPressed = 0; // The state of the push button on the rotary encoder. boolean lastKnobDirection = 1; // 0 = Counter clockwise, 1 = clockwise. // Sadeh algorithm variables byte consecutiveSleepMinutesRadar = 0; boolean detectedREM = false; byte minutesSinceREM = 0; // Motion sensor details #define MOTION_SENSOR_PIN 3 // On what pin is the radar sensor connected? // LED details #define LED_PIN 4 #define LED_PWM_LENGTH 50 // MICROseconds that each PWN up-and-down phase lasts. You may have too fine-tune this for your LED. int brightness = 0; byte brightnessPercentage = 0; // Mysensors settings. #define RADIO_DELAY 100 // Milliseconds between sending radio signals. This keeps the radio happy. #define CHILD_ID_STATUS 0 // Child ID of the sensor #define CHILD_ID_MOTION_SENSOR 1 // Child ID of the sensor #define CHILD_ID_SET_ALARM 2 // Allows the alarm to be turned on or off from the controller. #define CHILD_ID_RINGING 3 // Allows the controller to set other devices in the room to turn on when the alarm clock is ringing. #define CHILD_ID_SENSITIVITY 4 // Set the movement threshold from the controller interface. MyMessage statusMessage(CHILD_ID_STATUS,V_TEXT); // Sets up the message format that we'll be sending to the MySensors gateway later. The first part is the ID of the specific sensor module on this node. The second part tells the gateway what kind of data to expect. MyMessage motionMessage(CHILD_ID_MOTION_SENSOR, V_TEMP); // Sets up the message format that we'll be sending to the MySensors gateway later. MyMessage dimmerMessage(CHILD_ID_RINGING, V_PERCENTAGE);// Create a dimmer that can be used on the controller to set the value of, for example. another lamp. MyMessage relayMessage(CHILD_ID_SET_ALARM, V_STATUS); // Allow the controller to enable or disable the alarm void presentation() { // send the sketch version information to the gateway and Controller sendSketchInfo(F("Gentle alarm clock"), F("1.6")); wait(RADIO_DELAY); // Register all child sensors with the gateway present(CHILD_ID_STATUS, S_INFO, "Status"); wait(RADIO_DELAY); // General status of the device, as well as the sleep phase present(CHILD_ID_MOTION_SENSOR, S_TEMP, "Motions"); wait(RADIO_DELAY); // Total motion count for the past five minutes present(CHILD_ID_RINGING, S_DIMMER, "Dimmer"); wait(RADIO_DELAY); // The level of the built-in wake-up LED is also mirrored to the controller. That way you could perhaps use some automation to set another lamp in the room to also slowly rise in brightness. present(CHILD_ID_SET_ALARM, S_BINARY, "Alarm"); wait(RADIO_DELAY); // Allow the controller to turn the alarm on or off. present(CHILD_ID_SENSITIVITY, S_DIMMER, "Sensitivity"); wait(RADIO_DELAY); // Set the motion count threshold that will trigger alarm. } void setup() { // Output updates over the serial port Serial.begin(115200); while (!Serial) {} // Is this really necessary? Serial.println(F("Hello world!")); pinMode(MOTION_SENSOR_PIN, INPUT); // Set motion sensor pin as input // LED pinMode(LED_PIN, OUTPUT); // Set the LED pin as output analogWrite(LED_PIN, 0); // Let's test the LED light wait(2000); analogWrite(LED_PIN, 255); wait(2000); analogWrite(LED_PIN, 0); // rotary encoder knob pinMode (ROTARY_CLK_PIN,INPUT); // Rotary encoder clock pin pinMode (ROTARY_DT_PIN,INPUT); // Rotary encoder data pin pinMode(ROTARY_SWITCH_PIN,INPUT_PULLUP); // Rotary encoder switch pin previousRotaryValue = digitalRead(ROTARY_CLK_PIN); rotarySwitchPressed = digitalRead(ROTARY_SWITCH_PIN); #ifdef HAS_DISPLAY // Start the display (if there is one) oled.begin(&Adafruit128x64, OLED_I2C_ADDRESS); oled.setFont(Adafruit5x7); oled.ssd1306WriteCmd(SSD1306_DISPLAYON); oled.setScroll(false); oled.setCursor(0,0); oled.print(F("ALARM CLOCK")); #endif // Check if there is a network connection if(isTransportReady()){ Serial.println(F("Connected to gateway!")); send(statusMessage.setSensor(CHILD_ID_STATUS).set( F("Hello world") )); wait(RADIO_DELAY); send(dimmerMessage.set(0)); wait(RADIO_DELAY); // Reset the dimmer level to 0. requestTime(); wait(RADIO_DELAY); // Request the current time from the controller. //Serial.print(F("Time: ")); Serial.println(controllerTime); #ifdef HAS_DISPLAY // Show connection icon on the display oled.setCursor(90,0); oled.print(F("W")); #endif }else{ Serial.println(F("! NOCONNECTION")); #ifdef HAS_DISPLAY oled.setCursor(90,0); oled.print(F(" ")); #endif } // Get last known preferences from onboard storage alarmHours = loadState(1); // To what hour was the alarm set? if(alarmHours > 24){alarmHours = 8;} alarmMinutes = loadState(2); // To which minure was the alarm set? if(alarmMinutes > 55){alarmMinutes = 0;} alarmSet = loadState(3); // Was the alarm set to on or off? if(alarmSet > 1){alarmSet = 1;} if(loadState(4) != 255){ motionThreshold = loadState(4); // What is the desired sensitivity } wdt_enable(WDTO_8S); // Starts the watchdog timer. If it is not reset once every few seconds, then the entire device will automatically restart. } void loop() { /* the main loop has four levels: * - Continuously: * - - Check if the rotary knob has been turned. * - Flickering: this runs at the rate of 1000 times per second. * - - It is used to PWM the LED. * - Flutter: this runs once a second. It does things like * - - Increase the LED brightness once the alarm is in ringing mode, and * - - Check if the motion sensor is in its active state. * - Heartbeat: this runs once a minute. * - - It takes note of the total movement over the last minute, and sends along the data. * - - It checks if it's time to wake up the user. * - - It also updates the minute counter on the display * */ // Main loop variables static unsigned long lastLoopTime = 0; // Holds the last time the main loop ran. static int loopCounter = 0; // Count how many heartbeat loops (minutes) have passed. // Creating variables to track sleep. static byte motionCounter = 0; // The movement count for the last minute. static byte movementsList[5]; // An arrray (list) that stores the last 5 motionCounter values. static int motionTotal = 0; // Total movement count for the past 5 minutes // Rotary knob static int rotaryValue; static boolean takeStep = 0; rotaryValue = digitalRead(ROTARY_CLK_PIN); if (rotaryValue != previousRotaryValue){ // Check if the rotary encoder knob is rotating alarmRinging = false; // In case the alarm was ringing, it should be turned off now. noTone(SPEAKER_PIN); // In case the alarm was still making sound somehow, turn it off. takeStep = !takeStep; // On every loop though this gets changed into its opposite. So 0 -> 1 -> 0 -> 1 etc if(takeStep){ // If we have already ignored a step, then go further. // Check in which direction it's rotating. if (digitalRead(ROTARY_DT_PIN) != rotaryValue) { Serial.println(F("Counterclockwise")); if(lastKnobDirection == 0){ // Rotating left rapidly decreases the alarm time. if(alarmMinutes <= 30){ alarmMinutes = 30; if(alarmHours == 0){alarmHours = 23;}else{alarmHours--;} } if(alarmMinutes >= 30){ alarmMinutes = 30; } Serial.print(F("New alarm hours: ")); Serial.println(alarmHours); lastKnobDirection = 1; } lastKnobDirection = 0; } else { // Rotating right slowly increases the alarm time. Serial.println (F("Clockwise")); alarmMinutes = alarmMinutes + 5; if(alarmMinutes > 55){ alarmMinutes = 0; alarmHours = alarmHours + 1; if(alarmHours > 23){alarmHours = 0;} } Serial.print(F("New alarm: ")); Serial.print(alarmHours); Serial.print(F(":")); Serial.println(alarmMinutes); lastKnobDirection = 1; } } #ifdef HAS_DISPLAY updateClockDisplay(); #endif } previousRotaryValue = rotaryValue; // Check if the rotary knob button is being pressed. // boolean switchPosition = digitalRead(ROTARY_SWITCH_PIN); if (digitalRead(ROTARY_SWITCH_PIN) == 1 && rotarySwitchPressed == 0){ rotarySwitchPressed = 1; Serial.println(F("Button pressed")); if(alarmSearching || alarmRinging){ turnOffRinging(); }else{ alarmSet = !alarmSet; // If the alarm is not ringing, then pressing the button turns the alarm setting on or off completely. } #ifdef HAS_DISPLAY updateClockDisplay(); #endif wait(50); } else if (digitalRead(ROTARY_SWITCH_PIN) == 0){ rotarySwitchPressed = 0; } #ifdef HAS_DISPLAY oled.set1X(); oled.setCursor(110,0); // In the top-right corner.. if(!alarmRinging){ // If the alarm is not going off, show the motion count for this minute. oled.print(motionCounter); oled.println(F(" ")); }else if (brightness < 255){ // Otherwise, show the brightness level of the LED (at least intil it reaches 255). oled.setCursor(104,0); oled.print(F("*")); oled.print(brightnessPercentage); oled.println(F(" ")); }else{ oled.setCursor(104,0); oled.println(F(" ")); } #endif // // FLICKERING // Software pulse width modulation (PWM) for the high brightness LED, using modulo. // if(alarmRinging == true){ unsigned int pulsy = map(brightness,0,255,0,LED_PWM_LENGTH); // Depending on the intended brightness level, the LED should be on during a proportianl period of the PWM time. if(micros() % LED_PWM_LENGTH < pulsy ){ digitalWrite(LED_PIN, 1); //Serial.print(F("|")); // For debugging }else{ digitalWrite(LED_PIN, 0); //Serial.print(F(".")); // For debugging } }else{ digitalWrite(LED_PIN, 0); } // // FLUTTER // Runs every second. By counting how often this loop has run (and resetting that counter back to zero after a number of loops), it becomes possible to schedule all kinds of things without using a lot of memory. // static boolean loopDone = false; // used to make sure the 'once every millisecond' things only run once every millisecond (or 2.. sometimes the millis() function skips a millisecond.); // Allow the next loop to only run once. This entire construction saves memory by not using a long to store the last time the loop ran. if( (millis() % ONE_SECOND) > ONE_SECOND - 4 && loopDone == true ) { loopDone = false; } // Main loop to time actions. if( (millis() % ONE_SECOND) < 4 && loopDone == false ) { // this module's approach to measuring the passage of time saves a tiny bit of memory. loopDone = true; //if (millis() - lastLoopTime > ONE_SECOND) { //lastLoopTime = millis(); // this variable is now already used by the minute counter. wdt_reset(); // Reset the watchdog timer. If the device crashes, then the watchdog won't be reset, and this will in turn cause it to reset the entire device. // Check if the movement sensor is seeing movement. boolean motionState = digitalRead(MOTION_SENSOR_PIN); if (motionState == HIGH) { motionCounter++; Serial.print(F("~")); Serial.print(motionCounter); } // WAKE UP! Every second we make the LED a little brighter. if(alarmRinging == true){ brightness++; brightnessPercentage = map(brightness, 0, 255, 0, 100); send(dimmerMessage.set(brightnessPercentage)); Serial.print(F("Sending dimmer brightness: *")); Serial.println(brightness); }else{ brightness = 0; } // WAKE UP! Make noise. if(alarmRinging == true && brightness > 255 && brightness < 386){ Serial.print(F("**")); Serial.println(brightness); if (brightness % 2 == 0){ // Makes noise every other second. tone(SPEAKER_PIN, 1000); }else{ noTone(SPEAKER_PIN); } } // WAKE UP! The user should be awake by now. if(brightness == 765){ // Turn of the light and the alarm //alarmSearching = false; turnOffRinging(); } } // // HEARTBEAT // Runs every minute. By counting how often this loop has run (and resetting that counter back to zero after a number of loops), it becomes possible to schedule all kinds of things without using a lot of memory. // if (millis() - lastLoopTime >= LOOPDURATION) { lastLoopTime = millis(); loopCounter++; if(loopCounter > 5){ Serial.print(F("loopCounter ")); Serial.println(loopCounter); loopCounter = 1; requestTime(); Serial.print(F("Minutes since REM phase: ")); Serial.println(minutesSinceREM); } // Fun but incomplete stuff to detect REM phases if(detectedREM && minutesSinceREM < 250){ minutesSinceREM++; } // Update the clock unixTime += 60; // Add a minute to the clock. Maybe adding 59 is better? breakUpTime(unixTime); // Turn the unix time into human-readable time updateClockDisplay(); // Save the alarm details. (They will only be overwritten if they have changed). saveState(1, alarmHours); saveState(2, alarmMinutes); saveState(3, alarmSet); saveState(4, motionThreshold); // ALARM ACTIVE CHECK - Should we start the wake up procedure? if(alarmSet == true && hours == alarmHours && minutes == alarmMinutes){ Serial.println(F("ALARM SET - Starting wake up procedure: alarm now in searching phase")); alarmSearching = true; // Here we enable searching for the right moment to wake up. send(statusMessage.setSensor(CHILD_ID_STATUS).set( F("Starting wake up") )); wait(RADIO_DELAY); } // If there was movement for 5 minutes, then the user is sleeping lightly, and its now time to WAKE UP! // if(alarmSet == true && alarmSearching == true && movementsList[1] >= SADEH_MOTION_THRESHOLD && movementsList[2] >= SADEH_MOTION_THRESHOLD && movementsList[3] >= SADEH_MOTION_THRESHOLD && movementsList[4] >= SADEH_MOTION_THRESHOLD && movementsList[5] >= SADEH_MOTION_THRESHOLD){ if(alarmSet == true && alarmSearching == true && motionTotal >= motionThreshold){ // if there was a high movement count in the last 5 minutes, then the user is in a light sleep moment. alarmSearching = false; // We are no longer looking for the right moment to ring the alarm, since we just found it. alarmRinging = true; // Time to really ring the alarm! } // If we are in the alarm active phase, but there was not a good moment to wake up the user, then at the end just sound the alarm at the defined alarm time. Like a normal alarmclock. if(alarmSet == true && alarmSearching == true && hours == displayHours && minutes == displayMinutes){ alarmRinging = true; // maybe set the brightness high here too? } // SADEH algorithm. All this is not really required. static byte consecutiveSleepMinutesMotion = 0; static byte consecutiveAwakeMinutesMotion = 0; if(motionCounter < SADEH_MOTION_THRESHOLD){ consecutiveAwakeMinutesMotion = 0; if(consecutiveSleepMinutesMotion < 250){ consecutiveSleepMinutesMotion++; }else if (consecutiveSleepMinutesMotion == 15){ detectedREM = true; send(statusMessage.setSensor(CHILD_ID_STATUS).set( F("DEEP SLEEP") )); wait(RADIO_DELAY); } }else{ consecutiveSleepMinutesMotion = 0; if(consecutiveAwakeMinutesMotion < 250){ consecutiveAwakeMinutesMotion++; } if(consecutiveAwakeMinutesMotion == 5){ if(detectedREM && minutesSinceREM > 60){ Serial.println(F("Light sleep after REM")); send(statusMessage.setSensor(CHILD_ID_STATUS).set( F("Light sleep after rem") )); wait(RADIO_DELAY); minutesSinceREM = 0; // we found the light sleep phase in a good time segment after the rem phase. So reset this counter. }else{ Serial.println(F("Light sleep")); send(statusMessage.setSensor(CHILD_ID_STATUS).set( F("Light sleep") )); wait(RADIO_DELAY); } // Definitely moved away from the REM phase, so reset those variables: consecutiveSleepMinutesMotion = 0; detectedREM = false; } } movementsList[loopCounter] = motionCounter; Serial.print(F("Loop number and motion sensor movements: ")); Serial.print(loopCounter); Serial.print(F(" -> ")); Serial.println(movementsList[loopCounter]); motionTotal = movementsList[1] + movementsList[2] + movementsList[3] + movementsList[4] + movementsList[5]; Serial.print(F("Sending motion total:")); Serial.println(motionTotal); send(motionMessage.set(motionTotal)); wait(RADIO_DELAY); // We ask the server to acknowledge that it has received the data. It it doesn't, remove the connection icon. if( send(motionMessage.set(motionTotal)) ){ // was ),1) ){ Serial.println(F("Connection is ok")); #ifdef HAS_DISPLAY // add W icon oled.set1X(); oled.setCursor(90,0); oled.print(F("W")); #endif }else { Serial.println(F("Connection lost")); #ifdef HAS_DISPLAY // remove W icon oled.set1X(); oled.setCursor(90,0); oled.print(F(" ")); #endif } // every loop (minute) the movement counter is reset. motionCounter = 0; } } void turnOffRinging() { if(brightness != 0){ Serial.println(F("Send: resetting dimmer to 0")); send(dimmerMessage.set(0)); } brightness = 0; alarmRinging = false; // If the alarm is ringing, then pressing the button stops the ringing. The alarm will still be set for the next day. alarmSearching = false; // Turning the alarm during the searching phase means we should no longer look for a moment to ring the alarm. noTone(SPEAKER_PIN); } void receive(const MyMessage &message) { Serial.print("__Incoming change for child: "); Serial.println(message.sensor); if (message.type==V_STATUS && message.sensor == CHILD_ID_SET_ALARM) { // Toggle of alarm on or off Serial.println(F("__RECEIVED ALARM TOGGLE")); // Change alarm state alarmSet = message.getBool()?1:0; if(alarmSet == false && (alarmSearching || alarmRinging)){ turnOffRinging(); } updateClockDisplay(); // Write some debug info } if (message.type == V_PERCENTAGE && message.sensor == 4) { // If it's the desired sensitivity level // Retrieve the power or dim level from the incoming request message int receivedSensitivity = atoi( message.data ); motionThreshold = byte(receivedSensitivity); saveState(4, motionThreshold); Serial.print(F("Requested motion threshold is ")); Serial.println( motionThreshold ); } } void receiveTime(unsigned long controllerTime) { Serial.print(F("Received time: ")); Serial.println(controllerTime); unixTime = controllerTime; breakUpTime(unixTime); #ifdef HAS_DISPLAY updateClockDisplay(); // Update the hours on the display #endif } #ifdef HAS_DISPLAY void updateClockDisplay() // Update clock time display { oled.set2X(); // Switch to large font size oled.setCursor(30,2); if(hours < 10){ oled.print(F(" ")); } oled.print(hours); oled.print(F(":")); if(minutes < 10){ oled.print(F("0")); } oled.print(minutes); // This is important: the display shows the alarm time as being 30 minutes later than the internal alarm time. This is done for more easy programming, since we mostly case about the moment 30 minutes before the alarm should go off. displayHours = alarmHours; if(alarmMinutes >= 30){displayHours++;} if(displayHours > 23){displayHours = 0;} displayMinutes = (alarmMinutes + 30) % 60; //update alarm time display oled.setCursor(17,5); if(!alarmSet){ oled.print(F(" ")); oled.set1X(); oled.setCursor(25,5); } if(displayHours < 10){oled.print(F(" "));} oled.print(displayHours); //oled.print(hours); // Used for debugging oled.print(F(":")); if(displayMinutes < 10){ oled.print(F("0")); } oled.print(displayMinutes); //oled.print(minutes); // Used for debugging if(alarmSet && !alarmSearching){ oled.print(F(" SET")); } // Alarm set, but we're not in the 30 minuts before the deadline yet. else if(alarmSet && alarmSearching){ oled.print(F(" ON!")); } // We're in the 30 minutes before the user wants to wake up at the latest. else if(alarmSet && alarmRinging){ oled.print(F(" !!!")); } // NOW IS THE BEST TIME TO WAKE UP! WAKE UP! } #endif void breakUpTime(uint32_t timeInput){ // Break the given time_t into time components. // This is a more compact version of the C library localtime function // Note that year is offset from 1970! uint32_t time; time = (uint32_t)timeInput; uint32_t Second = time % 60; time /= 60; // now it is minutes minutes = time % 60; time /= 60; // now it is hours hours = time % 24; time /= 24; // now it is days //int Wday = ((time + 4) % 7) + 1; // Which day of the week is it. Sunday is day 1 Serial.print(F("Received time: ")); Serial.print(hours); Serial.print(F(":")); Serial.println(minutes); /* possible todo: learn what the work-days are (mon-fri), and create a toggle to only be active on work days. year = 0; days = 0; while((unsigned)(days += (LEAP_YEAR(year) ? 366 : 365)) <= time) { year++; } //byte Year = year; // year is offset from 1970 days -= LEAP_YEAR(year) ? 366 : 365; time -= days; // now it is days in this year, starting at 0 days=0; month=0; monthLength=0; for (month=0; month<12; month++) { if (month==1) { // february if (LEAP_YEAR(year)) { monthLength=29; } else { monthLength=28; } } else { monthLength = monthDays[month]; } if (time >= monthLength) { time -= monthLength; } else { break; } } byte Month = month + 1; // jan is month 1 byte Day = time + 1; // day of Hreen. * * */ @alowhum thanks. I don't see anything obvious that would mess up the watchdog. I would try to get a debug log from where it crashes, to see if there are any clues. does the arduino LED flashes bright and fast when it crashes?
https://forum.mysensors.org/topic/9946/watchdog-not-watchdogging/10
CC-MAIN-2019-51
refinedweb
4,777
57.37
07 February 2006 19:10 [Source: ICIS news] HOUSTON (ICIS news)--US benzene spot business was done at $2.85/gallon ($848 or Euro708/tonne) delivered duty paid (DDP) Houston-Texas City (HTC) on Tuesday morning, down 5-7 cents/gallon from late Monday on lower European benzene spot numbers and weaker crude oil, US Gulf aromatics traders said. Tuesday morning’s price drop was nearly identical to the $20/tonne net decrease in European benzene spot prices earlier in the day. Crude oil prices also caused US benzene prices to fall. March New York Mercantile Exchange (Nymex) crude oil futures prices reached an intra-day low of $63.50/bbl, down $1.61/bbl from the Monday settlement. Crude prices were lower on Tuesday in response to inventory build forecasts. The drop in ?xml:namespace> The Shipping brokers said the next available loading dates for Asia Pacific-USG would be late March. Freight rates were being talked in the upper $70s/tonne to upper $80s/tonne, above the average $65/tonne spread between US Gulf and Asian benzene spot
http://www.icis.com/Articles/2006/02/07/1040150/feb-us-benzene-spot-prices-drop-below-2.90gallon.html
CC-MAIN-2014-41
refinedweb
180
65.22
The VectorTile layer We now know how to load tiled images, and we have seen different ways to load and render vector data. But what if we could have tiles that are fast to transfer to the browser, and can be styled on the fly? Well, this is what vector tiles were made for. OpenLayers supports vector tiles through the VectorTile layer. A world map rendered from vector data We'll start with the same markup in index.html as in the Basics exercise. <html> <head> <meta charset="utf-8"> <title>OpenLayers</title> <style> html, body, #map-container { margin: 0; height: 100%; width: 100%; font-family: sans-serif; } </style> </head> <body> <div id="map-container"></div> <script src="./main.js" type="module"></script> </body> </html> As usual, we save index.html in the root of our workshop folder. For the application, we'll start with a fresh main.js in the root of the workshop folder, and add the required imports: import 'ol/ol.css'; import MVT from 'ol/format/MVT'; import VectorTileLayer from 'ol/layer/VectorTile'; import VectorTileSource from 'ol/source/VectorTile'; import {Map, View} from 'ol'; import {fromLonLat} from 'ol/proj'; The data source we are going to use is a simple map of the countries of the world from Natural Earth data, served as vector tiles by GeoServer. The map setup we're going to create here is the same that we have used in previous exercises: const map = new Map({ target: 'map-container', view: new View({ center: fromLonLat([0, 0]), zoom: 2, }), }); The layer type we are going to use this time is a VectorTileLayer, with a VectorTileSource: const layer = new VectorTileLayer({ source: new VectorTileSource({ format: new MVT(), url: '' + 'ne:[email protected]%[email protected]/{z}/{x}/{-y}.pbf', maxZoom: 14, }), }); map.addLayer(layer); Our data source provides only zoom levels 0 to 14, so we need to pass this information to the source. Vector tile layers are usually optimized for a tile size of 512 pixels, which is also the default for the VectorTile source's tile grid. The data provider requires us to display some attributions, which we are adding to the source configuration as well. As you can see, a VectorTileSource is configured with a format and a url, just like a VectorSource. The MVT format parses Mapbox Vector Tiles. Like with raster tiles, the tile data is accessed by zoom level and x and y coordinates of the tile. Therefore, the URL includes a {z} placeholder for the zoom level, and {x} and {y} placeholders for the tile coordinates. The working example at shows an unstyled vector tile map like this:
https://openlayers.org/workshop/en/vectortile/map.html
CC-MAIN-2021-43
refinedweb
438
61.56
Old Ghosts: XML Namespacesby Leigh Dodds January 10, 2001. Negotiating Peace. The current situation of undefined behavior has been described by Rick Jelliffe as a "lucky dip." The increasingly common practice of placing a schema for the namespace at the URI has been decried by many vocal members of the XML community, primarily because this doesn't take into account the many and varied schema languages currently in use. Shortly before Christmas, XML-DEV returned to this topic once more, exploring a worrying possibility raised by Paul Tchistopolskii. In this scenario, if some popular tool were to define some behavior involved when dereferencing a Namespace URI, then this de facto usage would limit the possibility of standardizing the intended behavior in the future. A potentially serious loophole. Michael Champion dubbed it the "Tool X Horror Scenario." The example that the "Tool X" discussion brings to my mind is Microsoft's implementation of a draft XSLT spec in IE5. Even though this was done with the best of intentions, and MS has worked very hard to provide updates that supported the XSLT Recommendation, IE5 became an "evil de facto implementation" that has caused immense confusion and additional work for the foot soldiers of XML. I could easily imagine a similar round of confusion if some popular tool (IE6, or Xerces perhaps?) -- with the best of intentions, and in full compliance with the letter of the namespaces spec -- were to offer some added functionality *if* namespace URIs point to some specific type of object. (If you really want a nightmare scenario, imagine that it is some proprietary object, not one defined by an open standard.) As the history of HTML shows, the Ordinary Joe developers of the world care a lot more about the "standards" defined by the behavior of popular tools than by the words in some document on the W3C website. Naturally this scenario prompted a lot of heated debate, threatening to reignite the Namespace discussion. Rick Jelliffe attempted to defuse the situation, suggesting that effort would better invested in discussing solutions to the problem, rather than debating the Namespace specification itself. Jelliffe colorfully named this the "Treaty of Wulai" after the hot springs he'd spent the day enjoying. Jelliffe suggested how the issue might be resolved by proposing a convention to be used when dereferencing a Namespace URI. For example, the convention could be that dereferencng the namespace URI (when it is an http:, at least) results in: - a structural schema (XML Schema, DTD, or HTML documentation, or other schema like Schematron, RELAX, XDR, SOX, DSD, etc determined by content negotiation); - a semantic schema (RDF Schema) also containing links to structural schema(s) according to a well-known convention; - some definite kind of directory or resource discovery document, to be decided, which allows systematic retrieval of lots of different kinds of resource, including links to semantic and structural schemas; - or nothing. While many XML-DEV members couldn't resist the opportunity to do battle on the Namespace issue once more, for many the prospect of a peace treaty was very welcome. Under the Radar Of the four options presented by Jelliffe only the last two found supporters: deprecate dereferencing Namespace URIs or provide a resource discovery document. Paul Tchistopolskii in particular strongly urged deprecation. I think it is not sane to keep this hole open. I think it will take years to understand what could *really* be pointed by that URL/URI and until that - let us close the door ? Right? ... Let us look at this hole. Whatever will be attached to that URI - it will affect almost every XML document in the world and this could be done at any point of time. Tchistopolskii believed that closing the loophole was the important first step, giving the community time to fully debate the alternative options. Tim Bray took the opposing view, believing that an XHTML based resource discovery format could be devised relatively quickly. I think this would be the ideal kind of thing to retrieve when you dereference a namespace URI. Readable by humans and also machine-processable and fully extensible. If I were feeling particularly grandiose, I'd also describe such a thing as a key building block for the Semantic Web. Bray's plan found favor very quickly, and proposals rapidly began appearing. Dan Brickley had already observed that RSS 1.0 and Dublin Core use a similar technique (human-readable documentation referencing machine-processable schema). Simon St. Laurent made a quick XLink based proposal, Jonathan Borden contributed XMLCatalog, and Tim Bray himself proposed XML Namespace Related-resource Language. Debate quickly centered on the relative merits of the proposals, prompting further revisions. Sean Palmer began the winnowing process, synthesizing Borden and Brays separate proposals into XML Namespace Catalog Language. There isn't space here to discuss the differences between each proposal, but, happily, they have rapidly stabilized into a clear favorite: Resource Discovery Description Language (RDDL). Resource Discovery Description Language. Michael Brennan provided an enthusiastic summary of the advantages of RDDL. I'm very excited about RDDL. RDDL is simple, lightweight, easy to implement, and offers the added bonus that it is displayable in ordinary web browsers. The approach of placing human readable documentation at the end of the URL that also contains a machine readable catalog of other related resources is a perfect approach, IMO. I love this. This is great! XML-DEV has begun discussing an API for RDDL which will provide a means to process RDDL documents from within XML applications. It should be noted that although the primary use case for RDDL is as the format retrieved when dereferencing a Namespace URI, it is not limited to that. It could be used when an application requires any additional resources while processing an XML document. However there are still issues to be resolved. Notably, Rick Jelliffe has been urging that XML-DEV consider existing situations where Namespace URIs are being resolved directly to useful resources.. These are important to be addressed early in the evolution of RDDL. It should be possible to incorporate the use cases that Jelliffe is highlighting without significant changes to the RDDL proposal as it stands. Open issues aside, the promising thing is that XML-DEV is at last talking about concrete solutions rather than indulging in Namespace flame wars. It's worth noting that the last project that garnered such immediate progress on XML-DEV was the SAX API. In this light, RDDL can be seen as a promising start to the New Year.
http://www.xml.com/pub/a/2001/01/10/rddl.html
crawl-002
refinedweb
1,087
51.48
Make C++ a piece of cake. Project description Make C++ a piece of cake. Cupcake is a thin layer over CMake and Conan that tries to offer a better user experience in the style of Yarn or Poetry. Audience To use this tool, your C++ project must fit a certain profile and follow some conventions. The profile is what I call a basic C++ project: - A name that is a valid C++ identifier. - Zero or more public dependencies. These may be runtime dependencies of the library or executables, or they may be build time dependencies of the public headers. Users must install the public dependencies when they install the project. - Some public headers nested under a directory named after the project. - One library, named after the project, that can be linked statically or dynamically (with no other options). The library depends on the public headers and the public dependencies. - Zero or more executables that depend on the public headers, the library, and the public dependencies. - Zero or more private dependencies. These are often test frameworks. Developers working on the library expect them to be installed, but users of the library do not. - Zero or more tests that depend on the public headers, the library, the public dependencies, and the private dependencies. The conventions are popular in the community and seem to be considered best practices: - The project is built and installed with CMake [1]. - The project uses semantic versioning. - The project installs itself relative to a prefix. Public headers are installed in include/; static and dynamic libraries are installed in lib/; executables are installed in bin/. - The project installs a CMake package configuration file that exports a target for the library. The target is named after the project, and it is scoped within a namespace named after the project. Dependents link against that target with the same syntax whether it was installed with CMake or with Conan. Commands package This abstracts the conan create ↗️ command. It: Copies a Conan recipe for your project to your local Conan cache, a la conan export ↗️. Builds the recipe for your current settings (CPU architecture, operating system, compiler) and the Release build type, a la conan install ↗️. Configures and builds an example that depends on your project as a test of its packaging, a la conan test ↗️. That example must reside in the example/ directory of your project with a CMakeLists.txt that looks like this: add_executable(example example.cpp) target_link_libraries(example ${PROJECT_NAME}::${PROJECT_NAME}) Etymology I love Make, but it’s just not cross-platform. Just about every other single letter prefix of “-ake” is taken, including the obvious candidate for C++ (but stolen by C#), Cake. From there, it’s a small step to Cppcake, which needs an easy pronunciation. “Cupcake” works. I prefer names to be spelled with an unambiguous pronunciation so that readers are not left confused, so I might as well name the tool Cupcake. A brief Google search appears to confirm the name is unclaimed in the C++ community. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/cupcake/
CC-MAIN-2020-40
refinedweb
523
65.42
On Fri, 2005-04-01 at 10:05 -0600, John A Meinel wrote: > David Allouche wrote: > >. > > > Well, one thing that I've notice digging through the baz code, is that > they actually are trying to restructure it, to potentially allow changes > to the namespace. I can't say where they are going, but some of the > structures look like a good place to change from c--b--v to maybe > c--b--v--s, or c--b1--b2--b3--v, or something like that. The goal with the restructuring is to make libarch more agile. Tom has talked about 'sub-version-branches' for example. (c--b1--v--b2). Supporting that /should/ be no more work than teaching the namespace that that is a valid patch-container, and the various serialisation formats how to store such things. At the moment that is quite a hurdle : the revision library, and the {arch} tree are not structured well for that (the path length grows with bad O for every added component). The baz archive is structured well to handle such variation - but will still have problems : names varying only by case, and names so long some fs's cannot have a single path element with that name. > So it might be possible that because of a potential namespace > restructure they are looking to use a more generic term "id" then > sticking with revision. The reason I didn't use 'tree-revision' is that its likely confusion to users to see 'tree-version' and 'tree-revision'. The former would be called 'branch' or 'tree-branch' or something similar in most other [RV]CS's. version and revision are used interchangably by other [RV]CS tools - we're the only one that I'm aware of that has two terms with such a similar dictionary meaning having radically different meanings in the tool. ( meaning 3 and meaning 2). When the user interface /requires/ users to understand that our meaning of version is 'the controlled softwares version' and not 'the last commit I did' - then its confusing. So while tree-id isn't /great/ - and I'm looking for a great name - it: is less confusing than having tree-version and tree-revision is intuitive to new users (yes, I've asked). better than renaming tree-version when the rest of the model still talks about 'VERSION' when other systems would say 'branch'. Or 'Line of development'. > The internal structure they use is called arch_patch_id, which calls > fully qualified version name == branch. So internally the place where > patches go is a "branch". Right... see above. If you abstract out the namespace as a policy object, not a functional one, then you can change it rapidly when you want to ;). In that model you have 2 components for a revision : the branch its in, and the patch itself. Everything else is policy, and should be mutable without breaking the rest of the code. Thats the goal anyway. Rob signature.asc Description: This is a digitally signed message part
http://lists.gnu.org/archive/html/gnu-arch-users/2005-04/msg00007.html
CC-MAIN-2018-05
refinedweb
501
68.81
get current day, month, year, day of week/month/year in java In this tutorial we will see how to get current date, day, month, year, day of week, day of month and day of year in java. import java.util.Calendar; import java.util.Date; import java.util.TimeZone; class Example { public static void main(String args[]) { Calendar calendar = Calendar.getInstance(TimeZone.getDefault()); //getTime() returns the current date in default time zone Date […] How to get current timestamp in java Its() […] How to get time in milliseconds in java using Date/Calendar class There are two ways to get time in milliseconds in java. 1) Using public long getTime() method of Date class. 2) Using public long getTimeInMillis() method of Calendar class Here is the complete self explanatory example: Do refer the comments in the program. import java.util.Calendar; import java.util.Date; public class TimeDemo { public static void main(String[] […] How to get current date and time in java Using SimpleDateFormat and Date/Calendar class, we can easily get current date and time in Java. Below are the code snippets of both the ways: Current date and time can be obtained using two methods: 1) Using Date class Specify the desired pattern while creating object of SimpleDateFormat. Create an object of Date class. Call the […] Java date Below […] How to Parse Date in Desired format – Java Date This post is to discuss few important points about parse() method. If you are looking for String to Date and Date to String conversion then refer the following posts: Convert String to Date in Java Convert Date to String in Java Converting strings to desired date format is a time consuming and tedious process in […] Date Formatting In Java With Time Zone This tutorial will help you getting the current time, date and day in any given format for any particular time zone in java. Listed below are some IDs for some common Time zones in the US: Time Zone Java Time Zone ID Hawaiian Standard Time US/Hawaii Alaska Standard Time US/Alaska Pacific Standard Time US/Pacific Mountain […] Java calendar class: add/subtract Year, months, days, hour, minutes Java’s Calendar class provides a set of methods for manipulation of temporal information. In addition to fetch the system’s current date and time, it also enables functionality for date and time arithmetic. Adding Time Period (Months and days) to a Date Suppose you want to add a time period to a date. How will you […]
http://beginnersbook.com/category/technology/java-guide/java-date/
CC-MAIN-2016-44
refinedweb
417
56.49
I'm learning Java, and I've made a simple program called Savings that takes an initial amount, an annual deposit, and calculates balances with a 6.5% APY calculated annually. I added rounding to make it down to cents, but when it gets to the 7 iteration, it is no longer rounded. Here's the code: public class Savings { public static final double INTEREST_RATE = 0.065; public static void main(String[] args) { account(1000.0, 100, 25); } public static void account(double initial, int deposits, int years) { double balance = initial; for(int i = 1; i <= years; i++) { double interest = Math.round(balance * INTEREST_RATE * 100)/100.0; System.out.print(i + "\t" + balance + "\t" + interest + "\t" + deposits + "\t"); balance = balance + interest + deposits; System.out.println(balance); } } } Output 1 1000.0 65.0 100 1165.0 2 1165.0 75.73 100 1340.73 3 1340.73 87.15 100 1527.88 4 1527.88 99.31 100 1727.19 5 1727.19 112.27 100 1939.46 6 1939.46 126.06 100 2165.52 7 2165.52 140.76 100 2406.2799999999997 8 2406.2799999999997 156.41 100 2662.6899999999996 9 2662.6899999999996 173.07 100 2935.7599999999998 10 2935.7599999999998 190.82 100 3226.58 11 3226.58 209.73 100 3536.31 If I add change the last balance to: balance = Math.round((balance + interest + deposits)*100)/100.0; it works fine. I just mostly am trying to figure out why the original gets a funky answer. I've worked through the debugger, and it just doesn't make sense. Thanks for the help!
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18377-basic-code-looks-right-gets-funky-printingthethread.html
CC-MAIN-2014-15
refinedweb
267
82.51
Application lifecycle (Windows Runtime apps) This topic describes the lifecycle of an app, from the time it is deployed through its removal. By suspending and resuming your app appropriately, you ensure that your customer has the best possible experience with your app. App execution state This illustration represents the transitions between app execution states. The next several sections of this topic describe these states and events. For more detail about when each state transition occurs and what your app should do in response, see the docs for the ApplicationExecutionState enumeration. App launch An app is launched whenever it is activated by the user while the process is in the NotRunning state. An app may be in the not running state because it was just has not yet been launched, it was previously running and crashed, or it was suspended but could not be kept in memory and was therefore terminated. When an app is launched, the OS displays a splash screen for the app. To configure this splash screen, see Adding a splash screen. While its splash screen is displayed, an app should ensure that it's ready for its user interface to be displayed to the user. The primary tasks for the app are to register event handlers and set up any custom UI it needs for loading. the Extended Splash Screen documentation and Splash screen sample for more details. After the app completes activation, it enters the Running state and the splash screen is torn down. Showing a window, returning from the activation handler, and completing a deferral are specific ways that an app completes activation. For more info, see: App activation An app can be activated by the user through a variety of contracts and extensions. To participate in activation, your app must register to receive the Activated | activated event. Your app's activation event handler can test to see why it was activated and whether it was already in the Running state. Apps can be activated as follows. Apps that are built for Windows 8.1 and later can also be activated with these types. Windows Phone apps can be activated with these types. Your app can use activation to restore previously saved data in the event that the operating system terminates your app, and subsequently the user re-launches it. The OS may terminate your app after it has been suspended for a number of reasons. The user may manually close your app, or sign out, or the system may be running low on resources. If the user launches your app after the OS has terminated it, it receives an activated event and the user sees the splash screen of your app until the app is activated. You can use this event to determine whether your app needs to restore the data which it had saved when it was last suspended, or whether you must load your app’s default data. The activated event arguments include a PreviousExecutionState property that tells you which state your app was in before it was activated. This property is one of the values from the ApplicationExecutionState enumeration. The table below summarizes the possibilities: PreviousExecutionState could also have a value of Running or Suspended, but in these cases your app was not previously terminated and therefore you don’t have to worry about restoring data. Note Note that if you log on using the computer's Administrator account, you can't activate any Windows Store apps. For more info, see App extensions. App suspend An app can be suspended when the user switches away from it or when the device enters a low power state. Most apps stop running when the user switches away from them. When the user moves an app to the background, the OS waits a few seconds to see whether the user immediately switches back to the app. If the user does not switch back, the OS suspends the app. If an app has registered an event handler for the Suspending | suspending event, this event handler is called right before the app is suspended. You can use the event handler to save relevant app and user data to persistent storage. We recommended that you use the application data APIs for this purpose because they are guaranteed to complete before the app enters the Suspended state. For more info, see Application data. You should also release exclusive resources and file handles so that other apps can access them while your app isn't using them. Generally, your app should save its state and release its exclusive resources and file handles immediately in the event handler when the suspending event is received, and generally take about a second to do so. If an app does not return from the suspending event within 5 seconds on Windows and between 1 and 10 seconds on Windows Phone, the OS assumes that the app has stopped responding and terminates it. The operating system attempts to keep as many suspended apps in memory as possible. Keeping these apps in memory ensures that users can quickly and reliably switch between suspended apps. However, if there aren't enough resources to keep your app in memory, the OS can terminate your app. Note that appears as it did when it was suspended. There are some apps that need to continue to run to complete background tasks. Your app can continue to play audio in the background; for more info, see Quickstart: Adding audio in a Windows Runtime app. Background transfer operations continue even if your app is suspended or even terminated; for more info, see Quickstart: Downloading a file. For guidelines, see Guidelines for app suspend and resume. For example code, see: App visibility When the user switches from your app to another app, your app is no longer visible but remains in the running state until the OS can suspend it (for about 10 seconds). If the user switches away from your app but activates or switches back to it before it can suspended, the app remains in the running state. Your app doesn't receive an activation event when app visibility changes, because the app is still running. The OS simply switches to and from the app as necessary. If your app needs to do something when the user switches away and back, it can handle the VisibilityChanged | msvisibilitychange event. The visibility event is not serialized with the application data is lost, Resuming | resuming event, it is called when the app is resumed from the Suspended state. You can refresh your content using this event handler. If a suspended app is activated to participate in an app contract or extension, it receives the Resuming | resuming event first, then the Activated |: Note On Windows Phone, the OS within about 10 seconds. In Windows 8.1 and later, after an app has been closed by the user, the app is only removed from the screen and switch list without being Windows.UI.ViewManagement.ApplicationView.TerminateAppOnFinalViewClose property. If an app has registered an event handler for the Suspending | the OS or by the user. If your app needs to do something different when it is closed by the user than when it is closed by the OS, you can use the activation event handler to determine whether the app was terminated by the user or by the OS. See the descriptions of ClosedByUser and Terminated states in the docs for the ApplicationExecutionState enumeration. We recommend that apps not close themselves programmatically unless absolutely necessary. For example, if an app detects a memory leak, it can close itself to ensure the security of the user's personal data. When you close an app programmatically, the OS in the Dev Center. When the user activates an app after it crashes, its activation event handler receives an ApplicationExecutionState value of NotRunning, and should simply display its initial UI and data. App removal When a user deletes your app, the app is removed, along with all its local data. Removing an app doesn't affect the user's data, such as files in the Documents or Pictures libraries. Application lifecycle programming interfaces - Windows.ApplicationModel namespace - Windows.ApplicationModel.Activation namespace - Windows.ApplicationModel.Core namespace - Windows.UI.WebUI namespace - Windows.UI.Xaml.Application class - Windows.UI.Xaml.Window class - WinJS.Application namespace Related topics - Guidelines for app suspend and resume - Samples - App activated, resume, and suspend using the WRL sample
http://msdn.microsoft.com/en-us/library/IE/hh464925.aspx
CC-MAIN-2014-52
refinedweb
1,399
53.51
Heads up! To view this whole video, sign in with your Courses account or enroll in your free 7-day trial. Sign In Enroll Preview Separate Your Stylesheet Into Partials9:37 with Guil Hernandez Sass partials let you split your stylesheet into separate files. They help modularize your CSS and keep things easier to maintain. Each partial is a single file and is like a small piece of the big CSS puzzle; it contains a portion of your stylesheet. [MUSIC] 0:00 The CSS file for even a moderately complex website is immensely long, 0:04 containing hundreds or even thousands of styles. 0:08 Finding the single style you want to edit could be like finding a needle in 0:12 a haystack. 0:15 What's worse? 0:16 When it's time to add a new style to a style sheet with thousands of styles, 0:16 where should you put it? 0:20 The more styles a site has, 0:21 the more difficult it can be to maintain a single style sheet. 0:23 Currently, our style sheet is one lengthy list of variables, mix-ins, 0:27 placeholders and rule sets. 0:30 And if we keep adding components to the site, it's going to get crowded fast. 0:32 Fortunately Sass has a great solution to this problem, partials. 0:37 They're one of the main benefits of working with Sass. 0:40 Partials is like split your style sheets into separate files. 0:43 They help marginalize your CSS and keep things easier to maintain. 0:46 Each partial is a single file and its like a small piece in the big CSS puzzle. 0:50 It contains a portion of your style sheet. 0:54 You will use partials to group related styles. 0:56 For example you could place audio variables inside a partial and 0:59 your CSS reset and base styles inside other partials. 1:02 You can create as many partials as you like while you're writing you CSS and 1:05 when it's time to write your finals CSS file, 1:09 Sass will merge the partials into a single CSS file. 1:10 Let me show you how we can break our main style up into small chunks using Sass 1:14 partials. 1:17 It's possible to split regular CSS into multiple files, however 1:19 each CSS file creates and additional HTTP request for the browser to process. 1:23 Too many of these request affect the performance of your site, so 1:28 many developer avoid this approach. 1:31 Sass partials give you the best of both world, multiple files to organize your 1:33 styles with the performance benefit of a single production ready CSS file. 1:38 You can create tens or 1:42 hundreds of partials without impacting your site's performance. 1:43 Using partials is a two-step process. 1:47 First, you create the partial files to organize your CSS 1:49 into related groups of styles. 1:53 And second, import the partials into a regular Sass file. 1:54 Sass compiles the imported partial files and 1:58 outputs the final CSS into a single file. 2:01 So let's start with creating partials. 2:03 To create a partial, simply add an underscore character 2:06 to the beginning of the name for a sass or scss file. 2:08 The underscore is when instruct says that the file should not be compiled into CSS. 2:12 So we're gonna break out related sections of our style sheet into partials and 2:16 this will help organize our project. 2:20 For instance, variables can be defined in their own partial. 2:22 So I'm going to create new folder named Partials inside the scss folder. 2:25 In the partials folder let's create a new file and 2:34 name it _variables.scss. 2:39 Now we'll cut all variables out of style.scss and 2:49 paste them in our new variables partial. 2:54 Saving these changes produces an error in the console and output CSS. 3:01 The error says that there is an undefined variable 3:08 of color-text on line 35 of style.scss. 3:13 You see, Sass won't directly compile partial files. 3:17 It ignores files with the underscore character, so in this case, 3:21 Sass did not include the context of this variables partial when compiling to css. 3:24 So right now all variables in our project are undefined. 3:30 Saas will compile the contents of a partial only when you import 3:36 it from within a a regular Sass or scss file using the import directive. 3:40 So at the very top of style.scss, I'll write a comment for our partial imports. 3:46 Then I'll import the new variables .sspartials into this 3:54 file by typing @import followed by the path to the file inside quotes. 3:59 When importing a partial, you can leave out the underscore in the file name and 4:12 the .scss word.sass extension. 4:17 Just make sure you wrap the file name or path to the file in single or 4:20 double quotes otherwise Sass will throw an error. 4:24 Keep in mind that the style.scss file 4:27 does not have an underscore character in its name. 4:30 It's a regular Sass file. 4:32 So when we save and compile these changes the undefined variable error 4:34 no longer appears in the console or the output CSS. 4:39 All our rules can now reference the variables inside the variables partial and 4:44 include their values in the output CSS. 4:48 Next, let's break out other related sections of our style sheet into partials, 4:52 like the mixins and place holders. 4:57 So I'll create a new file inside the partials folder 4:59 named _mixins.scss. 5:04 Then I'll select and cut all mixins out of style.scss and 5:10 paste them inside the mixins partial. 5:15 You can import multiple partials by respecifying the import directive, so 5:24 for example in style.scss we can type import, 5:29 Followed by partials mixins. 5:35 But by now, you know that Sass helps you to avoid retyping and repeating code. 5:39 So of course, Sass provides a shortcut for grouping your partial imports. 5:43 You define just one import directive then write the files as a comma separated 5:48 list I usually place them on separate lines to make the code easier to read. 5:54 All right, let's keep going. 6:04 Next, we'll create a partial for our placeholder selectors. 6:05 So inside the partials holder, we'll create a new file named _helpers.scss. 6:09 Then I'll cut the clear fix and button place holders out 6:20 of style.scss and paste them inside helpers.scss. 6:25 And import the new helpers partial in style.scss. 6:32 So I'll replace the semi-colon here with a comma. 6:39 Then on the next line specify the path to the helper's partial with 6:42 partial/helpers then add the semi-colon to this line. 6:47 So by separating our code into the multiple scss file, so 6:52 our style sheet is starting to look less cluttered. 6:56 And you can create as many partials as you want while building your project, and 6:59 Sass will compile them to a single file to use in production. 7:02 You can also import partials onto other partials. 7:08 To make our project even more organized, let's place 7:11 all the main styles written here in style.scss in a partial of their own. 7:15 So in the partials folder let's create a new file 7:19 named _main-styles.scss. 7:25 Then I'll go ahead and select and 7:29 cut all remaining rules out of style.scss and 7:33 paste them into main-styles.scss 7:37 Then will import the file at the top of style.css. 7:45 So now, styles.scss only does one thing, 7:54 it lists the partials that Sass should import and compile. 7:57 And because there's no underscore in the file name it's the file that Sass uses 8:01 to name the output CSS file. 8:05 On a larger Sass project with dozens or hundreds of partial files, 8:09 you normally won't place all your partials inside one folder. 8:13 You'll likely organize and 8:17 sort them into multiple directories containing several partials. 8:18 For example, you may have a directory containing partials for 8:22 your CSS reset, base and typography styles. 8:24 Another directory just for your layout and component styles. 8:28 And one for your variables, functions, mixins, and helpers. 8:31 You could also place all your meta queries into their own partial. 8:35 But I'm going to teach you a convenient way to manage meta queries with Sass 8:38 in the next video. 8:41 Now, I'm going to stop here but you don't have to. 8:43 Why don't you practice creating partials by breaking main-styles.scss out into 8:46 other partials for the base, layout and component styles. 8:51 You could even take things a step further and 8:55 sort all related partials into directory of their own Just remember to 8:57 keep your partials simple by only breaking up related bits of your code. 9:02 In the next video, I'll show you how I organize the files. 9:06 It's important to mention, that when importing partials, the order in which you 9:11 import them matters because that's the order in which they compile to CSS. 9:15 So import any dependencies first or a code that's being referenced in other rules 9:19 like variables, mixins and helper rules. 9:23 Then input your base layout, component styles and so on. 9:26 I've included links to Treehouse videos that teach you advanced methods for 9:30 structuring your Sass projects in the teachers notes. 9:34
https://teamtreehouse.com/library/separate-your-stylesheet-into-partials
CC-MAIN-2022-33
refinedweb
1,782
72.46
Provided by: manpages-dev_4.04-2_all NAME _llseek - reposition read/write file offset SYNOPSIS #include <sys/types.h> #include <unistd.h> int _llseek(unsigned int fd, unsigned long offset_high, unsigned long offset_low, loff_t *result, unsigned int whence); Note: There is no glibc wrapper for this system call; see NOTES.. This system call exists on various 32-bit platforms to support seeking to large file offsets.. To invoke it directly, use syscall(2). However, you probably want to use the lseek(2) wrapper function instead. SEE ALSO lseek(2), lseek64(3) COLOPHON This page is part of release 4.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.ubuntu.com/manpages/xenial/man2/_llseek.2.html
CC-MAIN-2021-49
refinedweb
125
68.77
getting the SQL out of PreparedStatementBalamaci Serban Jun 1, 2005 6:30 PM I was wondering, if one can get the SQL out of a PreparedStatement. For example the code in public class TestClass { public void aMethod () { PreparedStatement sql=con.prepareStatement("INSERT INTO person(id,adress,age) VALUES(?,?,?,?)"); sql.setInt(1,22); sql.setString(2,?Str. .....?); sql.setInt(3,24); sql.executeUpdate(); } } We could write an interceptor, intercept prepareStatement and get the parameter, but the SQL String has those nasty ? ? ? in it. Next we have another interceptor intercept the ..PreparedStatement->set*(..) but how would i be able to get the info extracted with the first interceptor(the SQL String with ? ? ? in it), if only i could get, that then i would use a regular expresion to parse it and add the info of that perticular set* method instead of the ? and finally at executeUpdate() i would just have the plain clear as day SQL. Metadata to pass the data from one Interceptor to the other is out of the question i guess cause i belive that's for chains of interceptors, right? How about we dinamicaly add a field to the TestClass and put the partial result of all the calls in there. Could we do that? Is it possible to make it work. Any ideas? 1. Re: getting the SQL out of PreparedStatementBill Burke Jun 1, 2005 6:41 PM (in response to Balamaci Serban) proxy the return from prepareStatement(). The proxy can hold the SQL and then you could get it at runtime. 2. Re: getting the SQL out of PreparedStatementBalamaci Serban Jun 2, 2005 2:42 PM (in response to Balamaci Serban) Something like this?: public class sqlPreparedStatement implements PreparedStatement { PreparedStatement st; SQLString SQLstr; public sqlPreparedStatement(PreparedStatement st) { this.st=st; } public setString(int par, String str) { //do the wright work on the string ... st.setString(par,str); } ........ ....... } and in the interceptor: return new sqlPreparedStatement((PreparedStatement)invocation.invokeNext()); well that would be a great idea. If I understood it corectly i could say the solution is simplisticaly beautiful. PS: 10x again Bill, just wondering how the hell do u get the time to answer on the forum when i've seen articles on aop,interviews on theserverside, the aop panel(that AspectJ IBM guy was boring:) ), heard u on ejb 3.0 tutorials on jboss, developing JBossAop, jessus u shouldn;t be answering the forum(although not many other people seem to be answering). 3. Re: getting the SQL out of PreparedStatementBalamaci Serban Jun 5, 2005 4:26 AM (in response to Balamaci Serban) i can't seem to be able to intercept prepareStatement. Could it be because Connection is an interface or i;m doing something wrong? 4. Re: getting the SQL out of PreparedStatementBalamaci Serban Jun 5, 2005 4:27 AM (in response to Balamaci Serban) "drakonis" wrote: i can't seem to be able to intercept prepareStatement. Could it be because Connection is an interface or i;m doing something wrong? Darn XML <bind pointcut="execution(public * java.sql.Connection->prepareStatement(..))"> <interceptor class="com.project.SQLInterceptor"/> </bind> 5. Re: getting the SQL out of PreparedStatementKabir Khan Jun 7, 2005 4:42 AM (in response to Balamaci Serban) We ignore all classes in the java.* and javax.* packages for the "normal" pointcuts. Use a caller side pointcut: 6. Re: getting the SQL out of PreparedStatementBalamaci Serban Jun 7, 2005 9:08 AM (in response to Balamaci Serban) Thanks, it killed the only remaining 2 braincels i got left du!#$2(can't even write) till i gave up. should work now.
https://developer.jboss.org/thread/88407
CC-MAIN-2018-43
refinedweb
596
55.34
a Horizontal StackLayout that contains various controls, the last of which is an Entry control that would extend off the right-hand side of the screen if it were not contained in the StackLayout, on iOS and Android typing into the Entry results in scrolling (either the Entry or the StackLayout) that ensures the caret at which typed characters appear is always in view. On WinPhone however, scrolling does not happen, so the caret and the characters being typed can actually be off the side of the screen. This makes for a poor user experience - scrolling should happen in an Entry so that the caret is always visible. @John, I have checked this issue but not able to reproduce it. Could you please provide me sample project so that I can reproduce this issue at my end. Thanks. Created attachment 11806 [details] Csharp file that demonstrates issue Source file that demonstrates problem described. You will need to change the namespace to match your project, but otherwise it will run as is. Thanks @John I have checked this issue with provided sample code in comment #2 and getting different behavior on iOS and Windows Phone. Screencast for iOS: Screencast for WindowsPhone: Please check the screencast and let me know If I have missed anything for testing this issue. Thanks. I've just given this a quick test and it looks like it has been fixed in XF 2.0.1
https://xamarin.github.io/bugzilla-archives/31/31143/bug.html
CC-MAIN-2019-47
refinedweb
239
64.54
Konf is a Python package which designed to simplify the use of variables in configuration files. json and yaml supported out of the box. Project description Konf Tiny tool that designed to solve problems of Python configuration files located outside of VCS. Installation: pip install konf Running tests: nosetests Why Konf? Sometimes there is a need to get some settings outside of Python code and then to use them in an application. This can be secret keys, authentication tokens, URLs of third-party services, or other settings which depends on the server. Developers (or IT engineers) are faced with several challenges: - Validation of data importing from config. It may be simple typing, matching with range of possible values or with regexes. - Respect of all settings. Check that config contains all required data. Also, it can be useful to check that there are no extra (redundant) things inside a config (because it can be the data, forgotten to consider in an application). - Understanding what happens when something goes wrong. Correct representational exceptions allows immediately understand (just having looked at logs) what the problem is. Useful when deploying servers. Features: - Allows to DRY import variables - Readability for humans - JSON and YAML support out of the box (In fact, additional libraries will be automatically installed for support it) - Typing or validation of all importing data. And this is required because human factor prevention - Python 2.7, 3+ compatible - 100% code coverage - Custom format of configuration files can be used. If I missed and at now anyone uses something else except of supported formats, you can create an issue about it, and probably the new format will be supported in next versions. For Python data structures validation is used excellent library kolypto/py-good For YAML parsing is used a great lib of Kirill Simonov PyYAML Quick start Just look at the code. from konf import Konf # Filling of variables from config file tokens.yaml in k_ object k_ = Konf('tokens.yaml') # Getting variables from k_: first argument is a name of variable (specified in the config), # second can be a type or validator SECRET_KEY = k_('secret_key', basestring) AUTH_TOKEN = k_('auth_token', basestring) # In the next example is used a validator: list, that must contain # only objects with basestring type (str or unicode) CLIENTS = k_('clients', [basestring]) # And dict with two required keys with appropriate types DELAYS = k_('delays', {'case1': int, 'case2': int}) You can find more details and advanced examples about natural validation on good page Ok, what happened next? Imagine that tokens.yaml is missing. In case of this, after the script execution, we can see next exception message: konf.main.ReadError: Can`t access to the configuration file "tokens.yaml" Let’s create a file tokens.yaml and input next: --- secret_key: FOO auth_token: BAR clients: Q, delays: case1: 15 case2: 17 Exception is raised: Traceback (most recent call last): File "/Users/me/python/examples/example.py", line 19, in <module> CLIENTS = k_('clients', [basestring]) File "/Users/me/python/examples/konf/konf/main.py", line 126, in __call__ raise self.ValidationError(e) konf.main.ValidationError: expected a list Then fix this mistake: --- secret_key: FOO auth_token: BAR clients: [Q] delays: case1: 15 case2: 17 Now all be OK, because [Q] represents a list of values, not a string. Note: you can see the list of all supported exceptions in the end of this documentation page. Default values Do you need to use a value if any variable is not contained in a config file? You can use default value. from konf import Konf k_ = Konf('extra.yml') # 3rd arg is a default. If variable STRICT is not contained in config file, # USE_STRICT will be False USE_STRICT = k_('STRICT', bool, False) # You can also use None as default value WINNER = k_('WINNER', int, None) # Default values will never be validated, because you forcibly declaring it. # So, the next example is legit. SHIFT_TIME = k_('SHIFT', int, complex(42, 42)) Checking redundant variables Sometimes you want to be sure that all of the variables in a config file are used and you haven’t forgotten anything. In this situation the check_redundant() method can be helpful. from konf import Konf k_ = Konf('bar.yaml') FOO1 = k_('foo1', int) FOO2 = k_('foo2', int) # If config file contains anything except foo1 and foo2, # RedundantConfigError will be raised after call of this method! k_.check_redundant() # Fail Default values and check_redundant() also working fine together. from konf import Konf k_ = Konf('foo.yaml') X = k_('X', int, 0) Y = k_('Y', int, 0) # If X and Y doesn't contained in the config file, RedundantConfigError will not be raised # after next line of code, because they have default values. # So, it's just like X == 0 and Y == 0 k_.check_redundant() # Success More complex example Write the content to a social_auth.yml in a readable form: --- SN: vk: key: '123' secret: qwerty google: key: '456' secret: uiop twitter: key: '789' secret: zxc ok: key: '000' secret: vbn public_name: m Step-by-step process it in settings.py # 0. Select configuration file k_ = Konf('social_auth.yml') # 1. Declare validators # You can cache validators inside a Konf object as if it's a standard python dict k_['v1'] = { 'key': basestring, 'secret': basestring, } k_['v2'] = { 'key': basestring, 'secret': basestring, 'public_name': basestring } # 2. Get variables from config # For avoid copy-paste and massive chunks of code, just declare a new variable # and pass data from config to it sn = k_('SN', { 'vk': k_['v1'], # You can choose validator you want, for example v1... 'google': k_['v1'], 'twitter': k_['v1'], 'ok': k_['v2'] # ...or v2 }) # 3. Fill everything to a python variables which are required for 3rd-party library SOCIAL_AUTH_VK_OAUTH2_KEY = sn['vk']['key'] SOCIAL_AUTH_VK_OAUTH2_SECRET = sn['vk']['secret'] SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = sn['google']['key'] SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = sn['google']['secret'] SOCIAL_AUTH_TWITTER_KEY = sn['twitter']['key'] SOCIAL_AUTH_TWITTER_SECRET = sn['twitter']['secret'] SOCIAL_AUTH_ODNOKLASSNIKI_KEY = sn['ok']['key'] SOCIAL_AUTH_ODNOKLASSNIKI_SECRET = sn['ok']['secret'] SOCIAL_AUTH_ODNOKLASSNIKI_OAUTH2_PUBLIC_NAME = sn['ok']['public_name'] # 4. Check that config doesn't contain some garbage # (this might mean you forgot to get these variables, or this config is wrong, some draft for example) k_.check_redundant() # 5. If server is running without errors, and you will meet issue with this 3rd-party library later, # you can be sure that problem isn't in your configuration file. # Otherwise, you'll just catch a error on a start server stage. List of supported Exceptions Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/konf/
CC-MAIN-2018-26
refinedweb
1,076
54.63
Hi Guys, I am new to this and hope I can get a pointer in the right direction. I am doing an experiment where I need to stream 4 analogue inputs to measure voltages and one digital input to count pulses from a radiation detector. The pulses from the radiation detector are 3.3V square waves of around 50 microseconds, and I need to get the counts per second. I have the 4 analogue inputs working, but not sure how to configure the last input for the detector. Any assistance would be appreciated. Steven import sys import traceback from datetime import datetime import time import u3 def labjack(num_samples): MAX_REQUESTS = 100 # Number of requests per second SCAN_FREQUENCY = 20000 # Hz d = None d = u3.U3() d.configU3() # Check if U3 is HV d.getCalibrationData() d.configIO(FIOAnalog=4) # Set the FIO0 to FIO3 to Analog (d3 = b00000011) d.streamConfig( NumChannels=4, # Number of channels to stream PChannels=[0, 1, 2, 3], # Numbers 0-7 for positive channel inputs NChannels=[31, 31, 31, 31], # Numbers 0-7 for negative voltage inputs or 32 for single ended Resolution=3, ScanFrequency=SCAN_FREQUENCY) try: d.streamStart() missed = 0 dataCount = 0 packetCount = 0 readings = {"AIN0":[],"AIN1":[],"AIN2":[],"AIN3":[]} for r in d.streamData(): if len(readings["AIN0"]) >= num_samples: d.streamStop() d.close() break if r is not None: if r["errors"] != 0: print("Errors counted: %s ; %s" % (r["errors"], datetime.now())) if r["numPackets"] != d.packetsPerRequest: print("----- UNDERFLOW : %s ; %s" % (r["numPackets"], datetime.now())) if r["missed"] != 0: missed += r['missed'] print("+++ Missed %s" % r["missed"]) for k in readings.keys(): readings[k] += r[k] dataCount += 1 packetCount += r['numPackets'] else: print("No data ; %s" % datetime.now()) except: d.streamStop() print("Stream stopped.\n") d.close() now = int(datetime.now().strftime('%s%f')) avgs = {'time':now} for k in readings.keys(): avgs[k] = sum(readings[k])/len(readings[k]) d.close() return avgs while True: avgs = labjack(100) print(avgs) A counter would be your best option to capture the edges while streaming:... You would use our ConfigIO function to setup the counter:... You will need to use special channels to capture the counter readings while streaming. Since stream mode is limited to 16-bit channel reads and the counter returns a 32-bit value, you will need to stream a special counter channel (lower two bytes of the counter return) and the TC_Capture channel (upper two bytes of the counter return) then recombine the two into the full 32-bit counter value:-... Thanks for quick reply, I will try to follow the instructions. Support, I followed your links, but most of the directions were above my level, do you have any sample programs in Python showing how to assign a pin to count pulses and output a result? Even a few basic steps to get me started would be helpful. Thanks.. Steven Hi Guys, I tried my best to follow instructions in the above post, but despite dedicating most of my weekend to this nothing has worked. I'm obviously missing something. Does anyone have a short Python sample script showing how to configure a counter? "You will need to use special channels to capture the counter readings while streaming." This also confused me, not sure which channels should I use, I tried 210 and 224 but consistently got errors. d.configIO( ? ) d.streamConfig( ? ) I assume the problem must be in my settings. Any help would be much appreciated. Steven There is an example for configuring a counter in our Python source: d.configIO(EnableCounter0 = True, FIOAnalog = 15) If you do not change configIO to set TimerCounterPinOffset it will use the default value of 4, thus Counter0 would appear on FIO4 if no other timers or counters are set up. The pin offset must be 4 or higher when using the U3. If you have Counter0 enabled, you would want to read channel 210 and 224 under stream mode. d.streamConfig( NumChannels=6, # Number of channels to stream PChannels=[0, 1, 2, 3, 210, 224], # Numbers 0-7 for positive channel inputs NChannels=[31, 31, 31, 31,31,31], # Numbers 0-7 for negative voltage inputs or 32 for single ended Resolution=3, ScanFrequency=SCAN_FREQUENCY) You may need to reduce the scan frequency. The maximum scan rate is 50000Hz, and that is split across all channels (so 10000 if sampling 5 channels). Just a quick reply to say thanks, with your support and a bit of help from my son (programmer) I got all functions including the counter to work on my Dashboard. This was coded in Python3 using Dash Plotly on a Linux platform. Cheers.. Steven
https://labjack.com/forums/u3/configuring-u3-count-pulses
CC-MAIN-2022-27
refinedweb
772
64.71
In this section, we are going to sort the order of words in all the specified sentences. As the specified text consists of sentences that are terminated by either '.' or '?' or '!' followed by a single space. We have used split() method to split the sentences. Now in order to split the words of each sentence, we have again used used the split() method and using the Array.sort() method, we have arranged the sentences in alphabetical order of words. Here is the code for sorting Sentence in Alphabetical order: import java.util.*; import java.util.regex.*; public class StringExample { public static void main(String[] args) { String st = "hello! how are you? when are you coming? hope to see u soon."; String s1 = "", s2 = "", s3 = "", s4 = ""; Pattern p = Pattern.compile("[?!.]"); String arr[] = p.split(st); for (int i = 0; i < arr.length; i++) { s1 = arr[0]; s2 = arr[1]; s3 = arr[2]; s4 = arr[3]; } String st1 = "", st2 = "", st3 = "", st4 = ""; String a1[] = s1.split(" "); Arrays.sort(a1); for (int i = 0; i < a1.length; i++) { st1 += a1[i]; } String a2[] = s2.split(" "); Arrays.sort(a2); for (int i = 0; i < a2.length; i++) { st2 += a2[i] + " "; } String a3[] = s3.split(" "); Arrays.sort(a3); for (int i = 0; i < a3.length; i++) { st3 += a3[i] + " "; } String a4[] = s4.split(" "); Arrays.sort(a4); for (int i = 0; i < a4.length; i++) { st4 += a4[i] + " "; } System.out.println(st1 + "! " + st2 + "? " + st3 + "? " + st4 + "."); } } Output hello! are how you? are coming when you ? hope see soon to u.
http://www.roseindia.net/tutorial/java/core/arrangeSentences.html
CC-MAIN-2014-35
refinedweb
252
79.97
I make use of WebView within my app. It seems to be a bit short on error handling and timeout handling, which occasionally results in, well... nothing - just a blank view. How can I: (1) get a status from the WebView telling me whether it is still loading the DOM, has loaded the DOM and rendered the view, has got an error (and what the error is) etc? (2) get notified when the status in (1) changes (3) specify a timeout on a WebView? (if I can identify whether it has been populated using (1) I can probably workaround this using a timer) And, just in case I need it in future: (4) access the DOM? Many thanks, John H. For sure. For iOS, you would create a custom renderer and inherit from WebViewRenderer. Then within that, you would assign a custom WebView Delegate to the custom renderer's Delegate property. Then you can override LoadFailed() from with your custom WebView Delegate like so: public class CustomWebViewRenderer : WebViewRenderer { protected override void OnElementChanged(VisualElementChangedEventArgs e) { base.OnElementChanged(e); if(e.OldElement == null) { Delegate = new CustomWebViewDelegate(); } } } internal class CustomWebViewDelegate : UIWebViewDelegate { #region Event Handlers public override bool ShouldStartLoad(UIWebView webView, NSUrlRequest request, UIWebViewNavigationType navigationType) { //Could add stuff here to redirect the user before the page loads, if you wanted to redirect the user you would do the redirection and then return false return true; } public override void LoadFailed(UIWebView webView, NSError error) { Console.WriteLine("\nIn AppName.iOS.CustomWebViewRenderer - Error: {0}\n", error.ToString()); //TODO: Do something more useful here //Here, you can test to see what kind of error you are getting and act accordingly, either by showing an alert or rendering your own custom error page using basic HTML } public override void LoadingFinished(UIWebView webView) { //You could do stuff here such as collection cookies form the WebView } #endregion } Let me know if you would like to see the Android code as well. Answers Hi @JohnHardman, Did you try to use a try-catch on your code snippet which loads a WebView? When the WebView has timed out, the framework throws an exception. Hi @jefnazario I don't have a try/catch around the WebView at the moment (I should do, oops...), but no exception is being thrown on the occasions when the WebView never populates with the expected data. How do you specify the timeout period? Thanks, John H. Hello @JohnHardman, I didn't need, yet, specify the timeout from webView request, so, I'm sorry, I can't help you on this. About the try-catch, when I used WebView the try-catch instruction worked fine to me. Check if the website which you're making the request is up or down, maybe the request was timed out only in the website page, and not on WebView request. Let me know if that solve your problem. Best regards. Hello @jefnazario The web-site that I have been using for testing is one that is very slow to start up if it hasn't been accessed recently. It can take a good 20 seconds or more. On subsequent accesses, it is almost instantaneous (as far as a user is concerned). I haven't monitored the network traffic yet to see exactly what is happening, but occasionally I end up with a WebView that is unpopulated, which I would not have expected. It may be that the slow web-site start-up messes with any timeout handling internal to the WebView. If the web-site has been accessed recently, the WebView works fine, which makes me think something is awry with the timeout (or other error) handling. If there is no explicit method of specifying timeouts etc, I guess I will have to sniff the network traffic. I'm going to be studying for a university exam for the next couple of weeks, so will check the traffic when I get back to app development after that. Thanks again, John H. @JohnHardman If you are using IIS then it is most definitely one of the issues talked about here: In regards to the WebView, if you make a custom renderer, you can override the LoadFailed()method to find out what kind of errors are happening and decide what to do about them. For instance, I check to see if an error comes in talking about "internet connection appears to be offline". If I find that, I send a notification through MessagingCenterto display an alert. @hvaughan - Many thanks for your response. Not IIS I'm afraid. Regarding LoadFailed(), could you share how you have done this please? I've had a look at custom WebView renderers for Android and iOS, but haven't spotted where the LoadFailed() override would go. @JohnHardman For sure. For iOS, you would create a custom renderer and inherit from WebViewRenderer. Then within that, you would assign a custom WebView Delegate to the custom renderer's Delegateproperty. Then you can override LoadFailed()from with your custom WebView Delegate like so: Let me know if you would like to see the Android code as well. @hvaughan - Apologies for the delay in replying, and many thanks for the iOS code. It works a treat - for the time being, I've added a few lines to create an HTML message in case of error that is then displayed in the WebView. I need to pretty up the HTML, but it's functional. Yes please, if you have an Android equivalent, could you post that as well please? Many thanks again, John H. @JohnHardman No problem and sure thing. Depending on the version of Xamarin Android your solution is running, you may or may not need that second OnReceivedErrormethod (see the summary comments for an explanation), so I will comment it out and will also comment out the [Obsolete]annotation (which may not even be necessary with the latest Xamarin Android version, but ReSharper suggested it). So if you build you project and it complains about a deprecated OnReceivedErrormethod, then you can uncomment the method: There a bunch of Android WebView specific settings having to do with JavaScript, Zooming, cookies, back buttons, etc. that I left out but let me know if you would like to see those too. Thanks for the solution. I was actually looking for the Android implementation and your solution worked perfectly. I must admit, i'm not very knowledgeable when it comes to custom renderers therefore resolving the missing 'using' statements was a bit of a challenge. And i was actually working on hope because first of all, i had to change 'FormsWebView' to 'WebView', something i wasn't sure about. Then there is the 'class FormsWebViewClient : WebViewClient' where the magic lives. I don't know what this class is, but your comments helped point me to the 'region' of interest (i.e. '#region Event Handlers') where i modified the code to handle errors. Could you explain what 'FormsWebViewClient' is.
https://forums.xamarin.com/discussion/comment/350740
CC-MAIN-2020-50
refinedweb
1,151
61.16
User Name: Published: 07 Jan 2008 By: Mehfuz Hossain Download Sample Code Mehfuz shows how to create a custom LINQ provider using the open source project LINQExtender. In my previous article - LINQ provider basics - I have explained how LINQ to Entity work. I used examples mostly from my LINQ.Flickr project. Although creating a provider is fun, there are some repetitive tasks along the way, like expression processing and data extraction. Therefore things could be much easier with a common framework that takes care of complexes and monotonous tasks, while developers are presented with a simple model, by which they can get going with their providers without any expression overhead. LinqExtender exposes such model, which lets the developer focus only on the application logic - not on the query internals - while creating custom home made providers. It sits between the core LINQ framework and a custom provider. The stack looks similar to the one shown in figure 1. Creating a custom provider on LinqExtender is easier than anything else on the planet. As I said, it takes care of all the complex expression processing. All it takes is some overridden methods, where appropriate logics need to be placed. LinqExtender is not only for providers that work on an external service. It can also be used to create home-made LINQToSql providers. Though the original LINQToSql provider that comes with the core LINQ framework is enough to start with, it is desirable to create a custom LINQToSql provider that performs the simplest task to serve the purpose. Anyway, in the LinqExtender project I have created one provider called OpenLinqToSql, which I use to exercise the extender itself. Though simple, it lets you insert, update, and query objects and even provides server side paging with CTE (Common Table Expression). Therefore, to give a better example, I will demonstrate how OpenLinqToSql was made on top of LinqExtender. OpenLinqToSql extends the capability of LinqExtender and it supports both standard and compact (SQLCE) database. I would also like to add that the purpose of this article is to get acquainted with LinqExtender, not to create a LinqToSql provider. But the example was chosen to show the usefulness of the extender while creating a data intensive provider. The typical stack diagram of OpenLinqToSql is similar to the following. First let's create a query object on which the query will be performed. You create it by inheriting from QueryObjectBase and overriding the IsNew property, which is used to track if the object is newly added to the collection or not, and to track the object during the add/update of the query object. As we are creating a new LinqToSql provider, it needs to be able to contain any query object. Therefore, it must be of type T. As our provider is named SqlQuery, a general way of declaring it is like so: QueryObjectBase IsNew T SqlQuery In this case, Book will be a query object that is the replica of a database entity. In the future, I will make a tool that will generate the objects representing the entities in the database, though hand coding the entity is not that difficult as well. For the time being, let's do it in this way. The Book entity has ID, Author, Title, LastUpdated, ISBN columns in database and the object representation looks like so: Note that the LinqVisibleAttribute is in the LinqExtender.Attribute namespace. This is to enable a property to be able to do processing in LinqExtender. Also, it has a public property called UseInQuery, which by default it is set to true. In any case, if we don't want to include a property in a query, we can turn it off like in the following code snippet. In this case, it will still be visible by the extender. LinqVisibleAttribute LinqExtender.Attribute UseInQuery true IdentityAttriubute is defined in OpenLinqToSql, and inherits from LinqExtender.Attribute.UniqueAttribute. It is declared in the following way: IdentityAttriubute LinqExtender.Attribute.UniqueAttribute Finally, to differentiate valued and non-valued field, that all the non-string property in the Book class are defined as Nullable. Later I will show why. Nullable Before moving to the details of the SqlQuery class, I need to mention that creating a Query provider with LinqExtender requires three simple steps: Note that all of these methods are protected, which means that they are called only by the extender framework to process a request. Now, let's dig into the provider. First, the SqlQuery class is created. As I said earlier, the query object for the SQL provider is underministic, which means that I can port it to different tables in database. Therefore, unlike the external API (Flickr), for which I know the possible objects to query on or get result from - it is not the case for user defined database objects. Therefore, the Query provider declaration is slightly different from that of known object types: Let's move to the overriding of the Query methods. According to step 3, we first override the Process method. In its body, we will generate the SQL query, based on the value that is passed with the bucket object (which is filled by the extender against the query expression). Then, we run the query against the database with a DataContext class. Finally, we build the T object and add it to the IModiy<T> items collection. IModiy<T> First, we need to know if any order by clause is provided in the query. If not, then we will perform an order by on a unique field. (This is a requirement when we are building a WITH statement that has the Over clause, but not for normal select statement). IsAscending false FieldName In the code, bucket.UniqueItems returns the array of property names on which UniqueAttribute or a child class is used. bucket.UniqueItems UniqueAttribute Next, we have to build the select query, based on the expression items. Let's examine the portion of the if block that will generate the SQL with a WITH statement if Take is provided in the query block. Note that when Skip > 0 but no take is provided, we need to show an exception as well (as currently it is not supported). There we need to provide the following exception: Skip > 0 Now, bucket.ItemsToTake is null if there is no Take in the query or less it will have numeric value. Note that I have declared itemToTake as Nullable, so that a user can distinguish between valued and not valued state of the property. bucket.ItemsToTake Take itemToTake The whole if-else logic for processing SQL looks like the following: 1. Building the Select statement using WITH A typical WITH clause generated by this logic looks like so: The first task is to get the list of property names for the T type (e.g. Book). One way to do it is by using Reflection to extract the names out of it. An easier way is to use bucket.Items to get the names, which basically is a IDictionary<string,BucketItem>, where string is the name of the property and BucketItem contains the extended information about the property and how it is used in the query expression. Book bucket.Items IDictionary<string,BucketItem string BucketItem Getting the property names the easy way is done like in the following code: Then, we properly format the string with the fields that we just got. Earliar, we have built the orderByBuilder StringBuilder, which we used here to create the OVER clause. Here, two things are possible: If any orderby is used in query, then do the orderby using the mentioned property or object value. Otherwise, by the default, use the unique property of the object. orderByBuilder Next, we have to append the entity that the query targets. Here, bucket.Name will give name of the object - or the user-defined name if OriginalNameAttribute is provided - that maps to the entity name. bucket.Name OriginalNameAttribute Finally, we have to build the WHERE clause and append the final stuff of WITH. That is, select between items. Here, CreateWhereClauseIfPossible is used to build the Where clause, which internally calls BuildClause. It basically builds the clause based on the query expressions. For that I haven't used any black arts; just iterated over Bucket.Items in the following way: CreateWhereClauseIfPossible BuildClause Bucket.Items Here, bucket.Items[propertyKey].Name is the name of the property or user-defined name. Earlier in the article I have talked about declaring the properties of a query object other than string as Nullable. This is where it comes useful with if (bucket.Items[propertyKey].Value != null), to check if the property is used in a query expression. Finally, bucket.Items[propertyKey].ReleationType contains the enum operator (Equal, LessThan,etc.) that is used against the property for filling the values in the where clause of the expression. GetEquavalentSqlOperator contains some switch statements that return SQL string operator values based on the enum type. bucket.Items[propertyKey].Name if (bucket.Items[propertyKey].Value != null bucket.Items[propertyKey].ReleationType enum Equal LessThan GetEquavalentSqlOperator switch 2. General Select Statement This is pretty simple, in contrast to the WITH statement construction. 3. Run query and Fill IModify collection. This is done through the following call: Inside the method we create a db context and then we execute the query: db For each row, we create a new T type object and call its FillProperty to populate each property. Here bItems.Keys (Bucket.Items.Keys) gives a list of property names and bItems[key].Name (Bucket.Items[key].Name) either gives the property name or user defined name representing the entity column(by OriginalNameAttribute). FillProperty bItems.Keys (Bucket.Items.Keys) bItems[key].Name Bucket.Items[key].Name The final task is to override the 3.b -> AddItem and 3.c - > RemovItem. These are pretty simple as the fetched object is passed by user through context.Add and context.Remove calls. All is needed is to generate the Insert and Delete statement based on T property values. 3.b -> AddItem 3.c - RemovItem context.Add context.Remove For the Insert statement, the code looks like: For the Delete statement, the code looks like: Going back to BuildClause, we checked whether the query was an insert or not. We did it because the same routine is reused for insert, delete and select, which generates slightly different SQL for insert statements. That's it, we are ready to roll. The DataContext class requires a simple config entry, so in the app/web.config file we need to have the following lines: For parsing the configuration, I have created an OpenLinqDataProviderConfiguration class, which loads the settings in the constructor of DataContext, which I left to you to explore. We have created a sort of custom LinqToSql provider, without using any expression processing and Reflection. We also showed how LinqExtender proves to be useful in this case. It can be used for external source based providers (e.g. Flickr) in the same way. You can take a live preview of that at. Also, don't forget to check out to download OpenLinqToSql for a more in-depth look of the LinqExtender in action.
http://dotnetslackers.com/articles/csharp/CreatingCustomLINQProviderUsingLinqExtender.aspx
CC-MAIN-2017-26
refinedweb
1,865
63.19
There’s a lot more to most websites than meets the eye these days, and I thought an interesting Python project to take on at the start of my Christmas break would be to uncover the extra requests hidden below the surface with some help from tcpdump. The challenge? Tcpdump’s output is a huge list of the packets it captures, each one loaded with so much information that scanning the raw output for the domains and URLs that give away sites’ activity is near impossible. Luckily, Python has a few tricks up its sleeve that can help to filter out the data we need in a more readable format. Setup and running tcpdump As usual, the first step is to get the basics set up. This means importing the os module and using it to run the tcpdump command that will give us some data to play with. import os os.system(‘sudo tcpdump > tcpdump_data’) I’ve gone for the vanilla version, but you could jazz this up by specifying ports or other network interfaces if needed. Everything that’s captured will be stored in a file called tcpdump_data, ready for us to start picking out the interesting bits. Now the tcpdump process is complete and the data exists, it’s time to load it with Python (the “r” allows us to read the file) and define the keywords we’re looking for. tcpdump_data = open(“tcpdump_data”, “r”) keyword1 = ‘HTTP: GET’ keyword2 = ‘AAA?’ I’m interested in seeing which websites have been browsed, so I want to pick out two key bits of data: DNS requests (preceded by “AAA?”), which will show the domains visited, and HTTP GET requests (preceded by “HTTP: GET”), which will show the individual directories and pages browsed, albeit only for sites that use HTTP and not HTTPS. Defining some pretty colours Although you could do fancier things with this kind of script, in my example both of these pieces of data will be displayed in one feed, so it’d be nice to show them in different colours to make them a bit easier to pick out. I’ll use purple and yellow. class colours: PURPLE = ‘\033[95m’ YELLOW = ‘\033[93m’ I had to do a bit of Googling to work out how to do this. As usual, it was Stack Overflow that came up with solution – in this case one that uses a simple class to define colours. Parsing the tcpdump data Now it’s time to do the heavy work – parsing the tcpdump data we produced and opened earlier and picking out the domains and GET request paths that we come after our keywords. for line in tcpdump_data: if keyword1 in line: data = line.split(“GET “, 1)[1] print colours.PURPLE + data, if keyword2 in line: data = line.split(“AAA? “, 1)[1] print colours.YELLOW + data, Here we iterate through the data line by line, looking for occurrences of “HTTP: GET” and “AAA?”. If one of these keywords exists in a line, the words after it (i.e. the domains and URL paths) will be printed to the terminal in some nice colours. Output The result is a list that gives us an interesting look at the requests that go on in the background when we visit websites – take the BBC News site, for example: There’s plenty more potential here, too. These data sets could be written to separate files, for example, or stored in a way that more clearly shows the association between each entry. But my main objective was to see if I could scrape the relevant data and make it easier to read domains and URLs from tcpdump, and I’m happy with the result. Chris Dlugosz (CC BY 2.0). Cropped.
http://mattcasmith.net/2017/12/18/grabbing-domains-and-urls-from-tcpdump-data-using-python/
CC-MAIN-2019-35
refinedweb
623
66.78
What's the best way to tell from a shell script if the host has restarted since the last time the script was run? I can think of storing the uptime in a file on each run, and checking if it has decreased, but that doesn't seem completely robust (a server might restart start quickly, store a low uptime, then reboot slowly and come up with a higher uptime). Is there something like a "started at" value which would be guaranteed to change only on a reboot? Or some other good way of detecting a restart? If you don't want to use cron, you could also create a directory in /dev/shm. Since that location is in memory, it will always be empty when the computer starts up. If it's not empty, you haven't rebooted. /dev/shm Use the @reboot directive in /etc/crontab to create the directory when the system starts. @reboot Another approach would be to touch a file somewhere in the startup scripts and search for that in the script, the script could then delete once it's run the first time. Just be careful you deal with runtime level changes and so forth nicely. sar command will help you in this. sar The sysinfo() OS call gives you the time since the last reboot. This is where uptime gets it data before performing some formatting - its simpler to write you're own wrapper around this than parsing the output of uptime / working out how to read the start time of init (pid 1). #include <stdio.h> #include <time.h> #include <errno.h> int main(int argc, char **argv) { struct sysinfo info; int status; status=sysinfo(&info); if (status==0) { printf("%d\n", info.uptime); } else { fprintf(STDERR,"error %d occurred\n", errno); } exit (status); } You'll also need to touch a file from your script and poll its mtime (using stat(1) from your script or stat(2) from C) #!/bin/bash # # usage: $(basename $0) FILE # returns TRUE if the file has been run since last boot # [[ $# -ne 1 ]] && { grep 'usage' "$0"; exit 255; } # access time of script/program atime="$(stat -c %X "$1")" # Last boot as unix timestamp btime="$(date -d "$(who -b | awk '{ print $3, $4 }')" +%s)" if [[ $atime -gt $btime ]]; then exit 0 # true else exit 1 # false fi This will virtually get you there, this will check if a file has been accessed since the last boot. #!/bin/bash unixTimestamp () { date --utc --date "$1" +%s } if [ "$1" == "" ];then echo "You must specify a filename (e.g. './lastRun.sh foobar.sh')" exit 255 fi FILENAME=$1 if [ -f ${FILENAME} ]; then if [ -n ${FILENAME} ]; then LASTRUN=`ls -laru ${FILENAME}|cut -f6,7,8 -d' '|sed 's/'${FILENAME}'//g'|sed 's/^ *//;s/ *$//'` LASTBOOT=`who -b|sed 's/.*system boot//g'|sed 's/^ *//;s/ *$//'` echo "Last Run: '${LASTRUN}'" echo "Last Boot: '${LASTBOOT}'" if [ "$(unixTimestamp ${LASTRUN})" -lt "$(unixTimestamp ${LASTBOOT})" ]; then echo "${FILENAME} has not been run since last boot." exit 1 else echo "${FILENAME} has been run since last boot" exit 0 fi fi else echo "Unable to find file '${FILENAME}'..." exit 255 fi If you have run the file it will have been accessed, but it also may have been edited by somebody and not run, or maybe touched. EDIT A bit of explanation: LASTRUN=ls -laru ${FILENAME}|cut -f6,7,8 -d' '|sed 's/'${FILENAME}'//g'|sed 's/^ *//;s/ *$//' ls -laru ${FILENAME}|cut -f6,7,8 -d' '|sed 's/'${FILENAME}'//g'|sed 's/^ *//;s/ *$//' For the above command: ls -laru ${FILENAME}|cut -f6,7,8 -d' ' # Will take the filename and display the last access time |sed 's/'${FILENAME}'//g' # Will strip the filename out to leave you with the date and time |sed 's/^ *//;s/ *$//' # Will strip out whitespace before and after the date/time This will leave you a date/time ready to convert to a unix timestamp LASTBOOT=who -b|sed 's/.*system boot//g'|sed 's/^ *//;s/ *$//' who -b|sed 's/.*system boot//g'|sed 's/^ *//;s/ *$//' who -b # Will show the time the system last booted |sed 's/.*system boot//g' # Will strip out the text "system boot" to leave you with a date/time |sed 's/^ *//;s/ *$//' # Will strip out whitespace before and after the date/time the unixTimestamp function takes the Date/TIME variable and converts it to a unix timestamp and then it's just a case of seeing which is the higher number. The higher number is the most recent, so we can work out if the last thing that happened was a reboot or a file access. By posting your answer, you agree to the privacy policy and terms of service. asked 3 years ago viewed 1510 times active
http://serverfault.com/questions/255911/detecting-restart-since-last-run-in-a-shell-script-on-linux
CC-MAIN-2015-11
refinedweb
788
73
[nant-dev] NAnt pre- and post-build events within the solution task What does everyone think of adding native NAnt pre- and post-build events to the solution task. These tasks would be independent of the existing project pre- and post-build events. Ideally, we would make the current pre- and post-build expansion macro variables available via properties: My [nant-dev] Re: latest nightly build Taking a look at the build file for the tests, it looks like the MSNet tests DLL doesn't include any files but the ../../src/CommonAssemblyInfo.cs. It's possible that the fileset changes broke parent directory traversal. Gert Driesen wrote: Hi Martin, I'll look into that error later. For now, [nant-dev] Major performance enhancements landed With the help of Gert Dreisen, we've landed some major file-scanning performance changes. You'll find them in any nightly builds from today onward. I'm seeing major CPU usage drops as well as improvements in scanning many filesets. I can now run a full rebuild on an unchanged [nant-dev] Re: temp leekage with and without [EMAIL PROTECTED] running and it seemed to make no differance. I've had no Re: [nant-dev] Re: temp leekage it's finished. Only the directory itself remains. /Nicke Matthew Mastracci wrote: [nant-dev] Re: fileset/directoryscanner hang I think we decided to just document this side-effect. My memory fails me here ... But if that is what we decided, then I guess we should indeed document it in both the filset doc and in the release notes .. Sounds fair. The reason for keeping this was that it was more consistent with patterns Re: solution task GacCache / AppDomain construction (was : Re: Re: Re: [nant-dev] Bugs) -developers = = = = = = = = = = = = = = = = = = = = Your sincerely: Jackfan [EMAIL PROTECTED] 2004-09-04 = = = = = = = = = = = = = = = = = = = = Your sincerely: Jackfan [EMAIL PROTECTED] 2004-09-04 -- Matthew Mastracci [EMAIL PROTECTED Re: solution task GacCache / AppDomain construction (was : Re: Re: Re: [nant-dev] Bugs) of previous runs cached, this shouldn't have an appreciable impact on performance. Matt. On Sat, 2004-09-04 at 09:04, Gert Driesen wrote: - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Gert Driesen [EMAIL PROTECTED] Cc: Jackfan [EMAIL PROTECTED]; Nant-Developers (E-Mail [nant-dev] Re: Two broken testcases - edge case question , -- Troy -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Matthew Mastracci Sent: Thursday, 8 July 2004 5:34 AM To: Nant-Developers (E-mail) Subject: [nant-dev] Two broken testcases - edge case question It looks like the regex optimization broke an edge case Re: Fw: [nant-dev] current cvs build failure : ----- Original Message - From: "Ian MacLean" [EMAIL PROTECTED] To: "Matthew Mastracci" [EMAIL PROTECTED] Cc: "Gert Driesen" [EMAIL PROTECTED] Sent: Friday, July 09, 2004 4:12 AM Subject: Re: Fw: [nant-dev] current cvs build failure Matthew Re: Fw: [nant-dev] current cvs build failure se directory is set to each of the path elements and searched as above. If a pattern is rooted, it is tested against the entire path string. If the full pattern path happens to correspond to the base directory, it may apply to files beneath the base directory. We should probably document t Re: Fw: [nant-dev] current cvs build failure On Thu, 2004-07-08 at 20:12, Ian MacLean wrote: Matthew Mastracci wrote: Strange - I had the same issue while I was developing (turned out that the DirectoryScanner was busted), but the fix for that was checked in. Where was the fix ? Maybe its manifesting itself in another way on my [nant-dev] Some profiling hot-spots to optimize Here's a few hot-spots that someone could probably pick off pretty easy. All percentages are relative to total time. This is from running a solution task based build. I haven't tried a csc-based build yet. ProjectSettingsLoader::ProcessPlatform - 5.6% +- [nant-dev] Re: Fileset scanning speed-ups in CVS 3% of the total runtime of NAnt during my test build - effectively dwarfed by most other operations. Matt. Matthew Mastracci wrote: Just as an extra note- Before the change, there were 13 regex comparisons in one of our projects. The change reduces this to 128000 string comparisons (taking [nant-dev] Two broken testcases - edge case question It looks like the regex optimization broke an edge case: **/* now matches the base directory, as well as any subdirectories on a FileSet.DirectoryNames call. For instance, in the following directory structure, all three will be matched with a base directory of C:\foo: C:\foo C:\foo\bar Re: [nant-dev] Re: ResGen assembly references? Can anyone give me a hint on how to create these assembly references in 1.0/1.1 via VS.NET? I haven't seen them before. Gert Driesen wrote: - Original Message - From: Ian MacLean [EMAIL PROTECTED] To: Gert Driesen [EMAIL PROTECTED] Cc: Matthew Mastracci [EMAIL PROTECTED]; Nant [nant-dev] Re: [nant-commits] CVS: nant/src/NAnt.Console NAnt.Console.exe.config,1.58,1.59 ; } text3 = assembly1.GetName().Name; LicenseCompiler.assemHash.Add(text3, assembly1); if (text3 != text1) { continue; } return assembly1; } ... Gert - Original Message - From: Matthew Mastracci [EMAIL [nant-dev] Re: ResGen assembly references? Gert Driesen wrote: Can't we do a quick parse of the resx sources to see if they require those references ? A large number of cases do not require any references at all. Yeah we probably could ... Just didn't/don't have time for that, and didn't see much need for it at that point ... I had to [nant-dev] Re: [nant-commits] CVS: nant/src/NAnt.Console NAnt.Console.exe.config,1.58,1.59 to call lc.exe multiple times. We should probably be documenting some of these quirks somewhere. I usually end up running into something that looks like a bug, but turns out to be by design. Matt. Matthew Mastracci wrote: Gert Driesen wrote: Matthew, The .NET 1.1 lc.exe indeed has a command line [nant-dev] Projects and current working directory Something that changed in the last six months or so (perhaps longer?) is that the projects no longer set their working directory equal to the base directory of the project itself. I'm proposing the following patch to Project.cs to fix this. Since I don't know if anyone relies on the current, [nant-dev] Fileset scanning speed-ups in CVS I just checked in a change to fileset scanning that eliminates a large chunk of time (checked via profiling) during many of the common NAnt operations. One of the biggest losers on the profiling run was Regex.Match(), called many, many times during a build. The new code replaces a good Re: [nant-dev] Fileset scanning speed-ups in CVS for a general solution build to match (or exceed) VS.NET's compilation speed wherever possible. Matt. - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] I just checked in a change to fileset scanning that eliminates a large chunk of time (checked via profiling) during many [nant-dev] Re: Fileset scanning speed-ups in CVS of the built-in ones like **/CVS/**, etc. If we cache the compiled Matt. Matthew Mastracci wrote: I just checked in a change to fileset scanning that eliminates a large chunk of time (checked via profiling) during many of the common NAnt operations. One of the biggest losers on the profiling run [nant-dev] wix task for NAnt :) or, when Microsoft went open-source. Looks like Microsoft has finally jumped on the true open source bandwagon with the release of WiX - an XML to MSI compiler. It's unfortunately not under a GPL-compatible license (preventing direct linking), but the command-line interface would work just Re: [nant-dev] Re: Solution task fixes + speedups Gert Driesen wrote: So anyways, I finally understand that you are correct - VS.NET does check AssemblyFolders before HintPath. IMHO this is a strange way to do things. VS.NET doesn't make it easy to easily reproduce a build between two machines, does it? No, definitely not. In my opinion, [nant-dev] CopyLocal - once and for all :) It seems like our CopyLocal logic in the solution task doesn't seem to match VS.NET 2003. Did this change between 2002/2003? As far as I can see, if the CopyLocal flag is not specified in 2003, it should be treated as false. Does anyone know for sure what this is in 2002? Matt. Re: [nant-dev] CopyLocal - once and for all :) Gert Driesen wrote: Matthew, The CopyLocal behaviour depends on whether the assembly is a system assembly or not (and some other criteria too perhaps). I'm pretty sure the current implement matches the behaviour of VS.NET 2003, but I might be wrong ofcourse. Do you have a example of where the [nant-dev] Re: NAnt pedantic mode Gert Driesen wrote: You mean an attribute that didn't exist ? Properties that don't exist cause a build error already ... Yep - that's what I was thinking. But I agree that we should indeed have this mode (or just always run NAnt in this mode, what do you propose) ... In what cases should NAnt Re: [nant-dev] Re: Solution task fixes + speedups options) to locate assemblies. (feel free to add support for this to the solution task :-)) Gert - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Gert Driesen [EMAIL PROTECTED] Cc: Nant-Developers (E-mail) [EMAIL PROTECTED] Sent: Tuesday, March 16, 2004 11:35 PM Subject Re: [nant-dev] Re: Solution task fixes + speedups Gert Driesen wrote: I can also guarantee 100% that VS.NET (2003) is only using the hintpath as a last resort ;) I've reverted the change in CVS. Thanks for the explanation :) Matt. --- This SF.Net email is sponsored by: IBM Linux Tutorials [nant-dev] Solution task fixes + speedups I've spent a bit more time speeding up the solution task and fixing differences from VS.NET. Here's a short summary of what to expect with the latest CVS: 1. We only create one AppDomain per solution build and per project build. We were creating dozens of AppDomains per project build [nant-dev] Re: Change to call task makes upgrade difficult James C. Papp wrote: This was also my rational of not just adding a flag to call/. The depends/ task is just a dynamic form of the target/ task's depends attribute..., in all other respect their ultimate functionality would be identical. The depends/ task was not meant to be used as a way to [nant-dev] NAnt pedantic mode I've run into a number of build-script bugs today that are related to NAnt task properties changing/disappearing/obsoleting/etc. What does everyone think of a command-line flag to put NAnt into pedantic mode? This would throw an error if any build task tried to use a property that didn't Re: [nant-dev] Re: Remove support for WebDAV from solution task ? wrote: Matthew, can you tell me what makes our WebDAV access to IIS so much more troublesome, than what VS.NET uses ? Thanks, Gert - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Nant-Developers (E-mail) [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Tuesday [nant-dev] Re: Wild targets/Current target As a curiousity, wouldn't you be able to use a regular target and properties to define what to call? For instance, your gateway project can just do this: target name=process description=Builds recursively all subprojects foreach item=Folder property=foldername [nant-dev] Re: Wild targets/Current target Giuseppe Greco wrote: As a curiousity, wouldn't you be able to use a regular target and properties to define what to call? For instance, your gateway project can just do this: Yes, but this is exactly what I'm trying to avoid... When a master build file contains more than 3 or 4 targets to [nant-dev] Re: touch task bug I've had a fix for this one locally for a while - just checked it in. Gert Driesen wrote: Matt, Can you please submit a bug report for this ? Thanks, Gert - Original Message - *From:* Steele, Matt mailto:[EMAIL PROTECTED] *To:* '[EMAIL PROTECTED]' mailto:'[EMAIL Re: [nant-dev] Re: touch task bug Sure... shouldn't be too tough. I'll just create a testcase that runs the touch task twice in a row on the same file. Gert Driesen wrote: - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Gert Driesen [EMAIL PROTECTED]; Nant-Developers (E-mail) [EMAIL PROTECTED] Sent [nant-dev] Nightly task docs broken Looks like the nightly task docs broke at some point- You don't have permission to access /nightly/help/index.html on this server. Permissions issue with SCP? Matt. --- The SF.Net email is sponsored [nant-dev] Re: SolutionTask Guids should be unique. The only way to get this is by copying a project and forgetting to change it. VS.NET doesn't warn you of this, but behaves badly in subtle ways. Ian MacLean wrote: Is this the right patch ? if it is possible to have two valid projects with an identical guid should we [nant-dev] Re: new xml type For tasks (such as the solution task), what about having C# classes in NAnt that map to XML itself? They could appear as XML to xmlpeek/xmlpoke/xmlforeach, but would be backed by C# classes internally. This would make it far simpler to return values from tasks. Martin Aliger wrote: Hi all, [nant-dev] [Fwd: [Nprof-developers] Re: solution task] Original Message Subject: [Nprof-developers] Re: solution task Date: Tue, 06 Jan 2004 11:09:22 -0700 From: Matthew Mastracci [EMAIL PROTECTED] To: Martin Aliger [EMAIL PROTECTED], [EMAIL PROTECTED] References: [EMAIL PROTECTED] Sounds good to me too - just ensure that you've [nant-dev] Re: new xml type tasks) and adding context info so tasks know about following task, and preceding tasks, sounds interesting. But I'm not sure how this ties in here. What would the user experience be? What would the syntax be? - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] For tasks [nant-dev] Re: solution task -0700 From: Matthew Mastracci [EMAIL PROTECTED] To: Martin Aliger [EMAIL PROTECTED], [EMAIL PROTECTED] References: [EMAIL PROTECTED] Sounds good to me too - just ensure that you've got compilerargs specified by language. There's a number of flags that are valid for a single compiler only. Martin Aliger [nant-dev] Re: Will there be a nAnt-0.84rc2 and final before the end of the year? Gert - are we branched for 0.84? I have some minor checkins I'm itching to get in for the 0.85. I can also look at some solution cleanups over my vacation time in the next few weeks. Gert Driesen wrote: - Original Message - From: Scott Hernandez [EMAIL PROTECTED] To: Morris, Jason [nant-dev] Re: Adding XML support to foreach or new task / could also be updated but this probably falls out of the scope of that. This is pretty specialized, and will need to be very specific to xml. - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Nant-Developers (E-mail) [EMAIL PROTECTED] Sent: Thursday, December 18, 2003 12:25 PM [nant-dev] basedir semantics change? Has somebody modified the project basedir semantics? The latest CVS version isn't working correctly for me. It's acting as if the basedir attribute wasn't specified. I'll try to find out the date that it was busted, but I'm curious if anyone remembers changing anything to do with this. This [nant-dev] basedir changes Well, with additional investigation, it appears that the problem is that something has broken the includesList element in my build file. My build files are structured like so: \build\scripts\nightly.build basedir=. \build\scripts\project1.build basedir=../.. It seems as if project1.build is [nant-dev] Basedir changes - possible problems So it looks like the recent basedir changes have revealed a long-standing issue w.r.t. assumptions about the current directory. It turns out that there are a few places where the tasks assume that the project's base directory is the same as the current directory. The two big ones I've found Re: [nant-dev] Re: project references problems in solution task Looks pretty good to me. Do you notice an impact on compile speed? I can try patching my local copy of NAnt and running it through our build torture test - 90+ projects with all sorts of inter-project and 3rd-party references. :) I may not have time to do this until next week, however. [nant-dev] Re: verbosity of some tasks That would be cool. It would be nice to move the reference code into a common place (the csc and vbc tasks) to avoid code duplication. Matt. Gert Driesen wrote: - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Nant-Developers (E-mail) [EMAIL PROTECTED] Sent [nant-dev] Re: verbosity of some tasks Sorry! That was a think-o on my part. I meant resource code. Should have read: It would be nice to move the *resource* code into a ... Been a tough week ;) Matt. Gert Driesen wrote: - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: Gert Driesen [EMAIL PROTECTED Re: [nant-dev] Re: project references problems in solution task (re-cc'ing the list) AppDomains allow you load/unload assemblies at will. The .NET runtime tends to exhibit some odd/unpredictable behaviour as you load assemblies with the same name, especially if they don't have strong names. You can end up with types no longer resolving as expected. By [nant-dev] Re: verbosity of some tasks At some point I'd like to use the multiple .resx input of resgen.exe - it would be nice to swallow output if no errors occur and just output: [resgen] Transformed 5 .resx files The solution builds would be a lot easier to read. Perhaps I can compress all of the reference copy operations into [nant-dev] Re: building nightbuilds What about a framework task that sets appropriate properties based on provided flags? This could allow us to hide a number of different tests to select appropriate frameworks: !-- Select the currently executing framework -- framework type=current / !-- Would these be useful at all? -- [nant-dev] Re: BSD license for NAnt? I don't know if there has been a consensus about the license change. The discussion kind of petered off after a while. I'm still supporting either LGPL or GPL with linking/plugins exception, however. Matt. Jaroslaw Kowalski wrote: Hi! According to releaseplan.html a Move to an Apache or [nant-dev] Re: FW: [Nant-users] solution stuff Make sure your COM dll is registered on the server that is building your project. Gerold Kathan wrote: hi - we are trying to get our solution to build by nAnt - actually we are not successful - there seems to be something missing - we reference a COM dll (BAWPublicV5) and solution task [nant-dev] Re: solution task and linked VS.Net files. Viehland, Kirk wrote: Nant developers, I am getting this error when I compile a VS.Net 2003 solution with an ProductInfo file that has been linked from a parent directory. Just as a side note - what is a Linked file within a solution? How is one created? [nant-dev] target run[after|before]=.../ I've been thinking about some of the target dependency functionality, and I was wondering what people thought about having some extra specification of target ordering. There's a couple of types in here, so I've broken them up. I'm interested in hearing opinions of what people think of these. [nant-dev] Re: echo proposal: sync w/ Ant Gert Driesen wrote: No problem for me, but I suggest holding off on this change until after the 0.8.4 release ... would that be ok for you ? Sure. I'll keep the changes in my tree until 0.8.4 is out the door. Matt. --- This SF.net email is [nant-dev] NAnt 0.8.4 release ASAP? It looks like tons of bug reports are coming from people using the 0.8.3 version of the solution task. Should we put out 0.8.4 ASAP? Any volunteers for the release? This might cut down on the repetitive bug reports. :) Matt. --- This [nant-dev] Editing .build files w/syntax hilighting in VS.NET I'm not sure if this was posted to these lists before, but this registry modification with allow you to edit .build file with nice XML syntax hiliting in VS.NET 2003. Windows Registry Editor Version 5.00 Re: [nant-dev] MSBuild #1 difference - the source availability! :) John Lam wrote: I've spent a fair amount of time recently with MSBuild, and have the following set of observations about its relationship to [N]Ant: --- This SF.net email is sponsored by: SF.net [nant-dev] solution task speedups I pointed nprof at NAnt to see if I could get the solution task to build a little quicker for those of us with extra-large solutions. The code in Reference.cs was re-loading projects over and over if project references were being used. This re-loading of projects consumed over 99% of the [nant-dev] [need review] Fix for assembly path in NUnit2 test runner I've found a bug in the NUnit2 test domain. It seems that it changes the current directory to be the directory of the testing assembly, but then tries to use the relative path to access the assembly. I haven't checked it in, because this behavour has changed at some point between 0.8.3 and Re: [nant-dev] Licensing I agree, though in [2] and [3] I believe that changes (if any) to the core NAnt code should be contributed back. Scott Hernandez wrote: All of these scenarios should be allowed, IMHO. - Original Message - From: Brant Carter [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Friday, October Re: [nant-dev] Licensing While replying to your note, I noticed the following on our license page: --- NAnt ships with a prebuilt version of NDoc. The NAnt license does not apply to these components located in the bin folder of the distribution. NDoc is licensed under the GNU Re: [nant-dev] Licensing I'm not sure that I agree with changing the license to a BSD or Apache-style license. The code I've contributed was for a GPL project - changing it now would be the same to me as a bait-and-switch scheme pulled by a company. NAnt works well as a GPL'd project. It's effectively a stand-alone Re: [nant-dev] RE: RE: [Fwd: Ready to tackle next release] ://lists.sourceforge.net/lists/listinfo/nant-developers -- Matthew Mastracci [EMAIL PROTECTED] --- This sf.net email is sponsored by:ThinkGeek Welcome to geek heaven. ___ nant-developers mailing Re: [nant-dev] RE: RE: [Fwd: Ready to tackle next release] . Thanks for the clarification, Matt. Ian MacLean wrote: Matthew Mastracci wrote: Please let me know if I'm totally out to lunch on this one- I'm guessing that this resx issue you are describing is a problem with the solution task. I think he's talking about Compilerbase.cs. The regular compiler Re: [nant-dev] solution task fix Unfortunately, VS.NET uses the name of the class as the generated resources filename. Changing this would break any designer-created code. BTW, as a friendly suggestion. :) I can't read your mail from Sept. 9 because it appears to be encoded in something other than text/plain. I get Re: RE : [nant-dev] solution task fix [mailto:[EMAIL PROTECTED] Sent: Wednesday, September 17, 2003 5:00 AM To: Matthew Mastracci; Vincent Labatut Cc: [EMAIL PROTECTED] Subject: Re: [nant-dev] solution task fix VS.NET uses the full name of the class (namespace.classnam) for the behind the scenes .resx files (those resource files Re: [nant-dev] license task broken scanner (I'm not sure, cause I have not much time to dive into this yet). If I give something like D:\licensed_components\*.dll as includes in the nested assembly, he tries to load dlls without the ending. Eg A file name assembly.dll gets only assembly . HTH, -sa -- Matthew Mastracci [EMAIL Re: [nant-dev] solution task addin Quick note- should the compiler be specified for each of these args? Not all args are supported by all compilers. Specifying an argument that a compiler does not support may break your build. I'd wager that a good number of places used mixed-language builds. BTW, thanks for all of the Re: [nant-dev] solution task addin I agree with Martin. Sometimes you need to override a build setting on your build machine. At our shop, we tag each build with a string version that doesn't fit Microsoft's idea of a version number. This produces a warning and, on assemblies that build with warning = error, a build failure! Re: [nant-dev] NUnit security Especially those people using Draco.NET to build Sourceforge projects. :) Martin Aliger wrote: Seems ok. It is not problem for me - just a general thought. Maybe we could add note about it into doc for NUnit{1,2} tasks. Could be problem for projects like Draco.NET or CruiseControl.NET Re: [nant-dev] [PATCH] New fileset option fromframework heaven. ___ nant-developers mailing list [EMAIL PROTECTED] -- Matthew Mastracci [EMAIL PROTECTED] --- This sf.net Re: [nant-dev] 0.83-rc3 AFAIK, there wasn't a problem with the hint path per se, but rather that it didn't have a way to override the hint path for people who store their 3rd-party DLLs in a different place per-developer. We're using the 0.8.3 solution task here. As long as the fix for the .resx resources RE: [nant-dev] solution task question ___ nant-developers mailing list [EMAIL PROTECTED] -- Matthew Mastracci [EMAIL PROTECTED] --- This sf.net email is sponsored by:ThinkGeek Welcome Re: [nant-dev] last suggestion - Solution task again assembly-folders includes name=c:\temp\build\dll/ /assembly-folders /solution Martin -- Matthew Mastracci [EMAIL PROTECTED] --- This sf.net email is sponsored by:ThinkGeek Welcome to geek heaven. http Re: [nant-dev] last suggestion - Solution task again Good find. I like your assembly-folders idea. Unfortunately, I'm quite busy at work and don't have any time for solution task development. It should be pretty straight-forward to implement if someone has an hour or so to space. Martin Aliger wrote: I did some exploration in this field and Re: [nant-dev] last suggestion - Solution task again I think I understand. Each developer has a bunch of reference directories that are set up in VS.NET, right? You're right - we would likely need to add a references tag to the solution task to handle this situation. Matt. Martin Aliger wrote: Odd... We simply check in .csproj files and Re: [nant-dev] last suggestion - Solution task again with the correct reference path for that workstation. brant ... From: Martin Aliger [EMAIL PROTECTED] To: Matthew Mastracci [EMAIL PROTECTED] CC: ! nant [EMAIL PROTECTED] Subject: Re: [nant-dev] last suggestion - Solution task again Date: Mon, 25 Aug 2003 10:52:59 +0200 I'd recommend Re: [nant-dev] solution task fixes Eddie - ignoring non-csproj and non-vbproj files looks good. We shouldn't do this without at least warning the user that we are ignoring their projects, however. Can you add a log message to this test? Eddie Tse wrote: Hi All, I've been experimenting with the solution task from CVS and had Re: [nant-dev] last suggestion - Solution task again I'd recommend against comparing only filenames. This will likely end up causing trouble down the line. VS.NET is certainly a mess when it comes to hint paths, but I've found that they are generally accurate. I don't even think it uses them half of the time. :) Can you describe your Re: [nant-dev] Avoid using WebDAV with solution task : Matthew Mastracci [EMAIL PROTECTED] To: J. Jason De Lorme [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Saturday, August 02, 2003 1:05 AM Subject: Re: [nant-dev] Avoid using WebDAV with solution task Just a note- there was recently a submission that added a webmap URL-filesystem mapping Re: [nant-dev] Contributing go about contributing all of this? Thanks - Tom Cabanski, President Objective Advantage, Inc. Phone: +1-281-348-2517x15 -- Matthew Mastracci [EMAIL PROTECTED Re: [nant-dev] [0.8.3] RC2 build failure party libraries (for multiple framework versions). Apparently you've committed the NAnt.VSNet.build from the main branch to the 0.8.3 branch. Gert - Original Message - From: Matthew Mastracci [EMAIL PROTECTED] To: John Barstow [EMAIL PROTECTED] Cc: Nant-Developers (E-mail) [EMAIL Re: [nant-dev] Solution/Project Parser Perhaps it might be better to design an .xsd and use the automatic XML deserialization routines. I've found that this way is much cleaner than the methods used in the current solution and slingshot tasks. Bernard Vander Beken wrote: Hello Yves, Have you looked at the parsing logic and Re: [nant-dev] Solution/Project Parser concept. And then it was Matthew Mastracci I believe (correct me if I'm wrong) that introduced the solution task into NAnt. Unfortunatly, I'm not quiet happy with its current implementation, because there's no clear separation between the solution/project as data (the content of the .sln and .*prj Re: [nant-dev] Solution/Project Parser ). Ofcourse, it all depends on your focal point, meaning if you don't use VS.NET, you probably couldn't care less about the solution concept. And then it was Matthew Mastracci I believe (correct me if I'm wrong) that introduced the solution task into NAnt. Unfortunatly, I'm not quiet happy with its Re: [nant-dev] Solution/Project Parser Does VS.NET save all of your files before doing a build, or do you have to manually save them? Bill Conroy wrote: My whole team does this all too. Here is the writeup I gave out a while back to the OT list on how to do this: Here are the steps I used to integrate[note I do not address keeping [nant-dev] FIXED: Major temp directory leak in solution task release candidate. Matthew Mastracci wrote: I've just realized that the temp directory leak in the solution task is likely slowing my build process down by an order of magnitude! Those who run checkin/nightly builds with this task may wish to consider an automated process to clear the temp Re: [nant-dev] Building solution works in VS.NET but not when usingNAnt solution task This is a bug. I'm pretty sure it's fixed in CVS. Use the lowercase version of your configuration (ie: release) instead of Release. [EMAIL PROTECTED] wrote: I have built some projects successfully and have made sure that the dependencies are build in order, however, when NAnt tries to Re: [nant-dev] Using TlbImp Without VS.NET Installed Install the framework SDK. This includes all of the utilities required to fully build a VS.NET solution, minus devenv.exe. Aaron Jensen wrote: Part of our .NET software requires use of a .NET wrapper around a COM object. As such, during a build (using the sln / taks), tlbimp.exe is called Re: [nant-dev] NAnt INTERNAL ERROR - Solution Task, VS.NET 2003 Do you have a release configuration in your MobileAdministratorPlugins project? Vitaly Livshits wrote: Hello, I am getting the following error when trying to to build my solution. Do you know what might be causing it? The .build file is attached. Thanks, Vitaly Livshits Senior Software [nant-dev] VB projects now supported I've also checked in support for reading resources of VB projects and generating the appropriate dependent resource name. This was supposed to go in RC1, but I managed to check it into the wrong CVS branch. Look for it in the final 0.8.3 release. Matt. [nant-dev] Solution task fixes checked in For those interested, some recent bugfixes checked in: - Generate the correct filename for .resx files without dependent files (ie: x\foo.resx in project some.namespace - some.namespace.x.foo.resources) - Convert the requested configuration to lowercase to match the rest of the solution Re: [nant-dev] [RC1-Bug] solution task and VB.NET web application Bizarre- I thought this was checked in before RC1. You might need to wait for the final release. Philippe Lavoie wrote: Im resending this with RC1 in the title so that John doesnt miss it J Im getting the following output which I though was solved by Matthew [solution] - Re: [nant-dev] [RC1-Bug] solution task and VB.NET web application I thought it was both, but I made a number of fixes and it might have slipped through my mental net. Will the final release be made against the HEAD or RC1? On Thu, 2003-07-17 at 19:25, Ian MacLean wrote: Matthew Mastracci wrote: Bizarre- I thought this was checked in before RC1. You might
https://www.mail-archive.com/search?l=nant-developers%40lists.sourceforge.net&q=from:%22Matthew+Mastracci%22&o=newest
CC-MAIN-2022-21
refinedweb
5,496
64
Group results by Component Owner Priority Reporter Resolution Severity Status Type descending Show under each result: Description Max items per page Reported by dvd@…, 6 years ago. Hi all, I'm puzzled by a strange behavior of yaml If you run the attached script you obtain an output like this: {'date': '732638', 'fields': {}, 'guid': '0000010f153544cf8a314808007f000000000001', 'expiration': None} {'date': '732638', 'fields': {'': {}, 'it': {'title': 'Hello World'}}, 'guid': '0000010f153544cf8a314808007f000000000001', 'expiration': None} *Please note the 'fields' value (I wrap the output to increase legibility)* the first line is printed inside this custom constructor function: def news_constructor(loader, node): nodes = loader.construct_mapping(node) print nodes return nodes the second line (the correct one) is the print of the return values of this function Can you help me or explain me this strange behavior? Reported by jean.philippe.mague _at_ gmail _d.o.t_ com, 6 years ago. when installing pysyck from syck-0.61+svn231+patches.tar.gz, I end up with a module with only the Node type and the load function (and several blah funtions). When I install it from PySyck-0.61.2.tar.gz (with syck previouly installed) every goes just fine. I use python 2.4 on a Ubuntu Edgy Reported by dukebody@…, 6 years ago.? By Edgewall Software. Visit the Trac open source project at
http://pyyaml.org/query?status=closed&max=3&page=3&col=id&col=resolution&col=summary&col=owner&col=reporter&desc=1&order=resolution&row=description
CC-MAIN-2013-20
refinedweb
216
65.73
Golden Master Testing: More Refactoring, More Understanding Free JavaScript Book! Write powerful, clean and maintainable JavaScript. RRP $11.95 After a tedious slog through views and controllers, we finally reached the part of refactoring that becomes an adventure: Grasping at straws. Second guessing everything. Obsessively running the tests. Nurturing the delicate hope that our new-found understanding is, indeed, insight and not just another failed assumption. In the previous article, we dissected the first line of the compute method, tracing that line back carefully and meticulously. We discovered that the charts are concerned with dates, most likely on some sort of weekly basis, for the past year. Next we will untangle the code that processes the raw data into the correct format for the chart to consume. Once Upon a Time module Stats class RunningTimeData include Rails.application.routes.url_helpers attr_reader :title, :dates, :key, :today, :y_legend, :y2_legend, :x_legend, :values, :links, :values_2, :x_labels, :y_max def initialize(title, timestamps, key, now) @title = title @dates = timestamps.map(&:to_date) @key = key @today = now.to_date end def compute @actions_running_per_week_array = convert_to_weeks_from_today_array # cut off chart at 52 weeks = one year @count = [52, total_weeks_of_data].min # convert to new array to hold max @cut_off elems + 1 for sum of actions after @cut_off @actions_running_time_array = cut_off_array_with_sum(@actions_running_per_week_array, @count) @max_actions = @actions_running_time_array.max # get percentage done cumulative @cumulative_percent_done = convert_to_cumulative_array(@actions_running_time_array, dates.count ) @url_labels = Array.new(@count){ |i| url_for(:controller => 'stats', :action => 'show_selected_actions_from_chart', :index => i, :id=> @key, :only_path => true) } @url_labels[@count]=url_for(:controller => 'stats', :action => 'show_selected_actions_from_chart', :index => @count, :id=> "#{@key}_end", :only_path => true) @time_labels = Array.new(@count){ |i| "#{i}-#{i+1}" } @time_labels[0] = "< 1" @time_labels[@count] = "> #{@count}" # Normalize all the variable names @y_legend = I18n.t('stats.running_time_legend.actions') @y2_legend = I18n.t('stats.running_time_legend.percentage') @x_legend = I18n end def total_weeks_of_data weeks_since(dates.last) end def weeks_since(date) days_since(date) / 7 end def days_since(date) (today - date).to_i end def convert_to_weeks_from_today_array convert_to_array(dates, total_weeks_of_data+1) {|date| [weeks_since(date)] } end # uses the supplied block to determine array of indexes in hash # the block should return an array of indexes each is added to the hash and summed def convert_to_array(records, upper_bound) a = Array.new(upper_bound, 0) records.each { |r| (yield r).each { |i| a[i] += 1 if a[i] } } a end # returns a new array containing all elems of array up to cut_off and # adds the sum of the rest of array to the last elem def cut_off_array_with_sum(array, cut_off) # +1 to hold sum of rest a = Array.new(cut_off+1){|i| array[i]||0} # add rest of array to last elem a[cut_off] += array.inject(:+) - a.inject(:+) return a end def convert_to_cumulative_array(array, max) # calculate fractions a = Array.new(array.size){|i| array[i]*100.0/max} # make cumulative 1.upto(array.size-1){ |i| a[i] += a[i-1] } return a end end end And so the story begins, with a variable named @actions_running_per_week_array. Actions running is something that we’ve seen before, and it can be translated into unfinished TODOs. The thing is, we don’t care that they’re TODOs. We extracted the created_at timestamps from the TODOs, coerced them into dates, and that’s all the chart is interested in. Simple datapoints. It also seems unnecessary to encode the type name into the name. def datapoints_per_week convert_to_weeks_from_today_array(dates, total_weeks_of_data+1) end convert_to_weeks_from_today_array takes two arguments, both of which are available in the entire class. There is no need to pass either of them. Taking them off leaves us with a spurious abstraction: def datapoints_per_week convert_to_weeks_from_today_array end Inlining the method results in a vague sense of déjà vu. def datapoints_per_week convert_to_array(dates, total_weeks_of_data + 1) {|date| [weeks_since(date)] } end The entire method body is a call to another method, which takes arguments that are available to the whole class. Inline it! Step back for a moment and think about the methods that we just inlined. It started out like this: def datapoints_per_week convert_to_weeks_from_today_array end def convert_to_weeks_from_today_array convert_to_array end def convert_to_array # several lines of complicated logic end If a method definition contains a single line, and that line is just another method call, then the two method names better be worth it. Contrast the above with the following method, which we defined in the course of the previous refactoring. def total_weeks_of_data weeks_since(dates.last) end In the latter example both names are meaningful. They’re at different levels of abstraction, and together they tell a good story. Discovering Datapoints Let’s poke at datapoints_per_week with our test_stuff test, to figure out how the data is structured. def test_stuff now = Time.utc(2014, 1, 2, 3, 4, 5) timestamps = [ now - 5.minutes, now - 1.day, now - 7.days, now - 8.days, now - 8.days, now - 30.days, ] stats = Stats::RunningTimeData.new("title", timestamps, "key", now) assert_equal "something", stats.datapoints_per_week end The spread of data here goes from just a few minutes ago, to a month ago. I’m particularly interested in figuring out what happens in a week without any data. The failure looks like this: Expected: "something" Actual: [2, 3, 0, 1] The index of the array represents the number of weeks ago, and the value is the number of datapoints that occurred that week. Here’s how the array of datapoints is produced: def datapoints_per_week a = Array.new(total_weeks_of_data+1, 0) dates.each {|date| [weeks_since(date)].each {|i| a[i] += 1 if a[i] } } a end Shield your eyes, children, we’ve got nested iteration! The inner loop iterates over an array of one element. That seems a bit pointless. All we need is an index: dates.each {|date| i = weeks_since(date) a[i] += 1 if a[i] } There is no way in which weeks_since will ever return nil or false, so we can delete the postfix if. While we’re at it, let’s rename a to frequencies. def datapoints_per_week frequencies = Array.new(total_weeks_of_data + 1, 0) dates.each {|date| frequencies[weeks_since(date)] += 1 } frequencies end The datapoints have been clustered by week. Next it looks like we’re going to truncate it. # cut off chart at 52 weeks = one year @count = [52, total_weeks_of_data].min The comment suggests that we only want to show data back to a certain date. That kind of makes sense, but then you might wonder why the SQL query didn’t simply limit the data based on some cutoff date. Perhaps there’s more to it. If we look at how @count is used, it gets even more confusing. # convert to new array to hold max @cut_off elems + 1 # for sum of actions after @cut_off @actions_running_time_array = cut_off_array_with_sum(datapoints_per_week, @count) Assuming that @cut_off actually means @count, this comment contradicts the previous one. Either it’s cutting the data off at a year, or it’s summing up the dates outside the cutoff as a single entry. I don’t see how it could be both. We need to poke at it to see what’s actually going on. def test_stuff chart = Stats::RunningTimeData.new("title", [], "key", Time.now) per_week = [1, 0, 3, 4, 5] assert_equal [], chart.cut_off_array_with_sum(per_week, 1) assert_equal [], chart.cut_off_array_with_sum(per_week, 2) assert_equal [], chart.cut_off_array_with_sum(per_week, 3) assert_equal [], chart.cut_off_array_with_sum(per_week, 4) assert_equal [], chart.cut_off_array_with_sum(per_week, 5) assert_equal [], chart.cut_off_array_with_sum(per_week, 6) end One at a time, the failures fill in the blanks. # datapoints_per_week = [1, 0, 3, 4, 5] [1, 12] # cut off at 1 [1, 0, 12] # cut off at 2 [1, 0, 3, 9] # cut off at 3 [1, 0, 3, 4, 5] # cut off at 4 [1, 0, 3, 4, 5, 0] # cut off at 5 [1, 0, 3, 4, 5, 0, 0] # cut off at 6 The cutoff method doesn’t discard data, it batches all the overflow into a single slot. Moreover, it also fills in any empty slots within the desired time period with 0. @count seems like an overly generic name. What it really tells us, is exactly how many weeks of data will be displayed in the chart, regardless of how many weeks of data are available in the raw data. Renaming count to total_weeks_in_chart, and actions_running_time_array to datapoints_per_week_in_chart tells a slightly more coherent story. This is verbose, but until we have teased out all of the concepts, it’s going to be hard to pick a better name. def datapoints_per_week_in_chart cut_off_array_with_sum(datapoints_per_week, total_weeks_in_chart) end Notice the familiar pattern: The arguments are globally available, and the method definition consists of a single call to another method. Inlining cut_off_array_with_sum gives us: def datapoints_per_week_in_chart a = Array.new(total_weeks_in_chart + 1) { |i| datapoints_per_week[i]||0 } a[total_weeks_in_chart] += datapoints_per_week.inject(:+) - a.inject(:+) a end This looks a bit scary, but can be simplified somewhat: def datapoints_per_week_in_chart frequencies = Array.new(total_weeks_in_chart) {|i| datapoints_per_week[i].to_i } frequencies << datapoints_per_week.inject(:+) - frequencies.inject(:+) end Moving on, the code is introducing a completely new term: cumulative_percent_done. I’m not really sure that done is the word that we are looking for. We’re dealing with unfinished TODOs, not completed TODOs. And even so, we don’t care. Let’s rename it to cumulative_percentages: def cumulative_percentages convert_to_cumulative_array(datapoints_per_week, dates.count) end As usual, we can inline the contained method: def cumulative_percentages a = Array.new(datapoints_per_week_in_chart.size) {|i| datapoints_per_week_in_chart[i]*100.0/dates.count } 1.upto(timestamp_counts.size-1) {|i| a[i] += a[i-1]} a end That looks more than a little frightning. It’s doing a lot. It would be helpful to name the concepts, so let’s extract some methods. The first part is creating an array of percentages by week. def percentages_per_week Array.new(datapoints_per_week_in_chart.size) {|i| datapoints_per_week_in_chart[i]*100.0/dates.count } end Think about what this is doing. - It is creating a new array that is the same length as an existing array, and - It is deriving each value in the array from the corresponding value in the original array. Another word for this is map. def percentages_per_week datapoints_per_week_in_chart.map {|count| count * 100.0 / dates.count } end This method is doing quite a bit, as well. Naming the block cleans it up considerably. def percentages_per_week datapoints_per_week_in_chart.map(&percentage) end def percentage Proc.new {|count| count * 100.0 / dates.count} end The fully refactored cumulative_percentages logic looks like this: def cumulative_percentages running_total = 0 percentages_per_week.map {|percentage| running_total += percentage } end That’s not nearly as frightening as it seemed just a moment ago. Cleanup The rest of compute can be blown apart with simple extractions. The resulting code can be seen below. It is even longer than the original, but the concepts, originally hidden behind cryptic names at odd levels of abstraction, have now been brought to the surface. Are the names right? Probably not. This was not a refactoring to create a good arrangement of the code, it was a refactoring to discover what is there. Inline, inline, extract, inline, extract, extract. This is the rhythm of refactoring when codebase has not yet found the right abstractions. The result of this refactoring is understanding, not good code. For that, we’re going to take one final pass at this. We will ask the question if this were two things, what would they be? and out of the ashes if this refactoring, a tiny, cohesive abstraction will appear. module Stats class RunningTimeData include Rails.application.routes.url_helpers attr_reader :title, :dates, :key, :today def initialize(title, timestamps, key, now) @title = title @dates = timestamps.map(&:to_date) @key = key @today = now.to_date end def compute # FIXME: delete call from controllers end def y_legend I18n.t('stats.running_time_legend.actions') end def y2_legend I18n.t('stats.running_time_legend.percentage') end def x_legend I18n.t('stats.running_time_legend.weeks') end def values datapoints_per_week_in_chart.join(",") end def links url_labels.join(",") end def values_2 cumulative_percentages.join(",") end def x_labels time_labels.join(",") end def y_max # add one to @max for people who have no actions completed yet. # OpenFlashChart cannot handle y_max=0 1 + datapoints_per_week_in_chart.max + datapoints_per_week_in_chart.max/10 end private def url_labels urls = Array.new(total_weeks_in_chart) {|i| url(i, key) } urls << url(total_weeks_in_chart, "#{key}_end") end def url(index, id) options = { :controller => 'stats', :action => 'show_selected_actions_from_chart', :index => index, :id=> id, :only_path => true } url_for(options) end def time_labels labels = Array.new(total_weeks_in_chart) {|i| "#{i}-#{i+1}" } labels[0] = "< 1" labels[total_weeks_in_chart] = "> #{total_weeks_in_chart}" labels end def total_weeks_in_chart [52, total_weeks_of_data].min end def total_weeks_of_data weeks_since(dates.last) end def weeks_since(date) (today - date).to_i / 7 end def cumulative_percentages running_total = 0 percentages_per_week.map {|count| running_total += count} end def percentages_per_week datapoints_per_week_in_chart.map(&percentage) end def percentage Proc.new {|count| (count * 100.0 / dates.count)} end def datapoints_per_week_in_chart frequencies = Array.new(total_weeks_in_chart) {|i| datapoints_per_week[i].to_i } frequencies << datapoints_per_week.inject(:+) - frequencies.inject(:+) end def datapoints_per_week frequencies = Array.new(total_weeks_of_data + 1, 0) dates.each {|date| frequencies[weeks_since(date)] += 1 } frequencies end end end Get practical advice to start your career in programming! Master complex transitions, transformations and animations in CSS!
https://www.sitepoint.com/golden-master-testing-refactoring-understanding-2/
CC-MAIN-2021-17
refinedweb
2,102
51.55
{-# LANGUAGE ScopedTypeVariables, NoMonomorphismRestriction #-} -- |. module Storage.Hashed.Index( readIndex, updateIndexFrom, readOrUpgradeIndex , indexFormatValid ) where import Prelude hiding ( lookup, readFile, writeFile, catch ) import Storage.Hashed.Utils import Storage.Hashed.Tree import Storage.Hashed.AnchoredPath import Data.Int( Int64, Int32 ) import qualified Data.Set as S import qualified Data.Map as M import Bundled.Posix( getFileStatusBS, modificationTime, getFileStatus, fileSize, fileExists, EpochTime ) import System.IO.MMap( mmapFileForeignPtr, Mode(..) ) import System.IO( openBinaryFile, hGetChar, hClose, IOMode(..) ) import System.Directory( removeFile, doesFileExist ) -------------------------- -- Indexed trees -- -- | Description of a a single indexed item. The structure itself does not -- contain any data, just pointers to the underlying mmap (bytestring is a -- pointer + offset + length). -- -- The structure is recursive-ish (as opposed to flat-ish structure, which is -- used by git...) It turns out that it's hard to efficiently read a flat index -- with our internal data structures -- we need to turn the flat index into a -- recursive Tree object, which is rather expensive... As a bonus, we can also -- efficiently implement subtree queries this way (cf. 'readIndex'). data Item = Item {) == '/' -- xlatePeek32 = fmap xlate32 . peek xlatePeek64 :: (Storable a, Bits a) => Ptr a -> IO a xlatePeek64 = fmap xlate64 . peek -- xlatePoke32 ptr v = poke ptr (xlate32 v) xlatePoke64 :: (Storable a, Bits a) => Ptr a -> a -> IO () xlatePoke64 ptr v = poke ptr (xlate64 v) -- | Lay out the basic index item structure in memory. The memory location is -- given by a ForeignPointer () and an offset. The path and type given are -- written out, and a corresponding Item is given back. The remaining bits of -- the item can be filled out using 'update'. createItem :: ItemType -> AnchoredPath -> ForeignPtr () -> Int -> IO Item createItem typ path fp off = do -- | Read the on-disk representation into internal data structure. The Index is -- organised into "lines" where each line describes a single indexed -- item. Cf. 'Item'. -- -- The first word on the index "line" is the length of the file path (which is -- the only variable-length part of the line). Then comes the path itself, then -- fixed-length hash (sha256) of the file in question, then two words, one for -- size and one "aux", which is used differently for directories and for files. -- -- With directories, this aux holds the offset of the next sibling line in the -- index, so we can efficiently skip reading the whole subtree starting at a -- given directory (by just seeking aux bytes forward). The lines are -- pre-ordered with respect to directory structure -- the directory comes first -- and after it come all its items. Cf. 'readIndex''. -- -- For files, the aux field holds a timestamp. peekItem :: ForeignPtr () -> Int -> -- | Gives a ForeignPtr to mmapped index, which can be used for reading and -- updates. mmapIndex :: forall a. FilePath -> Int -> IO (ForeignPtr a, Int) mmapIndex indexpath req_size = do exist <- doesFileExist indexpath act_size <- if exist then fileSize `fmap` getFileStatus indexpath else return 0 let size :: Int size = fromIntegral $ if req_size > 0 then fromIntegral req_size else act_size case size of 0 -> return (castForeignPtr nullForeignPtr, size) _ -> do (x, _) <- mmapFileForeignPtr indexpath ReadWrite (Just (0,BS (iPath 0 tree <- root return (tree { treeHash = h }, item_map) else return (emptyTree, item_map) -- | Read an index and build up a 'Tree' object from it, referring to current -- working directory. Any parts of the index that are out of date are updated -- in-place. The result is always an up-to-date index. Also, the 'Tree' is -- stubby and only the pieces of the index that are expanded will be actually -- updated! To implement a subtree query, you can use 'Tree.filter' and then -- expand the result. Otherwise just expand the whole tree to avoid unexpected -- problems. i subs ((name,x):xs) = do let path' = path `appendPath` name noff <- subs xs create x path' noff lastOff <- subs (listImmediate s) xlatePoke64 (iAux i) (fromIntegral lastOff) return lastOff create (Stub _ _) path _ = fail $ "Cannot create index from stubbed Tree at " ++ show path pokeBS magic (BS.pack "HSI0") create (SubTree reference) (AnchoredPath []) 4 readIndex indexpath hashtree -- | Check that a given file is an index file with a format we can handle. You -- should remove and re-create the index whenever this is not true. indexFormatValid :: FilePath -> IO Bool indexFormatValid path = do fd <- openBinaryFile path ReadMode magic <- sequence [ hGetChar fd | _ <- [1..4] :: [Int] ] hClose fd return $ case magic of "HSI
http://hackage.haskell.org/package/hashed-storage-0.3.5/docs/src/Storage-Hashed-Index.html
CC-MAIN-2018-26
refinedweb
702
55.64
Recently, the open source automated testing framework robot framework has been used in the project. To sum up, I hope it will be helpful to you. install First, make sure that Python is installed on the system. Then you can install using PIP: pip install robotframework After installation, use the following command to view the version: robot –version Then we can create a simple test script: *** Settings *** Documentation Example using the space separated format. Library OperatingSystem *** Variables *** ${MESSAGE} Hello, world! *** Test Cases *** My Test [Documentation] Example test. Log ${MESSAGE} My Keyword ${CURDIR} Another Test Should Be Equal ${MESSAGE} Hello, world! *** Keywords *** My Keyword [Arguments] ${path} Directory Should Exist ${path} You can then run the test script using the robot: robot helloworld.robot The operation results are as follows: You can also install and use ide tool ride, which can be used to easily create automatic test projects and write test scripts. PIP makes it easy to install: pip install robotframework-ride String operation Robot framework string splicing needs to use the cate keyword. The following code combines hello and world ${s}= catenate Hello World The result is hello world. If we want no space in the middle, we need to use the separator parameter: ${s}= catenate SEPARATOR= Hello World The result is HelloWorld. The separator parameter declares the concatenation characters in the splicing. The output result of the following code is hello|world: ${s}= catenate SEPARATOR=| Hello World If the string contains special characters, such as # etc., escape is required. Examples are as follows: ${k}= catenate SEPARATOR= \#val_ ${key} \# Use the split string keyword in the string library to separate strings. For example, V1, V2 and V3 are separated by commas. The processed results are saved in the list. Examples of use are as follows: FOR ${str} IN @{dic} ${ss}= String.Split String ${str} : Set To Dictionary ${data} ${ss}[0] ${ss}[1] END In this example, the keys and values of the incoming parameters are separated by colons, for example: Name:Jone Age:13 Here, split string is used to separate the key and value and save them in the dictionary. Note that using the split string keyword requires a reference to the string library. Test web API Using the requestslibrary of the robot framework can easily test web APIs. First, you need to install requestlibrary: pip install robotframework-requests Then you can write the test script. First, you need to declare the use of Library: *** Settings *** Library RequestsLibrary Next, define the access address: *** Variables *** ${HOST} Then you can write use cases: *** Test Cases *** API Test Example Create Session my_session ${HOST} ${headers}= Create Dictionary Accept=application/json Content-Type=application/json charset=utf-8 # POST request with params ${data}= Create dictionary field1=value1 field2=value2 ${response}= Post Request my_session my-endpoint headers=${headers} data=${data} Should be equal as strings ${response.status_code} 200 Log ${response} The post method is used here. The key value pairs sent to the API are defined in data. The keyword create dictionary for creating the key value pair dictionary is used here. The returned object is defined in the variable ${response} and can be accessed using post request. Database related test During the test, we can directly access the database to check whether the data is correct. At this time, we can use robot framework databaselibrary. First from After downloading and decompressing, execute Python setup Py install. It can be used after installation. Create a new test in ride, introduce databaselibrary into library, press F5 to open the search keywords window, and select databaselibrary to list all relevant keywords: You also need to install a specific database access module. If you access sqlserver, you can install pymssql: pip install pymssql Then you can write test cases, such as: *** Settings *** Library DatabaseLibrary *** Test Cases *** DBTEST Connect To Database Using Custom Params pymssql database='dbname',user='username',password='pwd',host='localhost' Table Must Exist AUTH_USER_TB Disconnect From Database There are two keywords in the database library that can be used to connect to the database: Connect to database and connect to database using custom params. Connect to database needs to declare parameters or save parameters in the configuration file. The parameters are as follows: dbapiModuleName=None, dbName=None, dbUsername=None, dbPassword=None, dbHost=localhost, dbPort=5432, dbConfigFile=./resources/db.cfg It should be noted here that if the declaration dbname, dbusername, dbpassword, dbhost or dbport is not displayed, the corresponding configuration items will be found in the specified configuration file. If the configuration file is not found, an error will be reported. The format of the configuration file is as follows: [default] dbapiModuleName=pymysqlforexample dbName=yourdbname dbUsername=yourusername dbPassword=yourpassword dbHost=yourhost dbPort=yourport The following keywords can be used: - Check if exists in database: the parameter is a select statement. If there is a query result, it returns true; otherwise, it is false. - Check if not exists in database: Contrary to the keyword result above. - Delete all rows from table: the parameter is the table name, which deletes all data in the table. - Description: the parameter is a select statement, which returns the description array of the query result field, such as name =’id ‘, type_ code=1043, display_ size=None, internal_ size=255, precision=None, scale=None, null_ ok=None)。 - Execute SQL script: execute SQL script. Multiple SQL statements are separated by semicolons. - Query: execute query statements. - Row count: returns the number of rows of the query statement. - Row count is 0: Returns whether the number of query rows is 0. - Row count is equal to X: Returns whether the number of query rows is the given number of rows X. - Row Count Is Greater Than X: Returns whether the number of query rows is greater than the given number of rows X. - Row Count Is Less Than X: Returns whether the number of query rows is less than the given number of rows X. - Table must exist: Specifies whether the table exists. Custom keywords Use the robotframework to test the API interface. You need to log in first for each test. In order to reduce repeated login scripts, use the user-defined keywords of the robotframework to simplify the script of the login process. The code is as follows: *** Keywords *** Log in to the platform [Arguments] ${host} ${username} ${password} Create Session my_session ${host} ${data}= Create dictionary UserName=${username} Password=${password} ${response}= POST On Session my_session url=/api/Account/Login json=${data} Log ${response} [Return] my_session For convenience, we use Chinese as the keyword name. Then define three input variables, ${host} is the URL to log in, and ${username} and ${password} are the user name and password. Use post on session to log in. After logging in, return to the session created in the login process. For example, this keyword can be used in the test case: *** Test Cases *** TestLogin ${mysession} = log in to the platform host = ${host} username = saleuser1 password = 1 ${datalist}= Create dictionary WorkFlow_Name=LeaveApply1 ${responselist}= POST On Session ${mysession} url=${GetActivateListUrl} json=${datalist} Log ${responselist} We can save the customized keywords in the resource file, which is convenient for sharing in multiple test cases. The structure of the resource file is basically the same as that of the test file, except that there is no test case, and the variables and user-defined keywords are exactly the same. Reference the resource file in the setting part of the test file, such as: *** Settings *** Library RequestsLibrary Resource ../../Resources/flowresources.robot The location of the resource file is relative to the current path. The path can also contain variables, such as ${resources} / common tsv。 If multiple resource files contain the same user-defined keywords, you need to use the resource file name as a prefix when using these keywords. If multiple resource files contain the same variables, the variables loaded first work. In many cases, we need to customize the keyword to receive multiple parameters, and the number of parameters is uncertain. In this case, we can declare the parameters as a list and use the @ modifier. The example code is as follows: MultiArguments [Arguments] ${par1} @{dic} Log ${par1} FOR ${v} IN @{dic} Log ${v} END In the above example, ${PAR1} is a fixed parameter, followed by @ {DIC} is a variable parameter list, and the parameters in the list are processed circularly in the user-defined keyword. Examples of calls are as follows: MultiArgumetns para1 u1 u2 u3 u4 Develop custom libraries for the robotframework The robotframework is developed based on python. We use Python to develop custom libraries and provide extensions to the robotframework. Python is very simple to implement, because it doesn’t need any text editor and explanation, and so on. Here, the creation and use process is described by implementing a simple requirement. The requirement is very simple. You need to get the value after flowid = from a string. First, we create a python file with the following code: def get_flow_id(url): idx = url.index('flowid=')+7 le = len(url) flow_id = url[idx:le] return flow_id Then you can reference this library in the robot file and add the following in settings: *** Settings *** Library ../pylibs/getflowid.py Here, the location of the library is relative to the location of the current file. In test cases or keywords, you can use the new keywords defined in the user-defined library. Here, get_ flow_ The new keyword corresponding to ID is get flow ID, “” The symbol is replaced by a space and the following characters become uppercase. The following is an example of using: ${fid} Get Flow Id ${rurl}
https://developpaper.com/robot-framework-usage-summary/
CC-MAIN-2022-33
refinedweb
1,586
52.09
Pydev 1.0.8 has ben just released... A major bug triggered this release (that's why it's been issued in less than one day from the previous release). Mainly, if you had a file that had a docstring at the global level with an empty line, it could get to a loop when adding a new-line to the document. This has been fixed and is already available for download. Also, 2 other minors have been done for Pydev Extensions (but they surely would not be worth a release for them). -- Fabio 2 comments: I still have same debug troubles - since 1.0.6 (last 1.0.5 works fine). Trouble point is pydo/utils.py def _import_a_class(fqcn): line 56: return getattr(module, className) - exactly "className" - when i watch it python.exe raise exception and is terminated. Can you report that as a bug in the sf bugtracker? () Also, it would be nice more details, such as when is this called (when you hit a breakpoint? Or any run in debug mode?) It would also be nice that you could change the library to print what exactly are the parameters it is receiving when this happens (module and className)... To me it appears a bug in pydo that is being triggered by the debugger rather than the other way (so, it might be nice for you to report that to the pydo guys too) Cheers, Fabio
http://pydev.blogspot.com/2006/05/pydev-release-108.html?showComment=1148628780000
CC-MAIN-2015-22
refinedweb
239
82.54
I'll add a brief explanation of setup first. We have 4 sites. Site1 - Server1, Server2, Server3, Server4 Site2 - Server5 Site3 - Server6 Site4 - Server7 Server1,2,5,6,7 are GC , DNS Server 1,5,6,7 are File servers Server 1 is Server 2003 Server 5,6,7 are Server 2008 Unfortunately at the moment I am unable to upgrade Server1 to 2008 so i'm stuck with FRS. Originally all laptops personal storage was accessed from \\Server1\PersonalStore\User when accessing from the other sites connection was slow. Since the company has become quite a bit bigger and staff are now moving from one site to another. 99% of the time its between Site1 and 1 other Site. So i figured to improve performance. I set up FRS namespaces \\domain.local\Site1\User \\domain.local\Site2\User \\domain.local\Site3\User \\domain.local\Site4\User Each of these have a root on the associated Sites Server and Server1 on Site1 I then went through all staff profiles removing old syncs \\Server1\PersonalStore\User replacing with \\domain.local\SiteX\User (X being site they use) The storage synced between sites fine, and staff could access the shares with no loss of data. Seems now I am getting a strange problem with laptops using "Always Avaialble Offline" periodically show the "bubble" indicating that they've been disconnected from the server and are now working offline. While on Wireless. and still in the offices. Right Clicking the Icon bottom right and selecting synchronise fixes it. I have checked and the internet connection stays active. I have tested the dns servers and they appear to be resovling correctly. I have done some reading and found a possible solution but did not work. net config server /autodisconnect:-1 on the server to prevent disconnection of idle clients. Any other suggestions or anyone with experience with this would be brilliant. 4 Replies Mar 14, 2013 at 12:43 UTC I would suggest centralizing your servers and have users from other sites access the data remotely. We are using Citrix presentation server to do just that on our own site which lets our other offices connect to the data through a remote session. The other option is to use terminal services, i know at the time when we made this move that terminal services was not a great option because you needed lots of bandwidth and hardware to sufficiently run the system and also when we were testing these systems we noticed that Microsoft were actually using citrix themselves for test driving applications online, great way to promote your own product Microsoft :-) Since Server 2008 i have heard good things about terminal services finally catching up with the market so it could be an option for you Mar 14, 2013 at 12:51 UTC I would love to do as you suggest. Our biggest problem and the reason for the new setup is Internet Connectivity. We have 4 sites all in the worst possible places. Best connection I can get is an 8mb connection. Remotely accessing the files is a like pulling teeth slow and painful. We are also unable to offer vpn services due to the speeds. Mar 14, 2013 at 1:00 UTC Again that sounds like the exact problem we had with our remote sites not being able to get a good connection. An 8mb connection should be perfect for Citrix, the way citrix works is that it send keystrokes to the server rather than sending loads of data across the network Normal use is about 20kb per session, and 30kb is even better So 100 users on 30Kb sessions is only 3Mb connection. The one thing you will have to consider is printing because printing will take everything it can get, and must be cropped to a few kb to ensure a stable connection between the sites Mar 18, 2013 at 7:50 UTC Thank you for the advice Barry but still doesnt really answer my question. I want the files to be available offline. Staff dont always have an internet connection available when they are out of the office so a Terminal Server or Citrix would not be my ideal solution. I want to keep Files available offline but also improve speed of synchronising. Which works minus the error messages.
https://community.spiceworks.com/topic/313294-frs-and-offline-file-sharing
CC-MAIN-2016-50
refinedweb
720
59.53
Let’s consider a simple example program that uses mmap() to print a file chosen by the user to standard out: #include <stdio.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <unistd.h> #include <sys/mman.h> int main (int argc, char *argv[]) { struct stat sb; off_t len; char *p; int fd; if (argc < 2) { fprintf (stderr, "usage: %s <file>n", argv[0]); return 1; } fd = open (argv[1], O_RDONLY); if (fd == -1) { perror ("open"); return 1; } if (fstat (fd, &sb) == -1) { perror ("fstat"); return 1; } if (!S_ISREG (sb.st_mode)) { fprintf (stderr, "%s is not a filen", argv[1]); return 1; } p = mmap (0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0); if (p == MAP_FAILED) { perror ("mmap"); return 1; } if (close (fd) == -1) { perror ("close"); return 1; } for (len = 0; len < sb.st_size; len++) putchar (p[len]); if (munmap (p, sb.st_size) == -1) { perror ("munmap"); return 1; } return 0; } The only unfamiliar system call in this example should be fstat() , which we will cover in Chapter 7. All you need to know at this point is that fstat() returns infor mation about a given file. The S_ISREG() macro can check some of this information, so that we can ensure that the given file is a regular file (as opposed to a device file or a directory) before we map it. The behavior of nonregular files when mapped depends on the backing device. Some device files are mmap-able; other nonregular files are not mmap-able, and will set errno to EACCESS . The rest of the example should be straightforward. The program is passed a filename as an argument. It opens the file, ensures it is a regular file, maps it, closes it, prints the file byte-by-byte to standard out, and then unmaps the file from memory. Advantages of mmap() Manipulating files via mmap() has a handful of advantages over the standard read() and write() system calls. Among them are: - Reading from and writing to a memory-mapped file avoids the extraneous copy that occurs when using the read() or write() system calls, where the data must be copied to and from a user-space buffer. - Aside from any potential page faults, reading from and writing to a memory-mapped file does not incur any system call or context switch overhead. It is as simple as accessing memory. - When multiple processes map the same object into memory, the data is shared among all the processes. Read-only and shared writable mappings are shared in their entirety; private writable mappings have their not-yet-COW (copy-on-write) pages shared. - Seeking around the mapping involves trivial pointer manipulations. There is no need for the lseek() system call. For these reasons, mmap() is a smart choice for many applications. Disadvantages of mmap() There are a few points to keep in mind when using mmap(): - Memory mappings are always an integer number of pages in size. Thus, the difference between the size of the backing file and an integer number of pages is “wasted” as slack space. For small files, a significant percentage of the mapping may be wasted. For example, with 4 KB pages, a 7 byte mapping wastes 4,089 bytes. - The memory mappings must fit into the process’ address space. With a 32-bit address space, a very large number of various-sized mappings can result in fragmentation of the address space, making it hard to find large free contiguous regions. This problem, of course, is much less apparent with a 64-bit address space. - There is overhead in creating and maintaining the memory mappings and associated data structures inside the kernel. This overhead is generally obviated by the elimination of the double copy mentioned in the previous section, particularly for larger and frequently accessed files. For these reasons, the benefits of mmap() are most greatly realized when the mapped file is large (and thus any wasted space is a small percentage of the total mapping), or when the total size of the mapped file is evenly divisible by the page size (and thus there is no wasted space). {mospagebreak title=Resizing a Mapping} Linux provides the mremap() system call for expanding or shrinking the size of a given mapping. This function is Linux-specific: #define _GNU_SOURCE #include <unistd.h #include <sys/mman.h> void * mremap (void *addr, size_t old_size, size_t new_size, unsigned long flags); A call to mremap() expands or shrinks mapping in the region [addr,addr+old_size) to the new size new_size. The kernel can potentially move the mapping at the same time, depending on the availability of space in the process’ address space and the value of flags. The opening [ in [addr,addr+old_size) indicates that the region starts with (and includes) the low address, whereas the closing ) indicates that the region stops just before (does not include) the high address. This convention is known as interval notation. The flags parameter can be either 0 or MREMAP_MAYMOVE , which specifies that the kernel is free to move the mapping, if required, in order to perform the requested resizing. A large resizing is more likely to succeed if the kernel can move the mapping. On success, mremap() returns a pointer to the newly resized memory mapping. On failure, it returns MAP_FAILED, and sets errno to one of the following: EAGAIN The memory region is locked, and cannot be resized. EFAULT Some pages in the given range are not valid pages in the process’ address space, or there was a problem remapping the given pages. EINVAL An argument was invalid. ENOMEM The given range cannot be expanded without moving (and MREMAP_MAYMOVE was not given), or there is not enough free space in the process’ address space. Libraries such as glibc often use mremap() to implement an efficient realloc() , which is an interface for resizing a block of memory originally obtained via malloc() . For example: void * realloc (void *addr, size_t len) { size_t old_size = look_up_mapping_size (addr); void *p; p = mremap (addr, old_size, len, MREMAP_MAYMOVE) ; if (p == MAP_FAILED) return NULL; return p; } This would only work if all malloc() allocations were unique anonymous mappings; nonetheless, it stands as a useful example of the performance gains to be had. The example assumes the programmer has written a look_up_mapping_size() function. The GNU C library does use mmap() and family for performing some memory alloca tions. We will look that topic in depth in Chapter 8. {mospagebreak title=Changing the Protection of a Mapping} POSIX defines the mprotect() interface to allow programs to change the permissions of existing regions of memory: #include <sys/mman.h> int mprotect (const void *addr, size_t len, int prot); A call to mprotect() will change the protection mode for the memory pages contained in [addr,addr+len), where addr is page-aligned. The prot parameter accepts the same values as the prot given to mmap() : PROT_NONE , PROT_READ , PROT_WRITE , and PROT_EXEC . These values are not additive; if a region of memory is readable, and prot is set to only PROT_WRITE , the call will make the region only writable. On some systems, mprotect() may operate only on memory mappings previously created via mmap() . On Linux, mprotect() can operate on any region of memory. On success, mprotect() returns 0. On failure, it returns -1 , and sets errno to one of the following: EACCESS The memory cannot be given the permissions requested by prot . This can happen, for example, if you attempt to set the mapping of a file opened read-only to writable. EINVAL The parameter addr is invalid or not page-aligned. ENOMEM Insufficient kernel memory is available to satisfy the request, or one or more pages in the given memory region are not a valid part of the process’ address space. {mospagebreak title=Synchronizing a File with a Mapping} POSIX provides a memory-mapped equivalent of the fsync() system call that we discussed in Chapter 2: #include <sys/mman.h> int msync (void *addr, size_t len, int flags); A call to msync() flushes back to disk any changes made to a file mapped via mmap(), synchronizing the mapped file with the mapping. Specifically, the file or subset of a file associated with the mapping starting at memory address addr and continuing for len bytes is synchronized to disk. The addr argument must be page-aligned; it is generally the return value from a previous mmap() invocation. Without invocation of msync() , there is no guarantee that a dirty mapping will be written back to disk until the file is unmapped. This is different from the behavior of write() , where a buffer is dirtied as part of the writing process, and queued for writeback to disk. When writing into a memory mapping, the process directly modifies the file’s pages in the kernel’s page cache, without kernel involvement. The kernel may not synchronize the page cache and the disk anytime soon. The flags parameter controls the behavior of the synchronizing operation. It is a bitwise OR of the following values: MS_ASYNC Specifies that synchronization should occur asynchronously. The update is scheduled, but the msync() call returns immediately without waiting for the writes to take place. MS_INVALIDATE Specifies that all other cached copies of the mapping be invalidated. Any future access to any mappings of this file will reflect the newly synchronized on-disk contents. MS_SYNC Specifies that synchronization should occur synchronously. The msync() call will not return until all pages are written back to disk. Either MS_ASYNC or MS_SYNC must be specified, but not both. Usage is simple: if (msync (addr, len, MS_ASYNC) == -1 ) perror ("msync"); This example asynchronously synchronizes (say that 10 times fast) to disk the file mapped in the region [addr,addr+len) . On success, msync() returns 0. On failure, the call returns -1 , and sets errno appro priately. The following are valid errno values: EINVAL The flags parameter has both MS_SYNC and MS_ASYNC set, a bit other than one of the three valid flags is set, or addr is not page-aligned. ENOMEM The given memory region (or part of it) is not mapped. Note that Linux will return ENOMEM , as POSIX dictates, when asked to synchronize a region that is only partly unmapped, but it will still synchronize any valid mappings in the region. Before version 2.4.19 of the Linux kernel, msync() returned EFAULT in place of ENOMEM . {mospagebreak title=Giving Advice on a Mapping} Linux provides a system call named madvise() to let processes give the kernel advice and hints on how they intend to use a mapping. The kernel can then optimize its behavior to take advantage of the mapping’s intended use. While the Linux kernel dynamically tunes its behavior, and generally provides optimal performance without explicit advice, providing such advice can ensure the desired caching and readahead behavior for some workloads. A call to madvise() advises the kernel on how to behave with respect to the pages in the memory map starting at addr , and extending for len bytes: #include <sys/mman.h> int madvise (void *addr, size_t len, int advice); If len is 0, the kernel will apply the advice to the entire mapping that starts at addr . The parameter advice delineates the advice, which can be one of: MADV_NORMAL The application has no specific advice to give on this range of memory. It should be treated as normal. MADV_RANDOM The application intends to access the pages in the specified range in a random (nonsequential) order. MADV_SEQUENTIAL The application intends to access the pages in the specified range sequentially, from lower to higher addresses. MADV_WILLNEED The application intends to access the pages in the specified range in the near future. MADV_DONTNEED The application does not intend to access the pages in the specified range in the near future. The actual behavior modifications that the kernel takes in response to this advice are implementation-specific: POSIX dictates only the meaning of the advice, not any potential consequences. The current 2.6 kernel behaves as follows in response to the advice values: MADV_NORMAL The kernel behaves as usual, performing a moderate amount of readahead. MADV_RANDOM The kernel disables readahead, reading only the minimal amount of data on each physical read operation. MADV_SEQUENTIAL The kernel performs aggressive readahead. MADV_WILLNEED The kernel initiates readahead, reading the given pages into memory. MADV_DONTNEED The kernel frees any resources associated with the given pages, and discards any dirty and not-yet- synchronized pages. Subsequent accesses to the mapped data will cause the data to be paged in from the backing file. Typical usage is: int ret; ret = madvise (addr, len, MADV_SEQUENTIAL) ; if (ret < 0) perror ("madvise"); This call instructs the kernel that the process intends to access the memory region [addr,addr+len) sequentially. Readahead When the Linux kernel reads files off the disk, it performs an optimization known as readahead. That is, when a request is made for a given chunk of a file, the kernel also reads the following chunk of the file. If a request is subsequently made for that chunk—as is the case when reading a file sequentially—the kernel can return the requested data immediately. Because disks have track buffers (basically, hard disks perform their own readahead internally), and because files are generally laid out sequentially on disk, this optimization is low-cost. Some readahead is usually advantageous, but optimal results depend on the question of how much readahead to perform. A sequentially accessed file may benefit from a larger readahead window, while a randomly accessed file may find readahead to be worthless overhead. As discussed in “Kernel Internals” in Chapter 2, the kernel dynamically tunes the size of the readahead window in response to the hit rate inside that window. More hits imply that a larger window would be advantageous; fewer hits suggest a smaller win dow. The madvise() system call allows applications to influence the window size right off the bat. On success, madvise() returns 0. On failure, it returns -1 , and errno is set appropriately. The following are valid errors: EAGAIN An internal kernel resource (probably memory) was unavailable. The process can try again. EBADF The region exists, but does not map a file. EINVAL The parameter len is negative, addr is not page- aligned, the advice parameter is invalid, or the pages were locked or shared with MADV_DONTNEED . EIO An internal I/O error occurred with MADV_WILLNEED . ENOMEM The given region is not a valid mapping in this process’ address space, or MADV_WILLNEED was given, but there is insufficient memory to page in the given regions. Please check back next week for the continuation of this article.
http://www.devshed.com/c/a/braindump/using-mmap-for-advanced-file-io/2/
CC-MAIN-2015-11
refinedweb
2,423
61.46
Used for general purpose programming, data science, website backends, GUIs, and pretty much everything else; the first programming language for many, and claimed to be the fastest growing in the world, is of course Python. The newest version 3.7.0 has just recently been released. Naturally any release of Python, no matter how small, undergoes meticulous planning and design before any development is started at all. In fact, you can read the PEP (Python Enhancement Proposal) for Python 3.7, which was created back in 2016. What’s new in 3.7? Why should you upgrade? Is there anything new that’s actually useful? I’ll answer these questions for you by walking through some examples of the new features. Whilst there’s not much in this release that will make a difference to the Python beginner, there’s plenty of small changes for seasoned coders and a few headline features you’ll want to know about. Breakpoints Are Now Builtins Anyone who has used the pdb (Python debugger) knows how powerful it is. It gives you the ability to pause the execution of your script, allowing you to manually roam around the internals of the program and step over individual lines. But, up until now, it required some setup when writing a program. Sure, it takes practically no time at all for you to import pdb and set_trace(), but it’s not on the same level of convenience as chucking in a quick debug print() or log. As of Python 3.7, breakpoint() is a built-in, making it super easy to drop into a debugger anytime you like. It’s also worth noting that the pdb is just one of many debuggers available, and you can configure which one you’d like to use by setting the new PYTHONBREAKPOINT environment variable. Here’s a quick example of a program that we’re having trouble with. The user is asked for a string, and we compare it to see if it matches a value. "") Aha! It looks like favourite_ic is an integer, whilst user_guess is a string. Since in Python comparing a string to an int is a perfectly valid comparison, no exception was thrown (but the comparison doesn’t do what we want). favourite_ic should have been declared as a string. This is arguably one of the dangers of Python’s dynamic typing — there’s no way of catching this error until runtime. Unless, of course, you use type annotations… Annotations and Typing Since Python 3.5, type annotations have been gaining traction. For those unfamiliar with type hinting, it’s a completely optional way of annotating your code to specify the types of variables. Type hints are just one application of annotations (albeit the main one). What are annotations? They’re syntactic support for associating metadata with variables. They can be considered to be arbitrary expressions which are evaluated but ignored by Python at runtime. An annotation can be any valid Python expression. Here’s an example of an annotated function where we’ve gone bananas with useless information. # Without annotation def foo(bar, baz): # Annotated def foo(bar: 'Describe the bar', baz: print('random')) -> 'return thingy': This is all very cool, but a bit meaningless unless annotations are used in standard ways. The syntax for using annotations for typing became standardised in Python 3.5 (PEP 484), and since then type hints have become widely used by the Python community. They’re purely a development aid, which can be checked using an IDE like PyCharm or a third party tool such as Mypy. If our string comparison program had been written with type annotations, it would have looked like this: "")) You can see that PyCharm has alerted me to the error here, which would have prevented it going un-noticed until runtime. If your project is using CI (Continuous Integration), you could even configure your pipeline to run Mypy or a similar third party tool on your code. So that’s the basics of annotations and type hinting. What’s changing in Python 3.7? As the official Python docs point out, two main issues arose when people began to start using annotations for type hints: startup performance and forward references. - Unsurprisingly, evaluating tons of arbitrary expressions at definition time was quite costly for startup performance, as well as the fact that the typingmodule was extremely slow - You couldn’t annotate with types that weren’t declared yet This lack of forward reference seems reasonable, but becomes quite a nuisance in practice. class User: def __init__(self, name: str, prev_user: User) -> None: pass This fails, as prev_user cannot be defined as type User, given that User is not defined yet. To fix both of these issues, evaluation of annotations gets postponed. Annotations simply get stored as a string, and optionally evaluated if you really need them to be. To implement this behaviour, a __future__ import must be used, since this change can’t be made whilst remaining compatible with previous versions. from __future__ import annotations class User: def __init__(self, name: str, prev_user: User) -> None: pass This now executes without a problem, since the User type is simply not evaluated. Part of the reason the typing module was so slow was that there was an initial design goal to implement the typing module without modifying the core CPython interpreter. However, now that the use of type hints is becoming more popular, this restriction has been removed, meaning that there is now core support for typing, which enables several optimisations. Timing The time module has some new kids on the block: existing timer functions are getting a corresponding nanosecond flavour, meaning greater precision is on tap if required. Some benchmarks show that the resolution of time.time() is more than three times exceeded by that of time.time_ns(). Talking of timing, Python itself is getting a minor speed boost in 3.7. This is low level stuff so we won’t go into it right now, but here’s the full list of optimisations. All you need to know is that the startup time is 10% faster on Linux, 30% faster on MacOS, and a large number of method calls are getting zippier by up to 20%. Dataclasses We’re willing to bet that if you’ve ever written object-oriented Python, you’ll have made a class that ended up looking something like this:>" A ton of different arguments are received in __init__ when the class gets initialised. These are simply set as attributes of the class instance straight away, ready for later use. This is a pretty common pattern when writing these kind of classes — but this is Python, and if tedium can be avoided, it should be. As of 3.7, we have dataclasses, which will make this type of class easier to declare, and more readable. Simply decorate a class with @dataclass, and the assignment to self will be taken care of automatically. Variables are declared as shown below, and type annotations are compulsory (though you can still use the Any type if you want to be flexible).')" Not only was the class much easier to setup, but it also produced a lovely string when we created an instance and printed it out. It would also behave properly when being compared to other class instances. This is because, as well as auto-generating the __init__ method, other special methods were generated too, such as __repr__, __eq__ and __hash__. These vastly reduce the amount of overhead needed when properly defining a class like this. Dataclasses use fields to do what they do, and manually constructing a field() gives access to additional options which aren’t the defaults. For example, here the default_factory of the field has been set to a lambda function which prompts the user to enter their name. from dataclasses import dataclass, field class User: name: str = field(default_factory=lambda: input("enter name")) (We wouldn’t recommend piping input into an attribute directly like this – it’s just a demo of what fields are capable of.) Other There are other miscellaneous changes aplenty in this release; we’ll just list a few of the most significant here: - Dictionaries are now guaranteed to preserve insertion order. This was informally implemented in 3.6, but is now an official language specification. The normal dictshould now be able to replace collections.OrderedDictin most cases. - New documentation translations into French, Japanese and Korean. - Controlling access to module attributes is now much easier, as __getattr__can now be defined at a module level. This makes it far easier to customise import behaviour, and implement features such as deprecation warnings. - A new developer mode for CPython. - .pyc files have the option to be deterministic, enabling reproducible builds — that is, the same byte-for-byte output is always produced for the same input file. Conclusion There are some really neat syntactic shortcuts and performance improvements to be had, but it might not be enough to encourage everyone to upgrade. Overall, Python 3.7 implements features that will genuinely lead to less hacky solutions, and produce cleaner code. We certainly look forward to using it, and can’t wait for 3.8!:
https://hackaday.com/2018/07/23/hands-on-with-python-3-7-whats-new-in-the-latest-release/?hmsr=pycourses.com&utm_source=pycourses.com&utm_medium=pycourses.com&replytocom=4770447
CC-MAIN-2019-43
refinedweb
1,532
62.17
Introduction HTML files are often used to display help for desktop applications. For example, the help files that accompany many Windows applications are usually made up of compiled HTML files. Using HTML with Java is relatively easy because Java has built-in HTML capabilities. You can control the way text appears in components by using HTML coding. For example, if you want to bold the text of a JButton the following code will do this for you: JButton mybutton = new JButton(); mybuttom.setText(“<html><b>Press Me</b></html>”); Given that Java has this innate capability it is not difficult to develop a class that will display help files in HTML format. In this article we will take you through the steps necessary to create such a class and, because the actual help files will be independent of the application, you will be able to use this class with any Java application. Before looking at the code, let’s clearly state what we want our class to do. It will provide help information for any application by browsing local HTML files. To keep things simple we will navigate using a home page that is a list of hyperlinks to specific help files. Pages may exceed the size of the screen so our application will need to be able to scroll. {mospagebreak title=The Code} Find below a complete listing of the code in our class. Have a quick look at it now, but don’t worry if you don’t understand all the details. Some basic knowledge of Java is assumed so not all the lines of code will be discussed. Relevant sections will be explained in detail. 1:////////////////////////////////////////////////////////////////// 2:/** 3:* This class creates a frame with a JEditorPane for loading HTML 4:* help files 5:*/ 6://package goes here 7:import java.io.*; 8:import javax.swing.event.*; 9:import javax.swing.*; 10:import java.net.*; 11:import java.awt.event.*; 12:import java.awt.*; 13: 14:public class HelpWindow extends JFrame implements ActionListener{ 15: private final int WIDTH = 600; 16: private final int HEIGHT = 400; 17: private JEditorPane editorpane; 18: private URL helpURL; 19:////////////////////////////////////////////////////////////////// 20:/** 21: * HelpWindow constructor 22: * @param String and URL 23: */ 24:public HelpWindow(String title, URL hlpURL) { 25: super(title); 26: helpURL = hlpURL; 27: editorpane = new JEditorPane(); 28: editorpane.setEditable(false); 29: try { 30: editorpane.setPage(helpURL); 31: } catch (Exception ex) { 32: ex.printStackTrace(); 33: } 34: //anonymous inner listener 35: editorpane.addHyperlinkListener(new HyperlinkListener() { 36: public void hyperlinkUpdate(HyperlinkEvent ev) { 37: try { 38: if (ev.getEventType() == HyperlinkEvent.EventType.ACTIVATED) { 39: editorpane.setPage(ev.getURL()); 40: } 41: } catch (IOException ex) { 42: //put message in window 43: ex.printStackTrace(); 44: } 45: } 46: }); 47: getContentPane().add(new JScrollPane(editorpane)); 48: addButtons(); 49: // no need for listener just dispose 50: setDefaultCloseOperation(DISPOSE_ON_CLOSE); 51: // dynamically set location 52: calculateLocation(); 53: setVisible(true); 54: // end constructor 55:} 56:/** 57: * An Actionlistener so must implement this method 58: * 59: */ 60:public void actionPerformed(ActionEvent e) { 61: String strAction = e.getActionCommand(); 62: URL tempURL; 63: try { 64: if (strAction == “Contents”) { 65: tempURL = editorpane.getPage(); 66: editorpane.setPage(helpURL); 67: } 68: if (strAction == “Close”) { 69: // more portable if delegated 70: processWindowEvent(new WindowEvent(this, 71: WindowEvent.WINDOW_CLOSING)); 72: } 73: } catch (IOException ex) { 74: ex.printStackTrace(); 75: } 76:} 77:/** 78: * add buttons at the south 79: */ 80:private void addButtons() { 81: JButton btncontents = new JButton(“Contents”); 82: btncontents.addActionListener(this); 83: JButton btnclose = new JButton(“Close”); 84: btnclose.addActionListener(this); 85: //put into JPanel 86: JPanel panebuttons = new JPanel(); 87: panebuttons.add(btncontents); 88: panebuttons.add(btnclose); 89: //add panel south 90: getContentPane().add(panebuttons, BorderLayout.SOUTH); 91:} 92:/** 93: * locate in middle of screen 94: */ 95:private void calculateLocation() { 96: Dimension screendim = Toolkit.getDefaultToolkit().getScreenSize(); 97: setSize(new Dimension(WIDTH, HEIGHT)); 98: int locationx = (screendim.width – WIDTH) / 2; 99: int locationy = (screendim.height – HEIGHT) / 2; 100: setLocation(locationx, locationy); 101:} 102:public static void main(String [] args){ 103: URL index = ClassLoader.getSystemResource(“index.html”); 104: new HelpWindow(“Test”, index); 105: 106:} 107:}//end HelpWindow class 108://////////////////////////////////////////////////////////////// {mospagebreak title=The HelpWindow Class} The first thing to notice about our class is that it extends JFrame (line 14). We need not have done things this way. We could simply have included an instance of the JFrame class as a data member. However, we want our class to have all the functionality of the JFrame class. By using inheritance we will be able to directly change the title or the size of our frame by using the parent methods, “setTitle” and “setSize”. After all, this is the whole point of object-oriented languages. ActionListener Our class will also implement the interface ActionListener (line 14) so that it can easily react to events, principally mouse clicks. This is the most commonly used listener and requires that we implement the “actionPerformed” method. Lines 60 through 76 implement this method and process button clicks. This method will be dealt with in detail shortly. Data Members & Constructor Only four data members are included in our class (Lines 15 through 18). Two are simple integers used to set the size of the frame. These variables are declared as “final” and will be used as default values. Using variables instead of literals makes for easier code maintenance and declaring them “final” means that they cannot be changed. The two remaining data members are URL and JEditorPane objects respectively. An URL is fairly self-explanatory, it is constructed from an HTML page in the same directory as our application, but the JEditorPane is a bit more interesting. However, before moving on to a discussion of JEditorPane a few comments about the constructor (lines 24 – 55). are in order. The constructor accepts two arguments, a title for the application and a home page for our help files. The first line of the constructor code (line 25) is a call to the parent constructor in order to set the title of our application. The home page should be a list of hyperlinks to specific help files, although for testing purposes any HTML page will do. This URL is assigned to a class variable. JEditorPane Class This is the class that will be our help file browser. It can handle content formatted as text, rich text or HTML. We don’t have to do anything special to enable it to handle HTML files. As the Sun online tutorial says, “It effectively morphs into the proper kind of text editor for the kind of content it is given”. This happens on line 30 where the content is set to an URL. One important thing that we do need to do with this component is add a HyperTextListener. Lines 35 – 46 create an anonymous inner listener of this class so that information is updated if a hyperlink is clicked. This listener simply calls the “setPage” method to change URLs. If you need more information about listeners see the article “Listeners In Java” also found on this website. The JEditorPane class is a component that gets added to our main, JFrame-derived class. This happens on line 47. Because our parent class is a JFrame it comes with a BorderLayout as its default layout manager. When an item is added to a JFrame’s contentpane its default location will be at the centre of the JFrame and this is exactly what we want. The actionPerformed Method Buttons are added to our application in the addButtons method (starting on line 80), and the application itself is added as an ActionListener to both buttons. All this means is that our buttons are able to react to events. The code that processes events is the “actionPerformed” method (line 60 and following). Our “Contents” button functions as a Home Page button by returning to the list of hyperlinks to specific help files. The “Close” button simply disposes of our help window but perhaps a few comments are in order. The line: processWindowEvent(new WindowEvent(this, WindowEvent.WINDOW_CLOSING));, could just as easily have been replaced with, dispose(); However, creating a window closing event makes for more robust code. If we decide to add a window listener that handles the window closing event then we will ensure that the same code will execute regardless of whether the user shuts down the application from the title bar or by pressing the “Close” button. In other words we will handle the closing event in one location only. It is also worth noting that our help window is disposed of when it is closed. This behaviour is set on line 50. It is important that the default close operation not be set to “EXIT_ON_CLOSE” because the class described here is an ancillary class and closing it should not end the application. Also, with this in mind, don’t forget to remove the “main” method before you incorporate this class into another class. The “main” method is included here simply for testing purposes. {mospagebreak title=Enhancements} The class we’ve created is a very basic use of JEditorPane to display HTML help files. There are numerous ways in which it could be improved and I will make a few suggestions here. Buttons to navigate to previously visited URLs might be helpful. A menubar might provide a more elegant means of navigation. Remember also that this class retains all the functionality of its parent. If you are not satisfied with the size of the window, you can simply use the “setSize” method inherited from the JFrame class. Likewise with methods such as “setBackground”. Furthermore, the functionality of this class could be extended by deriving other classes from it. That said, it is best to remember that Java only supports HTML version 3.2, and this fact will certainly restrict what can be achieved. Finally, the way in which exceptions are handled could be improved. In a follow-up article we will show how to construct a generic class to handle exceptions thrown by any Java application. To summarize, we have presented a class that creates a basic help window and can be incorporated into any Java application. In so doing we have achieved one of the goals of object-oriented programming – namely creating a reusable class.
http://www.devshed.com/c/a/Java/Java-Help-Files/
CC-MAIN-2018-13
refinedweb
1,696
55.44
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Hi, You may download the following module & make a few changes, it will work on Odoo 8. And you will get a map button on partner form. 1. Download 2. Rename __manifest__.py to __openerp__.py 3. open google_map_launch.py, remove all library imports and paste the following: from openerp.tools.translate import _ from openerp import models, fields, api, exceptions, tools 4. open google_map_view.xml, remove the following xpath: <xpath expr="/form/sheet/notebook/page/field[@name='child_ids']/form/sheet/group/group/div/div[@name='div_address']/field[@name='street']" position="before"> <button name="open_map" string="Map" type="object" class="oe_link or_right" /> </xpath> 5. Restart & upgrade the module. You will get like this: Thank you for your help! I installed the website_google_map module. I didn't see any way to use this module on frontend. I don't plan to use the website at all, just frontend. How can I access to the map from frontend? Is it same as Google Maps module for OpenERP 7 where it adds Map button on the partner form. There is a new tab labelled "Geo Localization" in partner form. I click on the "Geo Localize" button and it just fills out the geo coordinates but not taking me to the map... if you have coorddinates, you can easily do a link to{latitude},{longitude} If you prefer you can use address easily:ée+de+Namur,+1367+Ramillies Else you can follow tuto here : to create a quick google map widget ! ok I will look into this when I have time. thanks! i was hoping there is a module to place a map button or something like that without coding.
https://www.odoo.com/forum/help-1/question/is-there-any-google-map-module-for-odoo-8-81619
CC-MAIN-2017-09
refinedweb
305
66.54
If you have just taken a look at NumPy’s arrays, then Pandas’ series will be really easy to pick up. The key difference between these two data types is that series allow us to label our axes, making our grids a lot easier to read, index and utilise. Let’s fire up NumPy and Pandas and create some series. Remember to install these modules if you haven’t already. import numpy as np import pandas as pd Capacity = pd.Series(data=[60432,55097,39460]) Capacity 0 60432 1 55097 2 39460 dtype: int64 So there we have our first series, created from a list of [100,200,300]. You’ll notice that this looks quite different from our previous lists and arrays because we have an index running alongside it. What is really cool about series, is that they allow us to change these index labels: Capacity = pd.Series(data=[60432,55097,39460], index=["Emirates Stadium","Etihad Stadium","Elland Road"]) Capacity Emirates Stadium 60432 Etihad Stadium 55097 Elland Road 39460 dtype: int64 Passing an index argument changes the index labels – our data is now so much easier to read when we need to. Easier to select, too: Capacity["Elland Road"] 39460 In this example, our stadium capacities and labels were in two separate lists. We can do the same thing with a dictionary: CapacityDict = {'Ewood Park':31367, 'Liberty Stadium':20937, 'Portman Road':30311} Capacity = pd.Series(CapacityDict) Capacity Ewood Park 31367 Liberty Stadium 20937 Portman Road 30311 dtype: int64 Summary Told you series would be easy to understand. A simple concept, but one that makes our data a bit more comfortable to use – we can now understand data by labels, not just index numbers. Pandas’ data frame builds on this further to create labelled grids. Once we understand these we can really get started with data analysis in Python.
http://fcpython.com/data-analysis/series
CC-MAIN-2018-51
refinedweb
310
52.19
1.1 anton 1: \ report stack depth changes in source code in various (optional) ways 2: 1.3 ! anton 3: \ Copyright (C) 2004: 20: 21: \ Use this program like this: 22: \ include it, then the program you want to check 23: \ e.g., start it with 24: \ gforth depth-changes.fs myprog.fs 25: 26: \ By default this will report stack depth changes at every empty line 27: \ in interpret state. You can vary this by using 28: 29: \ gforth depth-changes.fs -e "' <word> IS depth-changes-filter" myprog.fs 30: 31: \ with the following values for <word>: 32: 33: \ <word> meaning 34: \ all-lines every line in interpret state 35: \ most-lines every line in interpret state not ending with "\" 36: 37: 2variable last-depths 38: 39: defer depth-changes-filter ( -- f ) 40: \G true if the line should be checked for depth changes 41: 42: : all-lines ( -- f ) 43: state @ 0= ; 44: 45: : empty-lines ( -- f ) 46: source (parse-white) nip 0= all-lines and ; 47: 48: : most-lines ( -- f ) 49: source dup if 50: 1- chars + c@ '\ <> 51: else 52: 2drop true 53: endif 54: all-lines and ; 55: 56: ' empty-lines is depth-changes-filter 57: 58: : check-line ( -- ) 59: depth-changes-filter if 60: sp@ fp@ last-depths 2@ 61: 2over last-depths 2! 62: d<> if 63: ['] ~~ execute 64: endif 65: endif ; 66: 67: sp@ fp@ last-depths 2! 68: 69: ' check-line is line-end-hook
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/depth-changes.fs?annotate=1.3;hideattic=0;sortby=log;f=h;only_with_tag=HEAD;ln=1
CC-MAIN-2022-21
refinedweb
244
73.21
media-video/mpeg4ip-1.5.0.1-r4 fails to compile with gcc-4.3 Created attachment 148190 [details, diff] Fix includes, stream usage, and array bounds for gcc-4.3 Three types of errors were necessary to fix to compile with gcc-4.3: 1. For #include <iostream.h> (and similar things) the ".h" was removed. 2. "cout" and similar things were used without "std::" 3. gcc spits an "out of array bounds" warning which by -Werror was converted to an error. Since I don't know how to avoid this warning, -Werror was removed from a Makefile.ac My fix for 2. was simply to add "using namespace std;" which is certainly not the cleanest the solution, but works well and - hey, it's a patch! Because of 3. it is necessary to apply the patch before "autoreconf". Thanks for the patch, fixed.
https://bugs.gentoo.org/show_bug.cgi?id=216008
CC-MAIN-2021-17
refinedweb
145
79.16
I was reading the Oracle tutorial about generics and could not figure out what is wrong with the example provided. (The tutorial can be found @) Quoting the tutorial: Object, resulting in a runtime error.", resulting in a runtime error." String public class Box { private Object ob; public Object getObject(){ return ob; } public void setObject(Object ob){ this.ob = ob; } } public class BoxTest { public static void run(){ String sPar = "hello"; Integer iPar = 45; Box box = new Box(); box.setObject(iPar); System.out.println(box.getObject()); box.setObject(sPar); System.out.println(box.getObject()); } } It is referring to the fact that you can write: Box box = new Box(); box.setObject(Integer.valueOf(1)); String value = (String) box.getObject(); This code compiles fine, but gives an error at runtime. In general, there is no way of knowing - short of testing every possible type - what type Box.getObject() will return. The use of the cast may seem a bit unusual - surely you would always know what you'd put into the box. It would not be possible to know, if, say, you have a method which expects a Box parameter containing a String: void doSomethingToStringBox(Box box) { String value = (String) box.getValue(); } You can call this from anywhere in your code (at least from where it is visible), so you can't be 100% sure that the person writing that code knew that Box had to contain a String. If you had correctly used the generic version of the class: Box<Integer> box = new Box<>(); box.setObject(Integer.valueOf(1)); // Compiler error - Integer cannot be cast to String. String value = (String) box.getObject(); or Box<String> box = new Box<>(); // Compiler error - Integer cannot be converted to String box.setObject(Integer.valueOf(1)); String value = box.getObject(); I emphasize the word "correctly" above, since it is still possible to use genericized Box without the generics: Box box = new Box(); and then you're back in the same type-unsafe situation as the very first example. This is called a raw type, and you should not use them in any code you write now. They are permitted for backwards compatibility with legacy code, and may be unsupported in the future.
https://codedump.io/share/kwl3t3vZCpc6/1/oracle-java-generics-tutorial-box-class-explanation
CC-MAIN-2018-26
refinedweb
364
55.74
{-# LANGUAGE ScopedTypeVariables #-} {-# OPTIONS_GHC -Wall #-} ---------------------------------------------------------------------- -- | -- Module : FRP.Reactive.Internal.TVal -- Copyright : (c) Conal Elliott 2008 -- License : BSD3 -- -- Maintainer : conal@conal.net -- Stability : experimental -- -- Timed values. A primitive interface for futures. ---------------------------------------------------------------------- module FRP.Reactive.Internal.TVal ( makeEvent, Fed, MkFed ) where -- import Control.Arrow (first) import Control.Applicative ((<$>)) import Control.Monad (forever) import Control.Concurrent (forkIO,yield,ThreadId) import Control.Concurrent.Chan -- import System.Mem.Weak (mkWeakPtr,deRefWeak) import System.IO.Unsafe (unsafePerformIO) import Data.Unamb (unamb,assuming) import FRP.Reactive.Improving (Improving(..)) import FRP.Reactive.Future (FutureG,future) import FRP.Reactive.Reactive (Event,TimeT) import FRP.Reactive.PrimReactive (futuresE) import FRP.Reactive.Internal.Misc (Sink) import FRP.Reactive.Internal.Clock import FRP.Reactive.Internal.Timing (sleepPast) import FRP.Reactive.Internal.IVar -- | A value that becomes defined at some time. 'timeVal' may block if -- forced before the time & value are knowable. 'undefinedAt' says -- whether the value is still undefined at a given time and likely blocks -- until the earlier of the query time and the value's actual time. data TVal t a = TVal { timeVal :: (t,a), definedAt :: t -> Bool } makeTVal :: Clock TimeT -> MkFed (TVal TimeT a) a makeTVal (Clock getT _) = f <$> newEmptyIVar where f v = (TVal (readIVar v) (unsafePerformIO . undefAt), sink) where undefAt t = -- Read v after time t. If it's undefined, then it wasn't defined -- at t. If it is defined, then see whether it was defined before t. do -- ser $ putStrLn $ "sleepPast " ++ show t sleepPast getT t -- maybe True ((> t) . fst) <$> tryReadIVar v value <- tryReadIVar v case value of -- We're past t, if it's not defined now, it wasn't at t. Nothing -> return False -- If it became defined before t, then it's defined now. Just (t',_) -> return (t' < t) sink a = do t <- getT writeIVar v (t,a) -- sink a = getT >>= writeIVar v . flip (,) a -- TODO: oops - the undefAt in makeTVal always waits until the given time. -- It could also grab the time and compare with t. Currently that -- comparison is done in tValImp. How can we avoid the redundant test? -- We don't really have to avoid it, since makeTVal isn't exported. -- | 'TVal' as 'Future' tValFuture :: Ord t => TVal t a -> FutureG (Improving t) a tValFuture v = future (tValImp v) (snd (timeVal v)) -- | 'TVal' as 'Improving' tValImp :: Ord t => TVal t a -> Improving t tValImp v = Imp ta (\ t' -> assuming (not (definedAt v t')) GT `unamb` (ta `compare` t')) where ta = fst (timeVal v) -- | An @a@ that's fed by a @b@ type Fed a b = (a, Sink b) -- | Make a 'Fed'. type MkFed a b = IO (Fed a b) -- The 'listSink' version of 'makeEvent' is not revealing the finiteness -- of future times until those times are known exactly. Since many -- 'Event' operations (including 'mappend' and 'join') check for infinite -- time (Max MaxBound) before anything else, they'll get stuck immediately. -- | Make a new event and a sink that writes to it. Uses the given -- clock to serialize and time-stamp. makeEvent :: Clock TimeT -> MkFed (Event a) a makeEvent clock = do chanA <- newChan chanF <- newChan spin $ do (tval,snka) <- makeTVal clock writeChan chanF (tValFuture tval) readChan chanA >>= snka futs <- getChanContents chanF return (futuresE futs, writeChanY chanA) -- makeTVal :: Clock TimeT -> MkFed (TVal TimeT a) a {- -- | Make a connected sink/future pair. The sink may only be written to once. makeFuture :: Clock TimeT -> MkFed (FutureG ITime a) a makeFuture = (fmap.fmap.first) tValFuture makeTVal -- | Make a new event and a sink that writes to it. Uses the given -- clock to serialize and time-stamp. makeEvent :: Clock TimeT -> MkFed (Event a) a makeEvent clock = (fmap.first) futuresE (listSink (makeFuture clock)) -- Turn a single-feedable into a multi-feedable listSink :: MkFed a b -> MkFed [a] b listSink mk = do chanA <- newChan chanB <- newChan spin $ do (a,snk) <- mk writeChan chanA a readChan chanB >>= snk as <- getChanContents chanA return (as, writeChanY chanB) -} spin :: IO a -> IO ThreadId spin = forkIO . forever -- Yield control after channel write. Helps responsiveness -- tremendously. writeChanY :: Chan a -> Sink a writeChanY ch x = writeChan ch x >> yield -- Equivalently: -- writeChanY = (fmap.fmap) (>> yield) writeChan -- I want to quit gathing input when no one is listening, to eliminate a -- space leak. Here's my first attempt: {- listSink :: MkFed a b -> MkFed [a] b listSink mk = do chanA <- newChan chanB <- newChan wchanA <- mkWeakPtr chanA Nothing let loop = do mbch <- deRefWeak wchanA case mbch of Nothing -> do putStrLn "qutting" return () Just ch -> do putStrLn "something" (a,snk) <- mk writeChan ch a readChan chanB >>= snk loop forkIO loop as <- getChanContents chanA return (as, writeChanY chanB) -} -- This attempt fails. The weak reference gets lost almost immediately. -- My hunch: ghc optimizes away the Chan representation when compiling -- getChanContents, and just holds onto the read and write ends (mvars), -- via a technique described at ICFP 07. I don't know how to get a -- reliable weak reference, without altering Control.Concurrent.Chan. -- -- Apparently this problem has popped up before. See --
http://hackage.haskell.org/package/reactive-0.9.7/docs/src/FRP-Reactive-Internal-TVal.html
CC-MAIN-2014-35
refinedweb
810
58.18
Simple wrapper to add "dynamic" (sets of) fields to an already instantiated WTForms form. Project description WTForms Dynamic Fields Simple wrapper to add “dynamic” (sets of) fields to an already instantiated WTForms form. Installation Simply use pip to install: pip install wtforms-dynamic-fields --pre The ‘–pre’ flag is necessary until this module has an official release. Then include it in your project: A few notes before using this module If you simply want to add one field to an already existing form, it may be less overhead to simply use setattr: setattr(Form, field_name, TextField(field_label, validators=[InputRequired()])) Doing so will attach a text field, with one validator to the “Form” object. This module is intended for slightly more complex scenarios and to offer an easier way of configuration. Also, this module, in its current state, is developed to scratch a personal itch - simple server side validation of dynamic fields (through WTForms itself). It is most likely missing some needed flexibility and/or features, so do not hesitate to pinch in or drop me a line! Quick overview Adding a field The method add_field() is used to add a field to the modules configuration. Usage: add_field(‘machine name’, ‘label name’, WTFormField, *args, **kwargs) Adding a validator The method add_validator() is used to add a validator to an added field configuration. Usage: add_validator(‘field_machine_name’, WTFormValidator, *args, **kwargs) - Decorate field machine name arguments with %’s (%some_field_machine_name%) to have them automatically suffixed with a set number if applicable. More on this below. Apply the configuration to a form Once you have setup your configuration using the above methods, you can apply it to any valid WTForm instance. Usage: process(ValidFormClass, POST) Note that POST has to be a MultiDict, which is already the case with most frameworks like Flask, Django, … Basic usage The idea behind this module is that you can add “dynamic” fields to a form that has already been created. “Dynamic” here means fields that are not rendered (nor present in the original form object) initially, but get injected into the DOM afterwards. The module uses the POST variables together with a user defined configuration to determine which fields are new and are allowed to be processed. The first thing you need, obviously, is a valid WTForms instance to put the new fields on. Say, for example, we have a form that contains a first name and a last name field, this would be declared as follows: from wtforms import Form, TextField from wtforms.validators import InputRequired class PersonalFile(Form): """ A personal file form. """ first_name = TextField('First name, validators=[InputRequired()]) last_name = TextField('Last name, validators=[InputRequired()]) When we present this form to our user, we wish to have the ability to optionally add an email address and make it required once added. In most cases, to make for a nice user experience, we go ahead an create a button that has some JavaScript bound to it which will inject the new email input field. Also, because we all like instant feedback glory, we could add some client side validation in our JavaScript to catch mistakes early and prevent a round trip to the server. However, we also want this new email address field to be correctly validated on the server and, in case when validation fails, be rendered back to the user for inspection. We do not want to write our own validation code for this field, but leverage the power of the already present, full-blown WTForms form library to do the heavy lifting. This is where this module steps in. First you will need an instance of the module: dynamic = WTFormsDynamicFields() Next you will need to build the configuration which will hold the allowed, dynamic fields (and their validators). To do this, you use the “add_field” method: define the fields machine name, the label and finally a WTForms field type: dynamic.add_field('email', 'Email address', TextField) Optionally, you can pass *args and **kwargs to the field as well. Of course, the machine name of the field needs to correspond with the input’s “name” attribute as injected by JavaScript. Also notice we do not add any parenthesis after the WTForms field type (TextField). If needed, you can also apply optional validators by using the “add_validator” method. You define on which field you wish to apply the validator and you pass in a WTForms validator: dynamic.add_validator('email', InputRequired, message='This field is required') Here too you have the ability to pass in optional *args and **kwargs to the validator. Again, no parenthesis after InputRequired, its arguments will be bound by the module later on. Now that you have added this email field and pushed a validator on it, you are ready to process your form. For the form to be processed, you will need your original form (PersonalFile in our case) and the POST that comes back from the server. Normally, you would bind your form variable directly to the WTForm instance: form = PersonalFile() To enable this module to process you form, however, you simply need to wrap its “process” method around it and add the incoming POST: form = dynamic.process(PersonalFile, request.post) Now the form will pick up the optional email field when injected and make the validation fail server side if the field is left empty. Removing the field from the DOM will make your form pass validation again (given that you filled in the first_name and last_name fields, that is). Usage with sets Now imagine the use case where you which to capture not one, but an undefined amount of email address in that same form and have them all validated correctly. With WTForms Dynamic Fields, this is trivial as the module supports sets - multiple fields of the same kind. To support these sets in your forms, you only need to uphold a simple naming convention: “_X” where X is a number. If we would add, say, four email fields, these HTML inputs would look like this: <input type="text" name="email_1" /> <input type="text" name="email_2" /> <input type="text" name="email_3" /> <input type="text" name="email_4" /> The fun fact is, you would not have to change anything to the code we used in the previous example. The module will derive the canonical name of each field (“email” in this case) and apply the user defined configuration for the email field to each individually. Advanced usage with sets A more complex scenario could occur when you would have a set comprised out of two or more fields that are dependent on one another. For example, to elaborate on our email scenario from above, imagine we wish to also capture a telephone number with each email. But, to up the stakes, we only allow one of the two fields to be filled in. This would require a dependency between the two fields - a validator which checks if its field is filled in and the other one is not. Such a validator would take the other field’s name as an argument: RequiredIfEmpty('email') The above (fictional) validator would be put on the “telephone” field to check if the email field was left empty. Now if you have multiple sets of these fields, each field name will be suffixed with a number, like we have seen before: <div><input type="text" name="email_1" /><input type="text" name="telephone_1" /></div> <div><input type="text" name="email_2" /><input type="text" name="telephone_2" /></div> <div><input type="text" name="email_3" /><input type="text" name="telephone_3" /></div> <div><input type="text" name="email_4" /><input type="text" name="telephone_4" /></div> So which field machine name would you have to pass to the validator in such a use case? For this, the WTForms Dynamic Fields module provides the ability to wrap a field name argument with % signs. dynamic.add_field('telephone, 'Telephone number, TextField) dynamic.add_validator('telephone, RequiredIfEmpty, '%email%') The module detects when it is processing a set of fields (derived from the “X” naming convention) and as such, when wrapping your field name with % signs, will append the correct suffix to the field when binding the arguments to the validator. So if we would be looking at email4, once expanded, the above code will translate to: telephone_4 = TextField('Telephone number', validators=[RequiredIfEmpty('email_4')]) Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/WTForms-Dynamic-Fields/
CC-MAIN-2018-22
refinedweb
1,398
54.86
Jeremiah Foster wrote: > > On Sep 23, 2009, at 1:22, Raphael Geissert wrote: > >> Frank Habermann wrote: >>> >>> I also want to rename the package to libphp-zendframework. >>> >> >> biased answer: ugh, why? >> That reminds me some of the libfoo-bar-moo-invent-something-else-here >> packages we have in the archive. > > One possible answer to "why?" is that libfoo-bar-baz allows users easy > access to a debian package that directly corresponds to the upstream > software. Yes, but the Zend Framework is "the" Zend Framework. Zend being the company behind its development and behind the engine used in PHP it is somewhat explicit and obvious. The risk for a namespace collision in this case is minimal (if not zero), and prefixing it with libphp- just makes the name uselessly longer. Not to mention that it is a framework, not just a library. Btw, some stats: 8089 packages with 0 dashes 10243 packages with 1 dashes 6068 packages with 2 dashes 1547 packages with 3 dashes 230 packages with 4 dashes 48 packages with 5 dashes 14 packages with 6 dashes 1 packages with 7 dashes (sid/main/i386) Cheers, Raphael Geissert
https://lists.debian.org/debian-devel/2009/09/msg00831.html
CC-MAIN-2017-26
refinedweb
189
61.97
Pandas DataFrame loc[] allows us to access a group of rows and columns. We can pass labels as well as boolean values to select the rows and columns. Table of Contents - 1 DataFrame loc[] inputs - 2 DataFrame loc[] Examples - 3 Setting DataFrame Values using loc[] attribute - 4 Conclusion - 5 References: DataFrame loc[] inputs Some of the allowed inputs are: - A Single Label – returning the row as Series object. - A list of Labels – returns a DataFrame of selected rows. - A Slice with Labels – returns a Series with the specified rows, including start and stop labels. - A boolean array – returns a DataFrame for True labels, the length of the array must be the same as the axis being selected. - A conditional statement or callable function – must return a valid value to select the rows and columns to return. DataFrame loc[] Examples Let’s look into some examples of using the loc attribute of the DataFrame object. But, first, we will create a sample DataFrame for us to use. import pandas as pd d1 = {'Name': ['John', 'Jane', 'Mary'], 'ID': [1, 2, 3], 'Role': ['CEO', 'CTO', 'CFO']} df = pd.DataFrame(d1) print('DataFrame:\n', df) Output: DataFrame: Name ID Role 0 John 1 CEO 1 Jane 2 CTO 2 Mary 3 CFO 1. loc[] with a single label row_1_series = df.loc[1] print(type(row_1_series)) print(df.loc[1]) Output: <class 'pandas.core.series.Series'> Name Jane ID 2 Role CTO Name: 1, dtype: object 2. loc[] with a list of label row_0_2_df = df.loc[[0, 2]] print(type(row_0_2_df)) print(row_0_2_df) Output: <class 'pandas.core.frame.DataFrame'> Name ID Role 0 John 1 CEO 2 Mary 3 CFO 3. Getting a Single Value We can specify the row and column labels to get the single value from the DataFrame object. jane_role = df.loc[1, 'Role'] print(jane_role) # CTO 4. Slice with loc[] We can pass a slice of labels too, in that case, the start and stop labels will be included in the result Series object. roles = df.loc[0:1, 'Role'] print(roles) Output: 0 CEO 1 CTO Name: Role, dtype: object 5. loc[] with an array of Boolean values row_1_series = df.loc[[False, True, False]] print(row_1_series) Output: Name ID Role 1 Jane 2 CTO Since the DataFrame has 3 rows, the array length should be 3. If the argument boolean array length doesn’t match with the length of the axis, IndexError: Item wrong length is raised. 6. loc[] with Conditional Statements data = df.loc[df['ID'] > 1] print(data) Output: A DataFrame of the rows where the ID is greater than 1. Name ID Role 1 Jane 2 CTO 2 Mary 3 CFO 7. DataFrame loc[] with Callable Function We can also use a lambda function with the DataFrame loc[] attribute. id_2_row = df.loc[lambda df1: df1['ID'] == 2] print(id_2_row) Output: Name ID Role 1 Jane 2 CTO Setting DataFrame Values using loc[] attribute One of the special features of loc[] is that we can use it to set the DataFrame values. Let’s look at some examples to set DataFrame values using the loc[] attribute. 1. Setting a Single Value We can specify the row and column labels to set the value of a specific index. import pandas as pd d1 = {'Name': ['John', 'Jane', 'Mary'], 'ID': [1, 2, 3], 'Role': ['CEO', 'CTO', 'CFO']} df = pd.DataFrame(d1, index=['A', 'B', 'C']) print('Original DataFrame:\n', df) # set a single value df.loc['B', 'Role'] = 'Editor' print('Updated DataFrame:\n', df) Output: Original DataFrame: Name ID Role A John 1 CEO B Jane 2 CTO C Mary 3 CFO Updated DataFrame: Name ID Role A John 1 CEO B Jane 2 Editor C Mary 3 CFO 2. Setting values of an entire row If we specify only a single label, all the values in that row will be set to the specified one. df.loc['B'] = None print('Updated DataFrame with None:\n', df) Output: Updated DataFrame with None: Name ID Role A John 1.0 CEO B None NaN None C Mary 3.0 CFO 3. Setting values of an entire column We can use a slice to select all the rows and specify a column to set its values to the specified one. df.loc[:, 'Role'] = 'Employee' print('Updated DataFrame Role to Employee:\n', df) Output: Updated DataFrame Role to Employee: Name ID Role A John 1.0 Employee B None NaN Employee C Mary 3.0 Employee 4. Setting Value based on a Condition df.loc[df['ID'] == 1, 'Role'] = 'CEO' print(df) Output: Name ID Role A John 1.0 CEO B None NaN Employee C Mary 3.0 Employee Conclusion Python DataFrame loc[] attribute is very useful because we can get specific values as well as set the values. The support for conditional parameters and lambda expressions with the loc[] attribute makes it a very powerful resource.
https://www.journaldev.com/36384/pandas-dataframe-loc
CC-MAIN-2021-21
refinedweb
814
72.66
Opened 8 years ago Closed 5 years ago Last modified 4 years ago #14051 closed New feature (fixed) Signals for transaction commit/rollback Description Some users of django-celery have the problem of publishing references to database state that has not yet been commited. E.g: def add_user(request): user = User.objects.create(...) # Import the users address book contacts asynchronously using the worker pool. import_contacts.delay(user.pk) The proposed solution is to add a way to delay these calls until the transaction is committed: from djcelery import on_transaction_commit def add_user(request): user = User.objects.create(...) on_transaction_commit(import_contacts.delay, user.pk) I can't see any mechanism to hook into commit/rollback, so it doesn't seem easy to accomplish. Do you think it could be possible to add new signals for transaction commit/rollback? Attachments (1) Change History (41) Changed 7 years ago by comment:1 Changed 7 years ago by I have created an initial patch with tests and docs. Let me know if there is anything that could be improved! A patch is attached, the changes are also available in git: comment:2 Changed 7 years ago by comment:3 Changed 7 years ago by @peritus, Patch looks good to me. Do you think the handlers should be able to interrupt commit/rollback by raising an exception, or should send_robust be used to ensure the action is completed? comment:4 Changed 7 years ago by comment:5 Changed 7 years ago by +1 for this, it would be really useful in my use case (sending tasks to celery on a post_save signal). comment:6 Changed 7 years ago by comment:7 Changed 7 years ago by Some thoughts: At first glance, the API seems kind of unwieldy. I think it'd be better to either have a set of on-commit signals for models, or to change post_* to only fire when a transaction is successful, and to include some kind of transaction ID in dispatch_uid. Regarding the example, how would you make the API proposed in the patch work in that case? How would user.pk be passed to the signal handler? I brought this ticket up to Russell at the Sydney sprint the other day, and he suggested implementing this functionality through a context manager. That might be something else to consider. comment:8 Changed 7 years ago by comment:9 Changed 7 years ago by comment:10 Changed 6 years ago by comment:11 Changed 6 years ago by comment:12 Changed 6 years ago by IIUC, the OP's original problem could be solved by committing the transaction manually, as described for example here: However, if you need to start a celery task from a signal (like post_save), you cannot use this method anymore, it seems. There was a discussion about this also here: comment:13 Changed 6 years ago by Committing the transaction is a wildly different solution, as that may not be what the user wants. Having the act of applying a task commit the transaction would be a very unexpected side-effect, and usually if there is an error or another scenario where the transaction should be rolled back, then you don't want to apply the task either. comment:14 Changed 6 years ago by Isn't the request_finished sufficient for this use case? By the time the request is finished, the transaction must be either committed or rolled back. Signals aren't free; I'd prefer to avoid multiplying them if possible. comment:15 Changed 6 years ago by request_finished doesn't tell you whether the transaction was committed or not. You don't want a task to be applied if the transaction was aborted, also the semantics of 'request_finished' is very different to 'transaction_committed', who knows what can possibly happen in between. I don't see how signals can pose that much overhead, especially if not connected to anything. If that is the case then maybe it should be optimized or another method of augmenting functionality could be used. comment:16 Changed 6 years ago by The anonymous has a valid point. Re-opening as DDN. comment:17 Changed 6 years ago by +1 for this. It would be useful for any kind of after-the commit processing that is too large to be done in-band but even more importantly the 'post save' will become much less of a gotcha because with a 'post commit' people will realize that 'post save' does not occur after a finalized transaction. Just knowing that upfront would have saved me a lot of time! comment:18 Changed 6 years ago by If I understand correctly what is wanted is not a traditional pre_commit/post_commit signal. Signals are global. What you would want is pre_commit/post_commit _hooks_. The difference is that the pro_commit/post_commit hooks are a per-transaction thing, not anything global. So, you could add callbacks which should be called when the current transaction commits, like proposed in the original description. You could of course implement hooks using pre/post commit signals, but I am not sure if it would be better to skip the signals and only implement the hooks. FWIW I do like the idea of pre_commit/post_commit hooks. However there is a small problem: there can be no guarantee that post_commit things are actually done (crash at inconvenient time). Or that there really is going to be a commit after pre_commit hooks are done. These are unlikely failure conditions, but important to understand for those cases where you absolutely need the post_commit things done. comment:19 Changed 6 years ago by How would you identify the sender(s), given that a transaction can contain changes to all kinds of models? I'm currently connecting to post_save for sender=MyModel - when MyModel is changed I want to reindex it in the background using celery. Unfortunately, celery tries to do that before the transaction is committed, and indexes the old data (finally I know what's been happening). I don't see how I could solve this if there was a simple post_commit signal - I need to identify the instance that was saved. All save()'s and delete()'s would have to collect their instances so that at the end post_commit can be emitted for each one, no? Regarding manual committing: I have lots of models with a FK and some with m2m to the model I want to index (so my signal handler first figures out which ones are affected, then spawns an indexation task each), and this needs to work in admin. Looks like 100 overrides of save() and delete() would be needed, and I better not miss any... comment:20 Changed 6 years ago by Sorry scratch that previous comment of mine - the existing handlers would just need to be wrapped in a defer() like Dave Hughes is doing: comment:21 Changed 6 years ago by comment:22 Changed 6 years ago by comment:23 Changed 5 years ago by comment:24 Changed 5 years ago by If implementing post_commit and post_rollback signals, I think one also has to think about savepoints. pre_rollback signals connected during rolled back savepoints should still be called at commit time, whilst analogously connected post_commit signals should not, potentially ending up with both post_commit and post_rollback signals being called during a transaction.commit() call, but only post_rollback signals during a transaction.rollback() call. Note this behavior is not yet implemented in django-transaction-signals. comment:25 Changed 5 years ago by Hi, As an alternative design, we can take inspiration from transaction package. DataManagers are exactly what we need. They are useful to spread the transaction calls to every backend that needs to be tight to the current transaction. By Backend I mean, it can be django.db.connection, The celery broker, or even a simple transaction safe dictionary object. comment:26 Changed 5 years ago by comment:27 Changed 5 years ago by comment:28 Changed 5 years ago by With the new transaction management introduced in Django 1.6, this should be much less of a problem in some cases, and untractable in other cases. If you're using Django's default transaction management: you're in autocommit, problem solved. Make sure you aren't in an atomic block before sending the task to celery. If you're using ATOMIC_REQUESTS, or you're within an atomic block created by code outside of your control, it becomes very hard to determine if and when a given set of changes will actually be committed to the database. Every time you exit an atomic level, some changes may be rolled back, but not necessarily all of them. Since the problem is solved in the default case, I'm marking this as fixed. It's wontfix for the non-default case, unless someone comes up with a really nice patch! comment:29 Changed 5 years ago by Calling this problem intractable is a bit of an overstatement. I think trying to implement this in terms of signals doesn't make a whole lot of sense, but a decorator approach would work quite well. For example: from django.db import transaction def view(request): thing = Thing() thing.save() @transaction.post_commit def queue_task(): ThingTask.delay(thing.id) return HttpResponse('...') @post_commit would have the following semantics: - When called in a transaction, it queues the function passed to it into a list of post-commit callbacks on the database connection. - When called outside of a transaction (in autocommit mode), it executes the function immediately. After the database connection wrapper commits, it calls each queued post-commit function. After each invocation, it commits/rolls back. If the function raises an exception, it rolls back, logs the exception, and continues executing the remaining post-commit callbacks. This system would require the Django app to be configured to commit/roll back after every request. When the database connection is closed, if there are callbacks that haven't been executed, an error would be logged (or perhaps an exception should be raised?) --- Thoughts? If you're interested, I can provide a patch that implements the above system. comment:30 Changed 5 years ago by The complexity is in handling partial rollbacks performed with savepoints. You need to keep track of which post_commit tasks must be cancelled at each level. It's doable but it's going to make the implementation of transactions more complicated than what I'm comfortable with. (It's already a bit more complicated than I'd like, since it handles joining an inner transaction with an outer one.) comment:31 Changed 5 years ago by Though, with simple {transaction_start, transaction_commit, transaction_cancel} signals, keeping track of what to call and when would be Celery's problem and not Django's. While it would be nice, it was never my intention that Django would implement this logic. comment:32 Changed 5 years ago by These signals aren't enough. You also need to track savepoint rollbacks, otherwise you could still attempt to run a task on data that wasn't inserted to the database. comment:33 Changed 5 years ago by right, I guess with autocommit an operation may easily consist of multiple transactions. I guess the only way to do this properly is by using manual transaction management. comment:34 Changed 5 years ago by "Manual transaction management" no longer exists in master. (That's why I closed this ticket.) For your use case, I recommend staying in autocommit mode (ie. avoid ATOMIC_REQUESTS), and not enqueuing tasks within an atomic block. comment:35 Changed 5 years ago by Ok, I see and agree, with the new autocommit behavior this is unlikely to catch users by surprise anymore, and race conditions will be more obvious if you're in an atomic block. Thanks! comment:36 Changed 5 years ago by I don't actually understand why this got closed. Even though manual transaction management no longer exists in master, I would still find it very useful to be able to register on_commit and on_rollback hooks for the atomic block. You might be having django models that also depend on external resources. In case of a rollback you might want to run a clean-up action. Don't actually care about savepoints... I just want to make sure I don't leave any junk behind :) Take for example the admin's change_view. If an exception is being raised while changing an object, the only way I can run any clean-up code is by overriding change_view. This doesn't seem very elegant. comment:37 Changed 4 years ago by kux, I won't include in Django partial features that vaguely cover some needs of some users. I either do things correctly or not at all. Transactions guarantee a very high level of reliability, enforced by the database. I refuse to mix them with a less reliable system that could leave data in an inconsistent state — especially signals, which are the most misused feature of Django. Your use case is easily implemented by adding cleanup tasks to a request.cleanup_on_errors list, and running them with a middleware in process_exception when ATOMIC_REQUESTS triggers a rollback. Besides, this allows you to register specific tasks rather than having to write and register a global signal handler that can cleanup anything left behind by any part of your code. comment:38 Changed 4 years ago by I explained the wontfix in greater detail: - on the mailing list: - in this project: comment:39 Changed 4 years ago by comment:40 Changed 4 years ago by I learned that get_or_create operates in a transaction even when in autocommit mode Initial patch (with tests+docs)
https://code.djangoproject.com/ticket/14051
CC-MAIN-2018-09
refinedweb
2,278
60.55
CFD Online Discussion Forums ( ) - FLUENT ( ) - - Help! Compiled UDF problem 4 Wave tank tutorial ( ) Shane April 17, 2006 22:50 Help! Compiled UDF problem 4 Wave tank tutorial Gday, FOr my university final year project I'm trying to solve the tutorial which i will later modify to analyse the breaking behaviour of a tsunami on a beach. I have a problem which involves compiling a simple UDF function in FLUENT which is as follows and is the udf given in the example. #include "udf.h" DEFINE_CG_MOTION(wave, dt, vel, omega, time, dtime) { vel[0]=(0.037098)*((1 - exp(-2.303*time))*7.7528*cos(7.7528*time) + ((2.303*exp(-2.303*time))*sin(7.7528*time))); } For the DEFINE_CG_MOTION macro, a compiled function can only be used over an interpreted function. When i build the libudf folder in FLUENT, it seems to build in that I get the following with no errors. Make sure that UDF source files are in the directory that contains your case and data files. If you have an existing libudf directory, please remove this directory to ensure that the latest files are used.(system "move user_nt.udf libudf\ntx86\2d")-1 (system "copy C:\Fluent.Inc\fluent6.1.22\src\makefile_nt.udf libudf\ntx86\2d\makefile")-1 (chdir "libudf")2 (chdir "ntx86\2d")2 Done. However, when i try to load the libudf folder after I have made it, I get the following. Opening library "libudf"... Error: open_udf_library: The system cannot find the path specified. Error Object: () Also after the build, the folder it said it would build is not present in the directory I specified neither anywhere on the system even though it seems to have built without error. I've attempted to read many forums on similar topics at without success of fixing the problem. I have both Microsoft Visual C++ 6.0 and studio.net 2003 installed on the univercity computers. I'm not sure if they are installed on each computer individually or networked, though I do not know much about networking. It seems that there are two ways to compile the UDF function; in FLUENT using the method specified or by creating your own directories and running NMAKE. When i make my own directories outside FLUENT i get NMAKE : fatal error U1045: spawn failed : Invalid argument I'm so confused. Many thanks in advance Shane freeday September 3, 2010 02:32 I have same problem with U. Tb4 All times are GMT -4. The time now is 23:20 .
http://www.cfd-online.com/Forums/fluent/40614-help-compiled-udf-problem-4-wave-tank-tutorial-print.html
CC-MAIN-2016-40
refinedweb
420
65.62
More like this - MaxMind(R) GeoIP Lite CSV Import by jbronn 4 years, 10 months ago - GoogleAdmin: GMaps base layer in Geographic Admin (GeoDjango) by jbronn 3 years, 7 months ago - WorldIP - access to IP database over API by Alrond 3 years, 8 months ago - Map GPX files to 3D GeoDjango Models by jbronn 2 years, 6 months ago - GeoJSON Serializer for GeoDjango (gis) by danielsokolowski 1 year ago The ip2long(ip) can be done more effeciently via the Python struct & socket modules. They use the base C nhtoh() family. I also provided the inverse function. I tested these on i686 hardware to be sure the unsigned integers matched the MaxMind CSV data sets. Other hardware may need the endian flag '>' switched around (See struct module docs). import socket, struct def ip4_to_int(ip): "Converts an IPv4 address, e.g. '208.0.1.4', into an unsigned long integer." return struct.unpack('>L',socket.inet_aton(ip))[0] def int_to_ip4(num): "Converts an unsigned long integer into an IPv4 address." return socket.inet_ntoa(struct.pack('>L',num)) # IPManager does not use GIS to Find Country or Location The IPManager class does not take advantage of GeoDjango. It could create a 2D map of IP addresses, and query it with a spatial index. There is a Blog on making a 2D map of IP addresses, with MySQL. It does not use Django to query the DB. I am working on modifying the snippet module to use a 2D POLYGON map on the CountryBlocks & LocationBlocks models. It is a rectangle of IP (latitudes) & (-1, 0, 1) longitudes. The new field is: ip_range = models.PolygonField() When it's ready I'll post it with reference to this. # MySQL Users Limitations GeoDjango's MySQL support has odd limitations. Here are some that I've found: 1) Only MyISAM Tables support a SPATIAL INDEX 2) GIS Fields, e.g. PointField(), must be non-NULL. Use: my_point = PointField(null=False) 3) Every GIS field gets a SPATIAL INDEX by default! # adroffner, This was more of a "proof of concept" rather than for doing anything production. In fact, it's much faster to use GeoDjango's GeoIP support for the MaxMind binary databases. "GeoDjango's MySQL support" has nothing to do with the limitations you gave -- those are limitations inherent to MySQL. Finally, in the typical use case, spatial indexes are desirable for every GIS field. If this behavior annoys you, set spatial_index=Falsein your field definition. #
http://djangosnippets.org/snippets/327/
crawl-003
refinedweb
408
64.71
v4l2-read — Read from a V4L2 device #include <unistd.h> read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf. The layout of the data in the buffer is discussed in the respective device interface section, see ##. If count is zero, read() returns zero and has no other results. If count is greater than SSIZE_MAX, the result is unspecified. Regardless of the count value each read() call will provide at most one frame (two fields) worth of data. By default read() blocks until data becomes available. When the O_NONBLOCK flag was given to the open() function it returns immediately with an EAGAIN error code when no data is available. The select() or poll() functions can always be used to suspend execution until data becomes available. All drivers supporting the read() function must also support select() and poll(). Drivers can implement read functionality in different ways, using a single or multiple buffers and discarding the oldest or newest frames once the internal buffers are filled. read() never returns a "snapshot" of a buffer being filled. Using a single buffer the driver will stop capturing when the application starts reading the buffer until the read is finished. Thus only the period of the vertical blanking interval is available for reading, or the capture rate must fall below the nominal frame rate of the video standard. The behavior of read() when called during the active picture period or the vertical blanking separating the top and bottom field depends on the discarding policy. A driver discarding the oldest frames keeps capturing into an internal buffer, continuously overwriting the previously, not read frame, and returns the frame being received at the time of the read() call as soon as it is complete. A driver discarding the newest frames stops capturing until the next read() call. The frame being received at read() time is discarded, returning the following frame instead. Again this implies a reduction of the capture rate to one half or less of the nominal frame rate. An example of this model is the video read mode of the bttv driver, initiating a DMA to user memory when read() is called and returning when the DMA finished. In the multiple buffer model drivers maintain a ring of internal buffers, automatically advancing to the next free buffer. This allows continuous capturing when the application can empty the buffers fast enough. Again, the behavior when the driver runs out of free buffers depends on the discarding policy. Applications can get and set the number of buffers used internally by the driver with the VIDIOC_G_PARM and VIDIOC_S_PARM ioctls. They are optional, however. The discarding policy is not reported and cannot be changed. For minimum requirements see Chapter 4, Interfaces. On success, the number of bytes read is returned. It is not an error if this number is smaller than the number of bytes requested, or the amount of data required for one frame. This may happen for example because read() was interrupted by a signal. On error, -1 is returned, and the errno variable is set appropriately. In this case the next read will start at the beginning of a new frame. Possible error codes are: Non-blocking I/O has been selected using O_NONBLOCK and no data was immediately available for reading. fd is not a valid file descriptor or is not open for reading, or the process already has the maximum number of files open. The driver does not support multiple read streams and the device is already in use. buf references an inaccessible memory area. The call was interrupted by a signal before any data was read. I/O error. This indicates some hardware problem or a failure to communicate with a remote device (USB camera etc.). The read() function is not supported by this driver, not on this device, or generally not on this type of device.
https://www.linuxtv.org/downloads/v4l-dvb-apis/func-read.html
CC-MAIN-2016-30
refinedweb
652
54.63
This is my analysis about how Google map works, and specially how the tiles are encoded. Google map uses pre-rendered tiles that can be obtained with a simple URL. This article explains how to build the URL for a tile from its geo coordinates (latitude/longitude). Google map uses two different algorithms to encode the location of the tiles. For Google map, the URL of a tile looks like using x and y for the tile coordinates, and a zoom factor. The zoom factor goes from 17 (fully zoomed out) to 0 (maximum definition). At a factor 17, the whole earth is in one tile where x=0 and y=0. At a factor 16, the earth is divided in 2x2 parts, where 0<=x<=1 and 0<=y<=1, and at each zoom step, each tile is divided into 4 parts. So at a zoom factor Z, the number of horizontal and vertical tiles is 2^(17-z) 17 0 16 //correct the latitude to go from 0 (north) to 180 (south), // instead of 90(north) to -90(south) latitude=90-latitude; //correct the longitude to go from 0 to 360 longitude=180+longitude; //find tile size from zoom level double latTileSize=180/(pow(2,(17-zoom))); double longTileSize=360/(pow(2,(17-zoom))); //find the tile coordinates int tilex=(int)(longitude/longTileSize); int tiley=(int)(latitude/latTileSize); In fact this algorithm is theoretical as the covered zone doesn't match the whole globe. Google uses four servers to balance the load. These are mt0, mt1, mt2 and mt3. Each tile is a 256x256 PNG picture. The URL looks like where the 't' parameters encode the image location. The length of the parameter indicates a zoom level. To see the whole globe, just use 't=t'. This gives a single tile representing the earth. For the next zoom level, this tile is divided into 4 quadrants, called, clockwise from top left : 'q' 'r' 's' and 't'. To see a quadrant, just append the letter of that quadrant to the image you are viewing. For example :'t=tq' will give the upper left quadrant of the 't' image. And so on at each zoom level... //initialise the variables; double xmin=-180; double xmax=180; double ymin=-90; double ymax=90; double xmid=0; double ymid=0; string location="t"; //Google uses a latitude divided by 2; double halflat = latitude / 2; for (int i = 0; i < zoom; i++) { xmoy = (xmax + xmin) / 2; ymoy = (ymax + ymin) / 2; if (halflat > ymoy) //upper part (q or r) { ymin = ymoy; if (longitude < xmoy) { /*q*/ location+= "q"; xmax = xmoy; } else {/*r*/ location+= "r"; xmin = xmoy; } } else //lower part (t or s) { ymax = ymoy; if (longitude < xmoy) { /*t*/ location+= "t"; xmax = xmoy; } else {/*s*/ location+= "s"; xmin = xmoy; } } } //here, the location should contain the string corresponding to the tile... Again, this algorithm is quite theoretical, as the covered zone doesn't match the full globe. Google uses four servers to balance the load. These are kh0, kh1, kh2 and kh3. Each tile is a 256x256 JPG picture. Due to the Mercator projection, the above algorithm has to be modified. In Mercator projection, the spacing between two parallels is not constant. So the angle described by a tile depends on its vertical position. Here comes a piece of code to compute a tile's vertical number from its latitude. /**<summary>Get the vertical tile number from a latitude using Mercator projection formula</summary>*/ private int getMercatorLatitude(double lati) { double maxlat = Math.PI; double lat = lati; if (lat > 90) lat = lat - 180; if (lat < -90) lat = lat + 180; // conversion degre=>radians double phi = Math.PI * lat / 180; double res; //double temp = Math.Tan(Math.PI / 4 - phi / 2); //res = Math.Log(temp); res = 0.5 * Math.Log((1 + Math.Sin(phi)) / (1 - Math.Sin(phi))); double maxTileY = Math.Pow(2, zoom); int result = (int)(((1 - res / maxlat) / 2) * (maxTileY)); return (result); } Theoretically, latitude should go from -90 to 90, but in fact due to the Mercator projection which sends the poles to the infinites, the covered zone is a bit less than -90 to 90. In fact the maximum latitude is the one that gives PI (3.1415926) on the Mercator projection, using the formula Y = 1/2((1+sin(lat))/(1-sin(lat))) (see the link in the Mercator paragraph). -90 90 PI Y = 1/2((1+sin(lat))/(1-sin(lat))) Google map uses a protection mechanism to keep a good quality of service. If one makes too many requests, Google map will add its IP address to a blacklist, and send a nice avoid being blacklisted, developers should use a caching mechanism if possible... See the whole globe at. And the four corresponding quadrants: (note the 4 servers name to balance the load) See the whole globe at. And the four corresponding quadrants: Nice, isn't it? For a sample code written in C#, see the download at the top.
https://www.codeproject.com/Articles/14793/How-Google-Map-Works?msg=2834003
CC-MAIN-2021-21
refinedweb
825
63.19
Dear Reader, In ruby as we know, the ctor like concept is actually called initializers. Hence there is no really a constructor in ruby (perhaps i have not found even after googling for hours so far). So to initialize all your local class data in ruby, you have to do some thing like this: class Myclass def initialize(firstArgument, secondArgument) @firstArg = firstArgument @secondArg = secondArgument end def SomeMethod puts @firstArg + @secondArg end end instance = Myclass.new(“hello”,” world”) instance.SomeMethod As per the above code, initialize() method is the initializer for this class. So the ruby interpreter automatically calls this method (initializer) when you use new keyword over the class name as per above code. A point worth mentioning here is that, ruby does not support method overloading like other languages, same is true for ctor. But be aware here that the initializer does not always call new to create an object here, it may some times calls allocate to create classes. What actually it means is, if an object is already in the database (created earlier or some thing) then the same object is just allocated memory and all its internal data will be null on other hand by saying new, it creates a fresh memory for the object (brand new instance) with all its internal data being set or defined. Now as per the above code, i have not specified any access modifiers viz private, public, etc. for the method initialize in the above code. So as per the ruby doc, by default all methods are public for a class but not for this initialize method though. Even if you explicitly provide a modifier to this method as public, still ruby treats it as private method. So we can see that, the ruby interpreter do actually finds initialize method on a type on which new is used even though this method is private. So it is suggested by ruby experts/geeks that to not rely on initializer method in case your overriding this method to do some custom initialization mechanism. At first, i got a bit curious that even specifying a modifier to this method still i was not able to call it explicitly using an instance. To find out if whats said every where is really true (you can call me crazy), i used Object.respond_to?() method to check if the method exists in this type i.e MyClass in above code. So i did some modifications to the above code: if instance.respond_to?(“initialize”) instance.initialize(“a”,”b”) end The if condition never gets passed, because respond_to? method can not find specified method in the MyClass instance. So i provided “include_private=true” as a second argument for this respond_to?() method. Then it was able to find the specified method, and thus the condition passed, but hey the call inside the if body do throw exception/error saying that private method is called. Other way to prove this method is private is by using Module.private_methods() API on MyClass type. It shall lists what all the private methods MyClass has and up the root hierarchy. So as per its output, this initialize method gets listed. So thats also proves that ruby no matter what i do, still treats initialize method as private only. Thanks P.S: Your comments/votes are much appreciated.
https://adventurouszen.wordpress.com/2011/10/18/crazy-ctor-concept-in-ruby-tip/
CC-MAIN-2018-05
refinedweb
554
61.77
A survey builder component for vue.js applications vue-survey-builder This is a survey builder component for vue.js applications. How to install You can install the component using npm i -S vue-survey-builder Steps to use Step 1: Once you install it, you can import the SurveyBuilder as shown below import { SurveyBuilder, SurveyBuilderJson } from 'vue-survey-builder'; Step 2: You can use it in your vue component, as shown below <SurveyBuilder : Here SurveyBuilderJson is the json, which is used to form question object. Please take a look at it here Depending on the type of question, only few keys are used in the whole JSON. Step 3: SurveyBuilder emits an event called add-update-question with a question object this.$root.$emit('add-update-question', question); In your component, keep track of this event to capture the question which is added or updated mounted() { this.$root.$on('add-update-question', question => { window.console.log(question); }); }, Each question will have an id which is a UUID field. Once you get the question object form the above event, you can check the id of with with the list of questions you have. If the id exists, then it means there is an update to the question, if the id doesn't exist, then you can directly add that question to the list of questions. You can refer the sample code in the demo repository Step 4: You can add your own logic in your component to show the list of question in read only and edit mode. There is a component called QuestionsView, to show the list of questions, which is available here. Please use this component QuestionsView in case, you want to show the list of questions added. You can import this component as shown below import { QuestionsView } from 'vue-survey-builder'; Once you import it, you can use it in your component as shown below <QuestionsView : questionsis a property which takes an array of questions. readOnlyis used to make the whole component editable or non editable, based on the value we pass. It takes true or false. Supported Question types - BOOLEAN - SINGLE_CHOICE - MULTI_CHOICE - SCALE - NUMBER - TEXT - DATE - TIME Keys of the JSON - id : This is a unique field, which will be created dynamically for every qiestion. This field is required for all type of questions. - type : This represents the type of the question. The supported types are mentioned here. This field is required for all type of questions. - multiSelect : This represents whether the question is multi select question or not. This is is falseby default and will be trueonly for MULTI_CHOICEquestion. - characterLimited : It represents the limit for characters, the user can enter. This is used for text type of questions. - hasMinMax : This represents whether the question has any min and max values or not. This will be used by NUMBERtype of questions only. - allowDecimals : This represents whether we need to allow the decimals or not. This will be used by NUMBERtype only. - sequence : This represents the sequence of the question. This field will be used by all the question types. - minValue : This represents the min value. This field will be used by NUMBERtype of questions only. - maxValue : This represents the max value. This field will be used by NUMBERtype of questions only. - labels : This represents the labels for scale type of question. This field is required for scale type only. - dateFormat : This represents the date format to be shown. This field is required for DATEtype of questions. - timeFormat : This represents the date format to be shown. This field is required for TIMEtype of questions. - intervals : This represents the number of intervals used for SCALEtype of questions. - textLimit : This represents the number character limit for TEXTtype of questions. - units : This represents the units to be shown. This field will be used for NUMBERtype of questions. - options : This represents the options of question. This field is used by SINGLE_CHOICEand MULTI_CHOICEquestions. Versions 0.1.0 This version is the initial release of this open source project. It has all the required functionalities to build the surveys using vue.js 0.2.0 This version exports SurveyBuilder, QuestionsView and SurveyBuilderJson from index.js file. To Do - Support for rating question - Introduce drag and drop GitHub Get the latest posts delivered right to your inbox
https://vuejsexamples.com/a-survey-builder-component-for-vue-js-applications/
CC-MAIN-2020-29
refinedweb
713
66.33
Hi Everyone, I have a MasterDetailPage setup where I can load a detail page which presents a webview - in that webview I sign in to a secure website. That works fine. I then change the menu bar (Master) in my MasterDetailPage and allow access to lists of pages, including a RSS list which onwardly loads (via Naviation.PushAsync(new page1())) web views that are behind the same authentication that I passed a few seconds ago. This time however I am pushed back to the sign in page?? Any know how I can force or keep the authentication session live between web views? I'm sure this has been tackled before, fixed before or solutions created before - it must be a common issue but I just can't seem to find a solution. Any and all help is greatly appreciated. Thank you Answers Bump Try to customize your own webview , create cookies function ,reset the cookies when relaunch webview . check Thanks for taking time to answer ColeX. Sadly that didn't work. The cookies already exist in the new web view but it seems to simply ignore them. V.strange. I was using Xamarin.Form 4.5 - nothing worked with that. I have rolled back to Xamarin.Forms 4.2 and magically it works now. I'm yet to do any more research as to why but potentially a bug? If I get time which I hope I will I'll do some more tests and try to narrow down why it's not working but for now, my fix is to roll back - now it's working. Xamarin.forms 4.5 removes WebViewRenderer and use WKWebViewRenderer now, which means the webview we use in forms project would be rendered into WKWebview on iOS , maybe it is the cause in your situation . Check. Good point. I suppose this change is mandatory by Dec 2020 when apple decommission UIWebView. For now I stick with Xamarin.Forms 4.2 as I've got a deployment to do, then spend more time when there is more time to figure out how to share cookies between SharedCookieStore and the WKWebViewRenderer. Appreciate all your help. Revisiting this issue with shared cookies as it's causing a me sleepless nights! I've moved to Xamarin.Forms 4.5 - WKWebView as standard - but the for life of me I can't get a signed in session state to work across all my views in iOS. literally nothing is working. I have to missing something and I need some help please forum. Every time I use <WebView.../> or <local:HybridWebView.../> I seem to get a brand new instance of a WkWebView and no cookies exist so I'm always pushed to a login page. My goal is to load a WebView from a page containing a ListView, just a RSS reader and show a news article. This news article is behind a login which I've passed previously within the app on a webview successfully to authenticate. Again here I've seen the cookies come back and exist. My HybridWebView: `public class HybridWebViewRenderer : ViewRenderer<HybridWebView, WKWebView> { WKWebView wkWebView; protected override void OnElementChanged(ElementChangedEventArgs e) { base.OnElementChanged(e); var h = e.OldElement as HybridWebView; Literally nothing is helping me ensure the cookies that show a logged in state or session are transferred over to the new WebView I'm either doing something very wrong, missing something obvious or it's not possible - whatever the answer I would LOVE some guidance please! I couldn't publish my app because of this problem for 2 months. There is no solution yet But the problem is in the new WkWebView (iOS 13 and above) not about Xamarin. Meanwhile, you should test your app on the real device when you should test the cookies, not in the simulator. I found this issue when searching for a solution to the same problem. I actually create my cookie container by doing a separate call to the server using an HttpClient beforehand, then I assign the same cookie container to my WebViews when creating them. For some reason this would only work for the first WebView I create and not subsequent ones, despite the fact that I can see that the cookie in the store is still there and is still the same. The solution for me was to not re-use the same cookie container object. Instead, I store the cookie container from my HttpClient call, and every time I create a WebView, I make a copy of that container and use it. Then all of a sudden it worked for me. I found code on Stack Overflow to make a copy of the cookie container. I would link to that answer but apparently I'm not allowed to post links yet, so here's the code: private CookieContainer CopyContainer(CookieContainer container) { using(MemoryStream stream = new MemoryStream()) { BinaryFormatter formatter = new BinaryFormatter(); formatter.Serialize(stream, container); stream.Seek(0, SeekOrigin.Begin); return (CookieContainer)formatter.Deserialize(stream); } } Thanks mate - I ended up doing exactly this and it all works now. I did simulator and device testing. Now apple turn down the app for other more "business" reasons not code - more frustraiting!
https://forums.xamarin.com/discussion/179509/shared-auth-across-multiple-webviews
CC-MAIN-2021-25
refinedweb
863
64
Recently there have been an exhausting number of posts on the elixir-lang mailing list about the shortcomings of the pipe operator. What is the pipe operator? For those not familiar, the pipe operator is |> and is used like so: a = foo(5) b = bar(a, 4) c = qux(b, 3) # or qux(bar(foo(5), 4), 3) # or foo(5) |> bar(4) |> qux(3) It's a macro that adds the expression on the left as the first argument to the call on the right. Let's look at the AST: iex(1)> quote do: foo(5) {:foo, [], [5]} iex(2)> quote do: bar(4) {:bar, [], [4]} iex(3)> quote do: bar(foo(5), 4) {:bar, [], [{:foo, [], [5]}, 4]} Pretty straightforward. We see what the AST looks like for each of these calls. iex(4)> quote do: foo(5) |> bar(4) {:|>, [context: Elixir, import: Kernel], [{:foo, [], [5]}, {:bar, [], [4]}]} Here we have a bigger tree in place, but if we expand the |> macro, we get a familiar result. iex(5)> Macro.expand((quote do: foo(5) |> bar(4)), __ENV__) {:bar, [], [{:foo, [], [5]}, 4]} So again, nothing special. Just syntactic sugar. Why is the pipe operator useful? There is a technique present in most OO languages called method chaining. Consider the following Ruby code: list.uniq.count Look at this example in Elixir without pipes: uniq_list = Enum.uniq(list) Enum.count(uniq_list) That's fine. Nothing wrong with that. With the pipe operator it cleans up a bit. list |> Enum.uniq |> Enum.count Sometimes we break it up into multiline statements. This makes it easier to add or remove code (as well as making for easier to follow diffs). # Ruby list .uniq .count # Elixir list |> Enum.uniq |> Enum.count And that's about it. That's the pipe operator. When not to use the pipe operator Here's where the confusion is coming. Elixir developers love the pipe operator because it makes a lot of common situations easier to follow. Many Elixir functions end up looking a bit like this (pulled from a production app I'm running): def get_people(params) do params |> group_params |> filter_params |> transform_params |> build_queries |> validate_query |> make_request("people_current", "people", limit, offset) |> produce_result end This captures the flow of data pretty well. However, let's say validate_query doesn't always return a successful value. It doesn't make sense to always call make_request after validate_query. So what's the solution? Less pipes. query = params |> group_params |> filter_params |> transform_params |> build_queries |> validate_query case query do {:ok, valid_query} -> query |> make_request("people_current", "people", limit, offset) |> produce_result {:error, errors} -> produce_error_result(errors) end Be explicit about branching in your application. If you need to branch on the result of a function, don't use pipe. It's perfectly fine to break up a series of pipe calls. Idiomatic Pipe Usage For people new to Elixir, the pipe operator is usually met with a positive reaction. There's an impression they get that the pipe operator is central to programming in Elixir. To see if this is truly the case,I took a look at some of the most popular Elixir libraries to see how they use pipes. - Total Lines - Total lines in lib - Pipes - Total number of pipes in lib - Pipe Frequency - How pipes are chained together ("4: 5" means there are 5 counts of using 4 pipes in a single statement) While pipes aren't uncommon, they aren't used as often as I expected. Phoenix uses them the most with 3% of lines using a pipe operator. Additionally, long pipelines are quite rare. Usually only one or two pipes is used in a single statement. Phoenix pushes it a bit and has quite a few cases of three pipes in a statement. It even has a pipeline that is 8 calls long! But overall the usage of the pipe operator is pretty conservative. Conclusion Pipes are there to make code more readable, but they don't always make sense to use. In practice, the largest elixir projects use pipes sparingly and avoid large pipelines. That doesn't mean you should avoid using pipes. Just recognize that they aren't central to most Elixir applications.comments powered by Disqus
http://undiscoveredfeatures.com/thinking-outside-the-pipe/
CC-MAIN-2018-39
refinedweb
694
65.32
] SETBUF(3) OpenBSD Programmer's Manual SETBUF(3) NAME setbuf, setbuffer, setlinebuf, setvbuf - stream buffering operations SYNOPSIS #include <stdio.h> void setbuf(FILE *stream, char *buf); void setbuffer(FILE *stream, char *buf, size_t size); int setlinebuf(FILE *stream); int setvbuf(FILE *stream, char *buf, int mode, size_t size); DESCRIPTION The three types of buffering available are unbuffered, block buffered, and line buffered. When an output stream is unbuffered, information ap- pears ANSI X3.159-1989 (``ANSI C''). BUGS The setbuffer() and setlinebuf() functions are not portable to versions of BSD before 4.2BSD. On 4.2BSD and 4.3BSD systems, setbuf() always uses a suboptimal buffer size and should be avoided. OpenBSD 2.6 June 4, 1993 2
http://www.rocketaware.com/man/man3/setbuf.3.htm
crawl-002
refinedweb
118
58.28
The QTextFragment class holds a piece of text in a QTextDocument with a single QTextCharFormat. More... #include <QTextFragment> Note: All functions in this class are reentrant.. Creates a new empty text fragment. Copies the content (text and format) of the other text fragment to this text fragment. Returns the text fragment's character format. Returns an index into the document's internal list of character formats for the text fragment's character format. See also QTextDocument::allFormats(). Returns true if the text fragment contains the text at the given position in the document; otherwise returns false. Returns true if this is a valid text fragment (i.e. has a valid position in a document); otherwise returns false. Returns the number of characters in the text fragment. Returns the position of this text fragment in the document. Returns the text fragment's as plain text. See also length() and charFormat(). Returns true if this text fragment is different (at a different position) from the other text fragment; otherwise returns false. Returns true if this text fragment appears earlier in the document than the other text fragment; otherwise returns false. Assigns the content (text and format) of the other text fragment to this text fragment. Returns true if this text fragment is the same (at the same position) as the other text fragment; otherwise returns false.
https://doc.qt.io/archives/qt-4.7/qtextfragment.html
CC-MAIN-2021-17
refinedweb
224
67.65
Mono: A Developer's Handbook 301 When learning a new language such as C#, or working with a new development environment such as Mono, it usually takes some time before you get up to speed in developing programs. Wading through the reference documentation and reading other people's source code often provides much-needed information on how to do certain things. Both, however, are very time consuming and tedious. Enter Mono: A Developer's Notebook. This book provides a series of task-driven chapters which are thin on theory, but rich on practical content and example code. The featured code snippets are, in contrast to ones in books that teach theory and concepts, not solely designed to illustrate a specific theoretical aspect of programming. Each one is designed to perform a useful task that is essential in day-to-day application programming. What sets this book apart from the multitude of .NET books already available on the market? In order to answer this question it is neccesary to provide a short introduction on Mono. Mono is essentially an open source cross-platform implementation of Microsoft's .NET development framework and implements the API's which are standardized by ECMA. It is, however, not an exact clone. Besides providing a (partially implemented) stack that provides compatibility with Microsoft's .NET API's, Mono adds a whole new API-stack of its own, consisting of open source technologies such as the Gtk+ toolkit and the Gecko HTML rendering engine. This makes it possible to develop cross-platform applications based on open source technology while (mostly) compiling from a single code-base. In contrast to most .NET books available on the market, which focus primarily on Microsoft's API's in the context of Visual Studio.NET, this book concentrates on the basic ECMA API's and Mono's own open source stack. A complete coverage of .NET and the Mono architecture is outside of this review's scope, so for more information you are advised to check the Mono Project's website. Before we dive deeper into the content of the book, a short introduction on the Developer's Notebook series by O'Reilly may be useful. The books in this series are styled to resemble the kind of notebooks college students carry around during their classes in which to take notes or, more commonly, draw caricatures of their teachers. The 'notebook' theme persists throughout the look-and-feel of the book. The 278-page thick paperback has a glossy blue cover, complete with faux post-it note and coffee-stains. Inside, the pages are not clean white but lined like the pages found in math notebooks. In the margin, useful comments are scribbled in a font that resembles handwriting. At first I suspected that the 'busy' look would distract from the content, but in practice this was no problem, thanks to the thick black typewriter font in which the bulk of the text is printed. The chapters in this book are referred to as labs. Each of them focuses on a specific set of tasks and/or features and is divided into several paragraphs. Most paragraphs consist of a number of standard sections following a rigid formula that help you understand a certain aspect of working with Mono. The most common sections are: - How do I do that?: Often using a liberal amount of practical code, this section shows how to accomplish the task at hand, for example working with files. - How it works: In this section, the code and concepts involved in the previous section are explained more in depth, step by step. - What about...: Offers a short focus on more advanced topics or pitfalls. - Where to learn more: If you are craving more information after reading the previous sections, you are often offered a helping hand on where to find more information, providing url's to relevant documentation such as MSDN and other websites. The first chapter, Getting Mono Running, describes how to get Mono up and running on Linux, Windows or Mac OS X, and how to compile from source on other platforms. The installation instructions for Windows only describe how to install Mono and Gtk#. Integration of Gtk# only in an existing Visual Studio.Net installation falls outside of the scope of the book, but a recent blog entry offers some hints on how to accomplish this. Besides installation, the first chapter offers a short description of the individual tools that make up the mono development. After installation, you will want some kind of editor or IDE to work with. Both the MonoDevelop IDE and several other ways of integrating Mono into your existing environment as a Java or Windows developer are covered. Finally, the community is an important aspect of every open source project. Ways of interacting with the community as well as a guide on how to submit bugs and links to some working Mono/C# applications are part of this chapter. The C# introduction in the second chapter, Getting Started with C#, is tailored towards people who have at least some proficiency in using an object-oriented language such as C++ or Java. Some differences between C#, Java and C++ are discussed, as well as the differences between value- and reference types, the basics of error handling, working with assemblies and more. Concepts such as classes, methods, inheritance and namespaces are assumed to be known territory. If you have no previous programming experience, Mono: A Developer's Notebook is only useful in combination with a book that teaches programming with C# such as The C# Programming Language by Anders Hejlsberg. An important part of any modern language is its class libraries. The third chapter, Core .NET, provides an introduction to the standard Framework Library Classes, which describes essential everyday tasks that are part of every program, such as working with files, strings, searching for text patterns and handling collections of data. Besides those basic functions, the chapter also dives deeper into the internals of a compiled assembly, the handling of processes and easy multitasking using threads. Finally, the last paragraph explains how to use a .NET version of the JUnit Java Unit testing framework, Nunit, to test your code. Developing Gtk-applications with Mono and C# is remarkably easy. Chapter 4, Gtk#, describes the basics of writing Gtk# applications. First, it's neccesary to remark that Gtk# might be a bit of a misnomer. Besides the raw Gtk+ toolkit functionality, Gtk# also includes most of the Gnome libraries like gconf, the gnome canvas, libglade and more. Chapter 4 describes functionality available in the Gtk namespace, the basic Gtk+ toolkit. Gtk+ is a constraints-based toolkit, which means that widgets are not positioned using absolute pixel coordinates but rather on basis of their logical relation to each other. This can be a bit confusing for novices, but this chapter provides a good introduction to the basic principles of writing layouts using Gtk#. The authors provide descriptions of essential operations that almost every application needs, such as creating menus and drawing pixmaps (or more advanced things like using the treeview widget and drag-and-drop), assisted by easy-to-read code snippets. While chapter 4 introduces basic Gtk# functionality, chapter 5, Advanced Gtk#, delves deeper into more advanced features of the Gtk# library which also include functionality outside of the basic Gtk-namespace, such as the Gnome libraries. Working with Gnome button toolbars, the Glade user interface designer, storing your application settings in Gconf, setting up some preferences through the use of a wizard/druid, asynchronous operations and threading to increase responsiveness of your application while performing background tasks, rendering HTML in your application using the Gecko rendering engine and internationalisation and translation of applications are all described in this chapter. The use of XML is tightly integrated throughout the Mono framework. It is, for example, the underlying format of the messages that web services use to communicate using the SOAP and XML-RPC protocols. The 6th chapter, Processing XML, describes the XML functionality available in Mono. It starts off by simple operations, reading and writing to an XML-file using relevant examples such as RSS and Dashboard clue-packets. It then proceeds to describe how to modify XML in memory, how to navigate and transform XML using Xpath and XSLT, how to constrain XML in several ways and how to serialize and deserialize objects into and from their XML representation. As in previous chapters, the information density is very high so it might take several reads to grok everything explained. The code examples and accompanying text however are very clear and concise. The 7th chapter called Networking, Remoting, and Web Services describes the networking functionality available in Mono. The chapter starts off with ASP.NET. Mono's stand-alone XSP webserver and Apache integration with mod_mono are discussed, as well as the basics of writing a web application using ASP.NET's code-behind functionality which enables web applications to completely seperate presentation from the underlying code. Communication using plain tcp/ip, remoting using binary serialized objects and invoking remote procedures using XML-RPC as an alternative to SOAP are also described in this chapter. You might want to encrypt the data you send over the network, so a basic description of the Mono cryptographic API is provided. Finally, a short introduction to database handling using ADO.NET concludes chapter 7. The 8th and last chapter titled Cutting Edge Mono starts off with an introduction on how to use the GNU Automake, Autoconf and the pkg-config tools to create an easy to build source package of your project. It then proceeds to describe various pitfalls and considerations in case you want to write cross-platform applications using Mono, such as filesystem layout, configuration storage and the calling of native code using p/invoke. A particularly cool project is IKVM, which translates Java bytecode into the Common Intermediate Language bytecode Mono uses. This enables Mono to run Java applications and allows Java and Mono code to inter-operate. A short introduction on the use of IKVM is provided, as well as some code examples on how to call Mono assemblies from Java and use the Java class libraries from within Mono applications. The chapter ends with some other cutting-edge functionality, like how to run a development version of Mono, a preview of the Generics (templates in c++) implementation available as featured in C# 2.0 and how to write Mono programs in Basic. What is missing? The book doesn't contain a reference section on any of the described API's. If you need detailed information on the C# language specification or an API reference you will need to consult external resources such as the documentation provided with Mono, MSDN, or a separate book covering the topic to make optimal use of the information contained in this book. Fortunately, the book kindly provides pointers on where to find those. The information-density is much higher than you would expect from a book this size. This means the information contained in it is terse. Many topics are treated in a only a couple of pages and the book doesn't take time to explain a lot of programming concepts. The information gets you 'on the road' quickly however, which is exactly what this book is supposed to do. The strength of this book is that it fills the gap between the earlier-mentioned reference documentation and the need to go out and try to read sourcecode to find out how a particular thing is done. The writing style is clear, concise and neutral. Some topics are clarified by the use of screenshots, which is especially useful in the chapters dealing with Gtk# widgets. All in all, if you are a developer with previous experience in object-oriented programming, Mono: A Developer's Notebook will provide you with an excellent introduction into many of the aspects of working with Mono, its associated libraries and programs. More information and a sample chapter can be found at the book's homepage. You can purchase Mono: A Developer's Handbook from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. No animal on the cover? (Score:5, Funny) Re:No animal on the cover? (Score:5, Funny) Unless you count the rare and beautiful stickynote species, of which there are colors, shapes, and sizes. I have a small herd of sticky notes that surround my monitor and desk. My co-worker swears that he has heard me talking to them. I don't understand why this is strange, considering that many people talk to their pets. These are no ordinary pets, however... I was warned by the old Chinese man at Office Depot that I should not feed them after midnight, expose them to bright sunlight or let them touch water. He didn't explain what time after midnight it would OK to feed them, how bright the light can be, or how they are able to survive in moist environments and be made up of so much water but not touch it. Weird. Beware the stickynote, for it is a mighty foe when angered. Re:No animal on the cover? (Score:2) Re:No animal on the cover? (Score:2, Funny) No animal on the cover? Since the book deals with Mono, might I suggest a gourami [thetropicaltank.co.uk]? mono (Score:4, Funny) Re:No animal on the cover? (Score:2) which language gets the dung beetle? Cobol Summary of the next 100 posts (Score:5, Funny) Re:Summary of the next 100 posts (Score:2, Funny) Re:Summary of the next 100 posts (Score:2) Why not use Java? Probably didn't feel like it. Does it run Linux? Yes, where else would you expect GTK# to run? I will ignore a few here... Re:Summary of the next 100 posts (Score:2, Informative) Re:Summary of the next 100 posts (Score:3, Interesting) Ruby's GTK and Gnome or QT and KDE bindings. While C# and dotnet get enormous amounts of PR hype, they really don't amount to much in the real world, where the platform lock-in and enormous bloat that dotnet entails spell doom. Re:Summary of the next 100 posts (Score:5, Insightful) The big advantage to As for the patent stuff, I'm not aware of what patents cover Re:Summary of the next 100 posts (Score:3, Informative) That's assuming you don't count Rotor [c-sharpcorner.com], Microsoft's reference implementation of the ECMA-standardize bits of .NET (the C# language, the CLR, etc, but not bits like WinForms), which at least works on various BSD platforms, and is licensed such that you can safely port it to other platforms, or use it as the basis for a reimplimentation (think BSD's network stack, for Re:Summary of the next 100 posts (Score:3, Informative) Personally, I like it. Throwing everything together in a huge registry is a can of worms just asking for trouble. Re:Summary of the next 100 posts (Score:2, Funny) It's about time! Everyday when I come into work, I have all these software maintainers waiting for me with a common plea: please code my application with ten different languages! It's nice to know that I can know fulfill their request and provide them with unending job security. Re:Summary of the next 100 posts (Score:3, Interesting) Assuming that is true, the only real worry for Mono is that Microsoft would change .NET enough to break what compatibility they have, but the Mono project has already explained why this doesn't worry them. I just wanted to clarify this part in case people are wondering why. Basically Microsoft can't break .NET compatibility in old versions of the Framework because it'll break all the programs that use those versions. So versions 1.0 and 1.1 can be considered unchangeable right now (except for fixes probab Re:Summary of the next 100 posts (Score:2) Let's get a few things straight about Mono: Re:Summary of the next 100 posts (Score:2) Wrong. Just because something is standardized does NOT mean it isn't patented or patent pending. The two are orthogonal issues, there are many patented standards out there. With a patented standard, you ned to license the patent to legally use the standard. # To th Re:Summary of the next 100 posts (Score:2) Can you name a case where something similar (let's say an API or language spec) was submitted as, let's say, an ECMA spec and later the implementor(s) was sued into oblivion? I find it highly unlikely that Microsoft would legally be able to make public Re:Summary of the next 100 posts (Score:2) I can think of several ISO and ANSI specs where this happened. I don't know about EMCA in particular, in fact before MS used them, I'd never heard of EMCA. You can also look back at the W3C proceedings a few years back- they were deciding wether or not to allow patented technology in standard. The no patent groups barely won. But t Re:Summary of the next 100 posts (Score:5, Funny) Re:Summary of the next 100 posts (Score:3, Insightful) If you want cross-platform and (fairly) strong typing, use Java, if you want loose typing and want Linux or OS X, use Objective-C ( GNUStep/Cocoa respectively for UIs ), if you want M$, use Visual Studio C++ or flavor-of-the-moment C#, or flavor-of-the-last-moment VisualBasic, or ( somebody's favorite Wxyz windows-centric development platform here )... But seriously ( for a moment ), without asking why not use Java, why use C# ? What's the benefit Because You Like It? (Score:2) Re:Forgot one... (Score:2) It isn't important if C++ or any other language really did solve the same problems. Why should people use C++ if they don't like and and no one is paying them to use it? Are you also deadset against every other language, too? Considering the fact that almost everything in Linux and the opn source arena is a knock-off of something else, accusations of "copying" Re:Forgot one... (Score:2) So let me get this straight: If I wanted to use Mono, I'd have to: relearn how to do everything I already know how to do in C++... rewrite all of my personal libraries... learn a completely new set of bugs in a new API.... Dude. If you have portable a)threading b)database connectivity c)GUI d)App servers working in you precious little C++ personal libraries setup you are either bullshitting us or smoking dope or do not know what you are talking about. I have spent many years developing enterprise ap Re:Forgot one... (Score:2) If you have portable a)threading b)database connectivity c)GUI d)App servers working in you precious little C++ personal libraries setup you are either bullshitting us or smoking dope or do not know what you are talking about. Well ... Most of these are admittedly C based, or available for C, but then real programmers don't bother with falsh in the pan stuff l Re:Forgot one... (Score:2) but point out that it won't buy us anything, such as additional capabilities. What do you do? What threading/GUI/networking/DB/utilities (such as regex) libraries do you use? What development environment / automated testing /build environment? In each of this areas Java will bring rather larger productivity gains. It is not just code. It is things like automated testing frameworks, ANT, 3rd party libraries working much more reliably. I am not a big fan of Java, but our team migrated to it and ad Re:What's funny (Score:2) What's funny is that Slashdotters criticize Microsoft constantly for not innovating and for ripping others off. Meanwhile, we're discussing C# and a .NET clone >C# is a Java clone, while .NET is a JFC clone. Of course, Java and the JFC was inpsired by Objective C, NeXTstep and Smalltalk, but at least they weren't a blatant attempt to make an incompatible version of something just for monpololy preservation. Objective-C (Score:4, Interesting) Re:Objective-C (Score:3, Informative) Sidenote #2, IronPython, which runs on Mono, has been shown to perform better (on average, 1.7x better) under most performance tests than standard Python v2.3. (this is not a troll or flame-incitation, just a FYI). See IronPython.com [ironpython.com], or this paper from PyCon 2004 [python.org]. Re:Objective-C (Score:5, Informative) It was a little less than a year ago that I first started investigating the Common Language Runtime (CLR). My plan was to do a little work and then write a short pithy article called, "Why@ironpython.com. Lesson to be learned, if you think something from MS sucks, only to find out it doesn't, you might just get hired. Re:Objective-C (Score:4, Informative) To each their own, but I suspect you are in a small minority. Garbage collection, safe modules, type-safe linkage, and runtime code generation are all important modern language features that C# has and Objective C lacks. Re:Objective-C (Score:3) Perhaps for a lot of apps, but not for me. I had a close look at Objective-C about a year ago. I started reading up on it and found that it missed operator overloading. I tend to do a lot of graphics and physics programming (nothing professional of course) and the thought of doing vector, matrix, and quarternion maths without operator overloading was just unacceptable. But I notice that ARToolkit [artoolkit.org] is written in Obj-C (whenever it's released). Could a seasoned Obj-C coder explain whether there's a way to get Re:Objective-C (Score:2, Informative) Re:Objective-C (Score:2) I had a close look at Objective-C about a year ago. I started reading up on it and found that it missed operator overloading. Operatror overloading? OPERATOR OVERLOADING??? One of th momst abused features of C++ and you miss it? You clearly haven't worked in the industry long enough to seen the abortions people produce with misconceived features like these. Re:Objective-C (Score:3, Insightful) Object-orientation is just a language feature, it's not a true indicator of the level at whi Mono (Score:4, Funny) Re:Mono (Score:2) - So what do you do for a living? - I develop Mono - Er.. Look at the time, I've got to be going now... Re:Mono (Score:2, Funny) There are always books explaining how to accomplish difficult tasks. Most people here would be hard pressed to contract mono the way you described. Mono (Score:2, Funny) Re:Mono (Score:2) Re:Mono (Score:2) Well, if you hadn't been sucking face... Mono vs .NET Framework (Score:3, Interesting) Re:Mono vs .NET Framework (Score:4, Informative) The mono c# compiler allows you to create CIL (common intermediate language) code, which is analygous to java byte-code, except for just-in-time compilation. The mono implementation of the The whole point about the When Mono's Windows.Forms implementation is complete, you should be able to do this same thing with complete GUI applications, however in the mean time that's what GTK# and other linux-based APIs are for. Re:Mono vs .NET Framework (Score:5, Informative) Personally, I disagree. Visual Studio is still a phenomenal IDE, and its GUI tools are some of the best on the market. But I think as we see Mono stabilize and mature past 1.0, the GUI tools for Gtk#, ASP.net, and the new Managed.Windows.Forms implemenation will be quite impressive. (Disclaimer: This isn't a knock at Glade. Glade is good at what it does, but with the advent of Mono it's time for a replacement.) Re:Mono vs .NET Framework (Score:3, Informative) But, in addition, Mono also offers bindings to the Gtk+/Gnome APIs, and that makes software development a lot easier for Gnome developers. There are some GUI builders you can already use with those Mono bindings, and a new GUI builder will be Re:Mono vs .NET Framework (Score:2) I suppose if I develop in Mono for Windows I need to be able to make an I guess I can always use Visual Studio to create nice installer that would include gtk/glade dlls and whatever else I may need. But can I do the same without VS? Sweet Spot? (Score:4, Interesting) 1) operating system kernel or core library, embedded or high-performance programming. This niche only finished moving from assembly to C a few years ago. C++ is usually too slow & big & unwieldy for this niche, let alone C# or Java, although we may be ready for it in 5 years or so. 2) application programming. Here development speed is more important than execution speed. Python and kin provide 'good enough' execution speed when coupled with proper libries (QT, etc) with the fastest development speed. What kind of code falls between the 2? Sure there is some, but is it interesting? Bryan Re:Sweet Spot? (Score:5, Interesting) > than execution speed. Python and kin Right on. And with Ruby/Python/etc you can always dip down into a C library for bits that turn out to be performance-critical. With Ruby, this is usually as simple as something like: Hard to beat... Re:Sweet Spot? (Score:2, Informative) The same is true for C#, using the platform invoke mechanism, it's even simpler. Re:Sweet Spot? (Score:2) The same is true for C#, using the platform invoke mechanism, it's even simpler. [DllImport("myclib.dll")] static extern void DoSomething(); DoSomething(); The point isn't that lower level languages can call libraries -- I'd *hope* that any non-toy language can do it -- the point is if a higher level scripting language can do it, why *bother* with all the extra bother of low level languages? Non-Trivial Case Please? (Score:2) How does Ruby know whether or not it has to free() the string returned from Curl.curl_version? These things may look nice at first, but trying to automagically use low-level languages as if they were high is doomed to a painful failure You may be aware of this, but your post is misleading in that it makes the reader think the programmer is freed from dealing with the pesky details of using a low-level language. Re:Sweet Spot? (Score:3, Insightful) Re:Sweet Spot? (Score:5, Insightful) It's a myth that C/C++ is particularly fast or efficient for those applications: in the absence of language-provided features like garbage collection, runtime safety, or dynamic typing, people end up reinventing those features over time, badly and less efficiently. Both Gtk and Qt are actually sad examples of this: not only does their functionality suffer from their choice of language (each has invented their own object models), their resource requirements are embarrassingly bad. application programming. Here development speed is more important than execution speed. Python and kin provide 'good enough' execution speed when coupled with proper libries (QT, etc) with the fastest development speed. Languages like Python have other problems for the development of large systems, like the lack of static type checking. Python is great, however, for prototyping, extensions language uses, and for single programmer projects. But, in any case, there is a lot of application software that requires much better performance than languages like Python can deliver: CAD systems, graphics systems, image encoders/decoders/editors, vector graphics renderers, typesetting and layout software (including web browsers and editors), audio encoders/decoders, GIS systems and mapping programs, speech recognition engines, and lots more. No, application developers have neither the time nor the resources to turn all the compute intensive core functionality in C/C++ code and then link that into Python. C# is a good middle ground. let alone C# or Java, although we may be ready for it in 5 years or so. The performance of Sun's Java implementation is excellent (although Java sucks for other reasons). The performance of C# implementations is quickly catching up with Java implementations. Re:Sweet Spot? (Score:3, Interesting) That depends. For single processor 32bit x86 environments, I've found .NET to be significantly faster than Java. It helps to be able to inline short non-virtual methods, perform allocations for simple non-native-typed objects on the stack, etc. The gap will probably only widen as the two camps release their very different implementations of generics. Java clobbers .NET in the 64 bit world and in the multi-processor Re:Sweet Spot? (Score:2) This is a baseless argument. What evidence do you have that GTK (I have no experience with QT) is a poorly designed object framework? In actuality I find it a superbly well-designed framework that translates *very* well into these other object-oriented languages like C#, Java, and C++. Py Re:Sweet Spot? (Score:2) Nowhere did I say it was "poorly designed". Gtk+ is probably about as good a toolkit as you can design with C. Python and Perl also have bindings that are very comfortable within their own object framework. Yes, after a lot of work has been spent on fixing the bindings and the bugs. And even then PyGtk applications written in pure Python still can crash with memory management and pointer errors. And Re:Sweet Spot? (Score:2) The performance of Sun's Java implementation is excellent But not near as good as JRockit(BEA) on Intel. Re:Sweet Spot? (Score:4, Insightful) Huh? I typically find that I don't have the time not to do this. Programming in Python takes me about 5 to 10 less time than programming the same functionality in C, and in the rare cases something is too slow even with Psyco [sourceforge.net], I use Pyrex [canterbury.ac.nz] for the inner loop, typically a single function or class. CAD systems: I am not familiar with those, what exactly is too performance-critical for Python in CAD systems? graphics systems: Huh? image encoders/decoders/editors: Image encoders/decoders are typically very small projects - small enough to write in C or other low-level languages. vector graphics renderers: Probably true typesetting and layout software (including web browsers and editors): Python is fast enough for these, on non-antique hardware. audio encoders/decoders: Similar to image encoders/decoders, these are small and should be implemented in a low-level language. GIS systems and mapping programs: What is time-critical about these? speech recognition engines: I suspect there's a small algorithm running in an inner loop and a lot of higher-order code. Only the inner loop needs C, and that only if you want Real-Time behavior. Ever since I turned to write nearly all my code in Python, my productivity was boosted by hundreds of percents, and I am less surprised with time that Python is fast enough in almost all cases when it is used right. Re:Sweet Spot? (Score:3, Insightful) This is a debatable point. On the one hand, yes, it is dynamically typed. On the other hand, having easy access to an interpreter and zero compilation times leads to interactive testing and a more incremental approach to building large applications. In my experience, the latter far outweighs the former (as long as raw performance isn't the issue). Re:Sweet Spot? (Score:2, Interesting) Projects like IronPython give you all the advantages of working in Python, all the advantages of working on top of the CLR frame Re:Sweet Spot? (Score:2) What kind of code falls between the 2? Sure there is some, but is it interesting? The whole enterprise middle-tier application world. The one where they spend billions of dollars on software, you know? Where hordes of mindless drones whip out custom code that actually has to run reasonably fast and interoperate with a variety of systems. In short - everything you want to run in an app server to do business. I thought... (Score:3, Funny) Before anyone starts trolling... (Score:5, Informative) Mono with the Mono with GTK#, Gnome, Mozilla and other libraries doesn't have that problem because the only thing that it uses from Microsoft is the ECMA standard C# language implementation. Why Mono and not Java? Mono is 100% open source. Why Mono and not Python? Mono uses a virtual machine environment that is faster than an interpreted language. Some people prefer the Java and C++ similarities that C# offers. Mono is cuasi language independent. You can use Python in Mono (See Iron Python). "Miguel de Icaza is wasting his time..." Miguel works on Mono because he likes it, he is not employed by you (except if you are Novell) so he spends his time as he sees fit. He owes you nothing. Cheers, Adolfo Re:Before anyone starts trolling... (Score:3, Informative) Well.. if you're going to be "educated and informed", you should at least trouble yourself with considering the fact that there are open-source implementations of Java. JBoss, gcj, kaffe... And it's also fair to remember that there are indeed two different patented libs in different part of build tree (Score:3, Informative) Mono with GTK#, Gnome, Mozilla and other libraries doesn't have that problem because the only thing that it uses from Microsoft is the ECMA standard C# language implementation. And the beaut think about that is the *potentially encumbered* libraries are in a different part of the build tree, ready to be pulled if a MS (FUD) patent does get served. Re:Before anyone starts trolling... (Score:2) Ever wonder how Miguel got into this position of being more convincing than you are? Shit happens. (That is actually a perfectly serious answer). For the uninitiated (Score:4, Informative) There's an RSS feed [mono-project.com] for the newest news, updates, etc. on Mono, too. Developing Mono. (Score:4, Funny) Who needs a book? (Score:3, Funny) Argh! (Score:5, Informative) Java/C# = Strongly statically typed Python/Ruby = Strongly dynamically typed "Loose" typing is another way of saying "weak" typing. Meaning the system doesn't enforce type safety. In almost all scripting languages, type safety is strongly enforced. Re:Argh! (Score:3, Informative) Re:Argh! (Score:2) I keep trying to forget TCL... the drugs are helping... aaahhh!! Vignette!! Tk applications that don't scale!! burning! death!.... Ok, I'm ok... it's ok... please don't mention that language again.... PLEASE. Re:Argh! (Score:2) C/C++ = Weakly statically typed Java/C# = Strongly statically typed Why? Because you can cast to a void *? You can cast to Object in Java and it's not checked until run time. The typing of Java is no stronger than C++. At least we have strongly-typed collections in C++. How many times have you pulled the wrong thing out of a Java collection and not found this out until run-time... The silly things they print in textbooks... Re:Argh! (Score:2) Re:Argh! (Score:3, Informative) You *don't* have strongly-typed collections in C++. You've got *statically* typed collections in C++. They are, however, still *weakly* typed. The difference between weak and strong typing is in that a strongly typed language, the compiler enforces the type system. The difference between static and dynamic typing is that in a dynamically typed language, some type information may not be known until runtime. Maybe you should read one of those textbooks... Re:Argh! (Score:2) Here is somthing funny (Score:3, Informative) C# without .NET? (Score:3, Interesting) Oh hell, I'll bite (Score:5, Insightful) Re:Oh hell, I'll bite (Score:2) Where did you get the idea that Firefox was written in Java? It's derived from Mozilla (which is C++) and makes heavy use of XUL which uses JavaSCRIPT. That has nothing to do with Sun's Java one single bit. Re:Oh hell, I'll bite (Score:2) Thats odd.. (Score:2) Every try pascal or Object Pascal (Delphi)? Re:Thats odd.. (Score:2) Hehe... (Score:3, Funny) Yeah, of course now that there's an open source implementation, C# is a good language... No advantage of C# over Java (Score:2, Offtopic) But Java has better support on a bigger variety of systems. And it does not rely on the good will of Microsoft to allow it to run on Linux. And when using JRockit JVM on INtel harware - it is fast. Faster then equivalent C# code in my applications (prototyped certain numerical algorithms) And it has a better development environment: Eclipse, IntelliJ etc. Why C#. Why make it sound that this is some revelation of a language? T Re:No advantage of C# over Java (Score:2) Re:No advantage of C# over Java (Score:2) The 32 bit space & allocation limits of the JVM I must have been dreaming running JRockit on Itanium. Or, much more likely, you do not know what you are talking about. Java beats pants off C# on 64 bit. Java does not depend on the good will of Sun as long as folks like IBM, Oracle, BEA etc are around. Re:No advantage of C# over Java (Score:2) Re:No advantage of C# over Java (Score:3, Interesting) Re:No advantage of C# over Java (Score:3, Insightful) To date Java has produced exactly two good desktop applications, Eclipse and Azureus. It's an abysmal failure, and the associated stigma won't disappear anytime soon. Telling users to go to Windows update and pull down C# Rocks (Score:3, Interesting) There are mistakes in how the C# language has been designed that really bug me at times, but I've been developping in C# for about 2 years now and I've never been so productive in my life. It has a lot of the advantages of Java, but with a better UI (and by better, I mean better looking AND better performance. The Java ones seem to be one or the other: Fast and Ugly or Slow and Pretty, and maybe even some slow and ugly ones). I really hope Mono can keep up with the Longhorn development because I really expect the library design to be better. There are a lot of aspects of the But complaints aside, overall it's excellent and as I said, I've never been so productive! As an independent contractor, that means a lot to me. Re:What about java? (Score:5, Informative) Mono is Novell's implementation of this standard [ecma-international.org]. On the other hand, Same standard, 2 different implementations of the standard. Re:What about java? (Score:3, Informative) Re:What, no VB? (Score:2) Re:What, no VB? (Score:2, Informative) Re:Cobol (Score:4, Informative) Re:Cobol (Score:2) Re:It's not the language that counts... (Score:2) And frigging HASHMAPS. And regular expressions. And.. People who claim C++ does everything have not done commercial development. And the hell of thrid party libraries (RogueWave anyone?) Re:Notebook or Handbook? (Score:2) I'm reminded of a series of booklets Radio Shack used to sell, back before it sucked, called "Engineer's Mini-Notebook". Written in a draftsman's block-print hand, they contained schematics and explanations of fun electronics projects to try -- all of the parts, of course, available from Radio Shack. I still have two or three of these. These days you can't buy anything much more wireheadish than gold-plated audiowank cable.
http://news.slashdot.org/story/04/09/28/1856257/mono-a-developers-handbook
CC-MAIN-2015-40
refinedweb
6,658
63.29
Reflect as You Work: My Python Project Workflow Flo Updated on ・12 min read One of the apprenticeship patterns in Apprenticeship Patterns is Reflect as You Work. This pattern is about introspecting on how you work regularly. Doing this often allows developers to notice how their practices have changed and even how they haven't. This isn't just about observing yourself. As the book says, "Unobtrusively watch the journeymen and master craftsmen on your team. Reflect on the practices, processes, and techniques they use to see if they can be connected to other parts of your experiences. Even as an apprentice, you can discover novel ideas simply by closely observing more experienced craftsmen as they go about their work." p. 36 I have been thinking about my own practices and those of others around me. The workflow I follow when I create new Python projects particularly stands out because I learned it from sitting with another engineer. I noted what they did and asked questions. Then, I went back to my desk and tried it myself while taking more notes. I followed the resulting workflow so many times that the steps now flow from my fingertips with ease. I think there could be ways to optimize even this workflow but first I am going to note it down here for the potential future reader and for future me to look back on! P.S. Many of the extra details I included here I learned from my colleagues. A big thank you to them for sharing what they know with me 💓 Prerequisites pyenvis installed. New Python Project Checklist - Install a specific Python version. - Create a project directory. Go to the directory. - Set the Python version for the project. - Create a virtual environment. - Activate the virtual environment. - Install dependencies. - Save packages. - Run the code. Note: this workflow should work on MacOS. PREREQUISITE: Install pyenv. Mac OS X comes with Python 2.7 out of the box. If you haven't fiddled with anything, you should be able to open up a Terminal window and type in python --version and get some 2.7 variant. You probably don't want to use the version of Python that comes shipped with your OS (Operating System) though. There are many reasons for this like that the version may be out of date. I have even come across an important library that was missing. Not only do you want to avoid using the version of Python that is shipped with your machine, in your work you will need to have several different versions of Python installed at once. For example, perhaps one codebase is using an older version of Python due to some library dependency. Upgrading the version of Python you are using for that project could require some refactoring of that project that you haven't prioritized. At the same time, you may be using a newer Python version on other projects because you want to take advantage of shiny new features. Having several Python versions installed on your machine is a realistic scenario for a Python developer. Managing these versions effectively is important. There are instructions on how to install pyenv here. When you run a command like pythonor pip, your operating system searches through a list of directories to find an executable file with that name. This list of directories lives in an environment variable called PATH, with each directory in the list separated by a colon... pyenv works by inserting a directory of shims at the front of your PATH so that when you call python or pip these shims are the first thing your OS finds. The commands you enter are, then, intercepted and sent to pyenv which decides which version of Python to use for your command based on some rules. Follow the instructions to install pyenv. Make sure you follow the rest of the post-installation steps under Basic GitHub Checkout even if you use Homebrew to install. When I was installing I found that I had a .bashrc AND .bash_profile. Here is an article on the difference between them and when either file is used. If after following the instructions, you type in pyenv and do not get something like the following, go back and make sure you set the other bash file: flo at MacBook-Pro in ~ $ pyenv pyenv 1.2.8 Usage: pyenv <command> [<args>] Some useful pyenv commands are: commands List all available pyenv commands local Set or show the local application-specific Python version global Set or show the global Python version shell Set or show the shell-specific Python version install Install a Python version using python-build uninstall Uninstall a specific Python version rehash Rehash pyenv shims (run this after installing executables) version Show the current Python version and its origin versions List all Python versions available to pyenv which Display the full path to an executable whence List all Python versions that contain the given executable See `pyenv help <command>' for information on a specific command. For full documentation, see: Step 1: Install a specific Python version. Suppose I'm creating a script that will open the latest xkcd comic in a web browser. I'm going to run it with Python 3.7.0. flo at MacBook-Pro in ~ $ pyenv install 3.7.0 python-build: use openssl from homebrew python-build: use readline from homebrew Downloading Python-3.7.0.tar.xz... -> Installing Python-3.7.0... python-build: use readline from homebrew Installed Python-3.7.0 to /Users/flo/.pyenv/versions/3.7.0 Step 2: Create a project directory. Go to the directory. flo at MacBook-Pro in ~ $ mkdir Documents/comic-creator flo at MacBook-Pro in ~ $ cd Documents/comic-creator/ Step 3: Set the Python version for the project. First, look at the files in the folder, even the hidden files ( -la will show hidden files). flo at MacBook-Pro in .../comic-creator $ ls -la total 0 drwxr-xr-x 2 flo staff 64 Apr 12 21:12 . drwx------+ 33 flo staff 1056 Apr 12 21:12 .. Now, set the Python version for the project. Now you can see a hidden file (hidden files start with a dot). When you look inside .python-version, you can see the version we set. flo at MacBook-Pro in .../comic-creator $ pyenv local 3.7.0 flo at MacBook-Pro in .../comic-creator $ ls -la total 8 drwxr-xr-x 3 flo staff 96 Apr 12 21:16 . drwx------+ 33 flo staff 1056 Apr 12 21:12 .. -rw-r--r-- 1 flo staff 6 Apr 12 21:16 .python-version flo at MacBook-Pro in .../comic-creator $ cat .python-version 3.7.0 Step 4: Create a virtual environment. Just as you may have several Python versions installed on your machine, you may also have different versions of Python packages installed. Imagine the dependency graph for one of your projects looks like this: requests==2.21.0 - certifi [required: >=2017.4.17, installed: 2019.3.9] - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4] - idna [required: >=2.5,<2.9, installed: 2.8] - urllib3 [required: >=1.21.1,<1.25, installed: 1.24.1] In another project, you may be using a different version of requests which depends on a different version of certifi. By using virtual environments, we can keep package installations isolated by project. A virtual environment is a Python environment such that the Python interpreter, libraries and scripts installed into it are isolated from those installed in other virtual environments, and (by default) any libraries installed in a “system” Python, i.e., one which is installed as part of your operating system. Python venv docs So, first, you can verify (again) that we correctly set the Python version for the project. Now, create a virtual environment by calling venv and call that new environment venv. You can now see the environment is created. flo at MacBook-Pro in .../comic-creator $ python --version Python 3.7.0 flo at MacBook-Pro in .../comic-creator $ python -m venv venv flo at MacBook-Pro in .../comic-creator $ ls venv Step 5: Activate the virtual environment. Look inside venv. Then, look inside venv/bin. bin stands for binary. In Linux/Unix-like systems, executable programs needed to run the system are found in /bin. Similarly, Python executable programs are stored in bin. Activate the virtual environment with source. sourceis a Unix command that evaluates the file following the command executed in the current context... Frequently the "current context" is a terminal window into which the user is typing commands during an interactive session. The sourcecommand can be abbreviated as just a dot (.) in Bash and similar POSIX-ish shells. Wikipedia This means that if you open a new Terminal window, you will need to source the activate file again to activate the virtual environment in that window! Also note that you can type in . venv/bin/activate and it will do the exact same thing as source venv/bin/activate. flo at MacBook-Pro in .../comic-creator $ ls venv/ bin include lib pyvenv.cfg flo at MacBook-Pro in .../comic-creator $ ls venv/bin/ activate activate.csh activate.fish easy_install easy_install-3.7 pip pip3 pip3.7 python python3 flo at MacBook-Pro in .../comic-creator $ source venv/bin/activate Let's look at activate: flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ cat venv/bin/activate # This file must be used with "source bin/activate" *from bash* # you cannot run it directly deactivate () { # reset old environment variables if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then PATH="${_OLD_VIRTUAL_PATH:-}" export PATH unset _OLD_VIRTUAL_PATH fi if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" export PYTHONHOME unset _OLD_VIRTUAL_PYTHONHOME fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then hash -r fi if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then PS1="${_OLD_VIRTUAL_PS1:-}" export PS1 unset _OLD_VIRTUAL_PS1 fi unset VIRTUAL_ENV if [ ! "$1" = "nondestructive" ] ; then # Self destruct! unset -f deactivate fi } # unset irrelevant variables deactivate nondestructive VIRTUAL_ENV="/Users/flo/Documents/comic-creator/venv" export VIRTUAL_ENV _OLD_VIRTUAL_PATH="$PATH" PATH="$VIRTUAL_ENV/bin:$PATH" export PATH # unset PYTHONHOME if set # this will fail if PYTHONHOME is set to the empty string (which is bad anyway) # could use `if (set -u; : $PYTHONHOME) ;` in bash if [ -n "${PYTHONHOME:-}" ] ; then _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" unset PYTHONHOME fi if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then _OLD_VIRTUAL_PS1="${PS1:-}" if [ "x(venv) " != x ] ; then PS1="(venv) ${PS1:-}" else if [ "`basename \"$VIRTUAL_ENV\"`" = "__" ] ; then # special case for Aspen magic directories # see PS1="[`basename \`dirname \"$VIRTUAL_ENV\"\``] $PS1" else PS1="(`basename \"$VIRTUAL_ENV\"`)$PS1" fi fi export PS1 fi # This should detect bash and zsh, which have a hash command that must # be called to get it to forget past commands. Without forgetting # past commands the $PATH changes we made may not be respected if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then hash -r fi Step 6: Install dependencies. If you haven't come across "dependencies", this word is often used to say that something is dependent on something else... Makes sense. In our case, our Python project will depend on installing various libraries that don't come already bundled with Python 3.7.0. This is what our code looks like: import json import sys import webbrowser import requests # url of latest xkcd comic URL = '' if __name__ == '__main__': response = requests.get(URL) if response.status_code == requests.codes.ok: content = json.loads(response.text) print('Comic is located at {}'.format(content['img'])) webbrowser.open(content['img']) else: print('Error: \n {}'.format(response.text)) sys.exit() Create a file comic_popup.py in the project and add this code. If you try to run the code you will get an error. requests module isn't installed. Let's install it. flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ touch comic_popup.py flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ ls comic_popup.py venv flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ pip install requests Collecting requests Using cached Collecting chardet<3.1.0,>=3.0.2 (from requests) Using cached Collecting urllib3<1.25,>=1.21.1 (from requests) Using cached Collecting idna<2.9,>=2.5 (from requests) Using cached Collecting certifi>=2017.4.17 (from requests) Using cached Installing collected packages: chardet, urllib3, idna, certifi, requests Successfully installed. Step 7: Save packages. Notice what is printed when you enter pip freeze. This command outputs installed packages in requirements format ({library-name}={version}). In the next line, redirect that output to a file called requirements.txt using >. A single > will overwrite the contents of the file if the file already existed. Using >> would append to the contents of an already existing file. You don't have to call the file requirements.txt but that is what most Python developers use so follow the convention! More on requirements files here. You may also notice that requests isn't the only library outputted by pip freeze. The other libraries are libraries that requests depends on so when you install requests you must install the others for requests to work. These other libraries are referred to as transitive dependencies. flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ pip freeze. flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ pip freeze > requirements.txt You are using pip version 10.0.1, however version 19.0.3 is available. You should consider upgrading via the 'pip install --upgrade pip' command. flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ cat requirements.txt certifi==2019.3.9 chardet==3.0.4 idna==2.8 requests==2.21.0 urllib3==1.24.1 flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ pip --help Usage: pip <command> [options] Commands: install Install packages. download Download packages. uninstall Uninstall packages. freeze Output installed packages in requirements format. list List installed packages. show Show information about installed packages. check Verify installed packages have compatible dependencies. config Manage local and global configuration.. --no-color Suppress colored output Step 8: Run the code. That's it. You should be able to run the code now. You would be able to run it as soon as you install the dependencies it needs but don't forget to save your requirements! flo at MacBook-Pro in .../comic-creator using virtualenv: venv $ python comic_popup.py Comic is located at Now, if you save your code to a repo, anyone can pull the code and run it. Add a README.md and include which version of Python to use to run the code. The next developer will set up the right Python version and install the requirements by running pip install -r requirements.txt. Don't include the .python-version file in the repo because the file is pyenv specific and other developers may have their own way to manage Python versions. As a rule of thumb, I don't include files that are specific to me like configuration files for various IDEs (Integrated Development Environments) because they clutter up the repository. Ignore these files in your repository by adding and configuring a .gitignorefile. That's my development workflow when I start a Python project! I included some developer best practices where I felt it fit. I also explained as much context as I felt appropriate. I encourage you to try out different ways of doing the same thing to see the pros and cons of each. Feel free to ask any questions! I'd love to chat about best practices and what works for you as well. So many parts of our workflows are by convention or because that's the way we first learned it or we don't know any better. I'd love to hear from you! Personal and Professional Growth Through Constructive Feedback If you are constantly questioning yourself about your personal and professional growth that means you are already off to a great start! I think you've described the workflow well! I don't have experience with the builting venv, I'd always use virtualenv. I honestly don't know which is the difference. I've since moved on from pip freezeto Pipenv mostly because it integrates pip, pyenvand virtualenv/ venv, it provides a lock file for dependencies and can easily separate runtime dependencies from those in use only in development. I have the feeling there are as many combination of "managing Python packages" as there are stars. I've heard about anaconda/conda also but I've never used it. Thanks for sharing! I had heard of Pipenv but hadn't looked into it further. Will start playing with it now! Ugh, pipenv is soooo sweet. Loving the dependency graph feature too! Glad you're liking it! That's really neat! These two aliases I set in my shell might be of help (still hope they'll add them at some point): Thanks for the aliases! Thanks for the aliases!!!! I am seconding pipenv! I was introduced recently, and it solves a problem that I will eventually have to deal with: getting my project to run on other peoples' machines easily during development! I tried all weekend to get this working and finally gave up and went w/ venv. But ultimately if it works for you, that's all that matters. Hi John, you tried getting it working with pipenvand then ended up using venv? Yes. Pipenv gave me all kinds of fits - I needed it to just work but it didn't so I went with venv
https://dev.to/floinnyc_/reflect-as-you-work-my-python-project-workflow-49he
CC-MAIN-2019-35
refinedweb
2,936
67.55
04 October 2010 18:44 [Source: ICIS news] WASHINGTON (ICIS)--The pace of US home buying rose by 4.3% in August from July, the National Association of Realtors (NAR) said on Monday, noting that contracts signed for residential properties have risen for two consecutive months. The association said that its pending home sales index rose by 3.4 points or 4.3% in August to 82.3 from the revised 78.9 index level reported for July. A home purchase is listed as a pending sale when a contract has been signed but the transaction has not closed, although the deal usually is completed and funded within a month or two. The pace of pending home sale contracts is seen as a reliable forward-looking indicator for the housing market. The index was launched in 2001 with the baseline figure set at 100. The pending sales index had been in decline since its recent peak of 112.4 in October 2009 before seeing an upturn in March and April this year when a federal tax credit for home buyers provided a stimulus for sales. But when the tax credit programme expired at the end of April, the index fell sharply to 77.7 in May and 75.5 in June. The housing market is a key downstream consumer sector for the chemicals industry, driving demand for a wide variety of chemicals, resins and derivative products such as plastic pipe, insulation, paints and coatings, adhesives and synthetic fibres among many others. The renewed upturns in July and August are seen as possible evidence that housing demand may be picking up, even without the stimulus of the expired federal tax credit. NAR chief economist Lawrence Yun said the improving pending sales data for July and August “are consistent with a gradual improvement in home sales in upcoming months”. “Attractive affordability conditions from very low mortgage interest rates appear to be bringing buyers back to the market,” Yun said. However, Yun cautioned that a sustained recovery in the crucial ?xml:namespace> The With the unemployment rate still high, even those Yun also cautioned that housing affordability could shift quickly if mortgage loan rates should begin to rise. He noted that commodity and wholesale prices recently have begun to edge upward, raising concerns about future inflation pressures that in turn could bump mortgage rates higher.
http://www.icis.com/Articles/2010/10/04/9398567/us-home-buying-contracts-rise-in-august-for-second-month.html
CC-MAIN-2014-52
refinedweb
392
62.38
#include <db.h> int db_env_set_func_seek(int (*func_seek)(int fd, off_t offset, int whence)); The Berkeley DB library requires the ability to specify that a subsequent read from or write to a file will occur at a specific location in that file. The db_env_set_func_seek() function configures all operations performed by a process and all of its threads of control, not operations confined to a single database environment. Although the db_env_set_func_seek() function may be called at any time during the life of the application, it should normally be called before making calls to the db_env_create or db_create methods. The db_env_set_func_seek() function returns a non-zero error value on failure and 0 on success. The func_seek parameter is the function which seeks to a specific location in a file. The fd parameter is an open file descriptor on the file. The seek function must cause a subsequent read from or write to the file to occur at the byte offset specified by the offset parameter. The whence parameter specifies where in the file the byte offset is relative to, as described by the IEEE/ANSI Std 1003.1 (POSIX) lseek system call. The func_seek function must return the value of errno on failure and 0 on success.
http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/C/db_env_set_func_seek.html
CC-MAIN-2014-10
refinedweb
204
59.84
Seam Book (Yuan & Heute) Hello World Example Annotation John Peters Greenhorn Joined: May 25, 2007 Posts: 18 posted Feb 11, 2008 13:27:00 0 In Michael & Thomas' Seam book, in the chapter 2 Hello World example, why is the "person" variable outjected? Doesn't the "@Name" annotation on the Person entity already make the "person" available to Seam? Thanks! SLSB: @Stateless @Name("manager") public class ManagerAction implements Manager { @In @Out private Person person; @Out private List<Person> fans; @PersistenceContext private EntityManager em; public String sayHello(){ em.persist(person); person = new Person(); fans = em.createQuery("select p from Person p").getResultList(); return null; } Entity: @Entity @Name("person") public class Person implements Serializable { private long id; private String name; @Id @GeneratedValue public long getId(){ return id; } public void setId(long id){ this.id = id; } public String getName(){ return name; } public void setName(String name){ this.name = name; } } XHTML: <body> <h:form> Please Enter your name:<br /> <h:inputText<br /> <h:commandButton </h:form> <h :D ataTable <h:column> <h :o utputText </h:column> </h :D ataTable> </body> [ February 11, 2008: Message edited by: John Peters ] Hussein Baghdadi clojure forum advocate Bartender Joined: Nov 08, 2003 Posts: 3479 I like... posted Feb 11, 2008 23:46:00 0 You use @Name to tell Seam that this is a Seam component and the component will be bijected under this name (you can override it however). To actually outject a component you need to use @Out. HTH. John Peters Greenhorn Joined: May 25, 2007 Posts: 18 posted Feb 12, 2008 06:39:00 0 Hey John, thanks for the reply. I think I'm following you, but I'd like some clarification, please: I was playing around with the SLSB and removed the "@Out" annotation on the "person" variable, so the SLSB looks like this: @Stateless @Name("manager") public class ManagerActionBean implements ManagerAction { @In private Person person; @Out private List<Person> fans; @PersistenceContext private EntityManager em; public String sayHello(){ em.persist(person); person = new Person(); fans = em.createQuery("select p from Person p").getResultList(); return null; } } Afer redeploying the EAR, everything still worked without any problems. My confusion is why did the authors annotate the "person" class (that is injected in the SLSB) with "@Out" to begin with? It appears Seam was able to create an entity bean just by using the "@Name" annotation on the entity bean itself, and the "@Out" annotation on the SLSB for the "person" class was redundant. Is this correct or am I missing something? Thanks again, for your help. I've been reading everything I can on bijection and still haven't had the epiphany I need. Hussein Baghdadi clojure forum advocate Bartender Joined: Nov 08, 2003 Posts: 3479 I like... posted Feb 13, 2008 01:40:00 0 Think of it in this way: When you annotate a Seam component with @Out, you are telling Seam that you want to store this component under some scope. Remember those lines: PurchaseOrder po = new PurchaseOrder(); session.setAttribute("purchaseOrder", po); When you are using @Out, you are doing the same thing. Alternativley, when you are using @In, you are telling Seam you want to get the object from a desired scope (context in Seam parlance) Remember this line: session.getAttribute("purchaseOrder"); ? In your case, yes sure, you can remove @Out from person declaration but you can't write this in your view page: <h utputText Because it is not stored under a scope, you have to write: <h utputText Actually, you have to avoid the excessive use bijection as it could harm the performance. John Peters Greenhorn Joined: May 25, 2007 Posts: 18 posted Feb 15, 2008 14:01:00 0 Sorry for the late reply. I think I'm getting it now, thanks to your help. One thing I noticed that I think helped me finally get it was that when the @Out annotation was removed from: private Person person; the text box no longer was blank after a submit. I figured out that it wasn't blank because after the "person" variable was set to a new blank person, (person = new Person() )it wasn't "pushed" back out to Seam to pick up. Let me restate it so I make sure I get it: You're using the @In and @Out to inject and outject items from the Seam contexts which are held in different scopes (session,conversation,page,etc) under names defined by the @Name annotation. When the xhtml form is submitted, it creates an entity bean (in this case, a Person)based off of how the "value" tags are set up on the "inputText". The @In annotation allows the SLSB to bring in that entity bean from the Seam context into the variable named "person" and persist it. After the entity is persisted, the "person" variable is set to a new, empty person, outjected back into the Seam context where it is rerendered on the xhtml page as a blank Person entity. I agree. Here's the link: subject: Seam Book (Yuan & Heute) Hello World Example Annotation Similar Threads datatables and selectOneMenu default Value Problem Input in a Nested table Problem with "rendered" Flag Problem with getRowData() method Datatable with scrollbar in JSF All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/60852/oa/Seam-Book-Yuan-Heute-World
CC-MAIN-2014-42
refinedweb
884
55.98
Front-End Web & Mobile AWS Amplify Studio – Figma to Fullstack React App With Minimal Programming AWS Amplify announces AWS Amplify Studio, a visual development environment that offers frontend developers new features (public preview) to accelerate UI development with minimal coding, while integrating Amplify’s powerful backend configuration and management capabilities. Amplify Studio automatically translates designs made in Figma to human-readable React UI component code. Within Amplify Studio, developers can visually connect the UI components to app backend data. For configuring and managing backends, Amplify Admin UI’s existing capabilities (such as “data”, “authentication”, “storage”, and more) will be part of Amplify Studio going forward, providing a unified interface to enable developers to build full-stack apps faster. To show case the new UI capabilities, let’s build a “Home Listing” application that shows the most recently added homes. Build your backend and frontend in one visual development environment First, let’s deploy this starter React app which includes a GraphQL API backend and a React frontend with the libraries. Click on the button below to deploy: Deploy with Amplify Hosting This workflow forks a GitHub repository and deploys a new Amplify app with a pre-configured backend. Once deployed, click “Open Studio” to browse your Amplify app configurations. Explore the data model in the “Data model” section. In this case we have “Home” model with just a few fields to represent a listing. Next, let’s use the content tab to auto-generate some random seed data for your app. Click on “Content” and then select to “Auto-generate seed data” under the “Actions” menu to seed your database with sample data. Let’s create 5 records for now and also introduce the constraint of “Street address” to the address field to make the auto-generated seed data more semantically accurate. Next, add in some image URLs to into the record’s “image_url” field. (Tip: if you don’t have a image URL handy, try using Unsplash’s random photo generator.) Your content tab should look something like this: Time to build the frontend. Let’s explore the new “UI Library” preview feature. Seamless designer-to-developer hand-off With Studio’s new “UI Library (Preview)”, you can sync components from Figma to Amplify Studio. Amplify also provides you a handy Figma file to get started faster. The Amplify Figma file includes both UI primitives and pre-built components. Studio can also sync any new components created in Figma as well! Let’s clone the Figma file as Amplify Studio suggests. Note: this step will require you to have a free Figma account. Explore the Amplify UI component primitives in the “Primitives” Figma page. You can also find a range of pre-built UI components in the “My Components” page. Optionally, you can also create your new Figma component. You can skip this step and use the pre-built “CardB” component instead if you want. Design a new component in Figma like you’d normally do! I’ll create a new component with auto-layout support an image on top and a couple of Text elements below. Back in Studio, paste in the Figma file link to import all the components from your Figma file. You can either choose individual components to import or import all of them via the “Accept all” button on the top right corner. Bind UI components to data With your UI library populated, let’s bind some of these components to data. Select the component and click “Configure”. In the UI component editor, you can define component properties and then bind them to different UI elements. For our app, let’s add a new property called “home” and select the “Home” type. To bind images, you can simply select the “src” property and bind it to the home’s “image_url” field. Next, select the child elements and bind their “label” to a value from the listing property. We can bind the text element to the home’s address. For price, we can even use some lightweight syntax expression to have the UI render “Price: $” + “home.price” + “/night”. To see how your component scales with different information and data, click on “Shuffle preview data” and Studio will shuffle through your app data and populate the component with live data. Create a collection Individual components are great but most of the time you’ll want to show a “collection” of a component. With Amplify Studio, you can make a collection of any component by clicking the “Create collection” button on the top right corner. Configure all your layout options on the left hand side and then configure your data sources on the right hand side. Let’s use a grid layout for this collection and set the column count to 3. In addition, let’s add 10px padding to all sides of the element to additional spacing. Next, let’s modify the data set used to populate the collection and add a new sort condition to sort by the newest homes to oldest homes. Choose “Add sort”, then select “createdAt” and “Descending” as the sort condition. Pull into your React app Let’s get the component code into our React app! To get the starter React code, all you have to do is clone your fork of the GitHub repository, the command should look something like this: git clone git@github.com:<GITHUB_USERNAME>/amplify-homes.git Then change into the “amplify-homes” directory: cd amplify-homes Note, if this is your first time setting up an Amplify project locally, you’ll need to install the Amplify CLI by running: npm install -g @aws-amplify/cli Then, click on “Get component code” to retrieve the component code. In this sample repository, the “initial project setup”, which includes adding the correct dependencies and importing the default CSS styles, is mostly completed already. You only need to install all npm dependencies locally: npm install Next, follow the instructions in the “Get component code” modal. Pull your UI components into your app’s code base: amplify pull --appId <YOUR_APP_ID> --envName <YOUR_ENV> After the amplify pull, a few key files have been added to your React code base: - “ui-components/” contains all UI components from your Figma file as React code - “models/” contains the local representation of your data model, allowing you to use it with DataStore to fetch, update, and subscribe to your app data - “aws-exports.js” defines all backend connection details such as API endpoints, API keys, or Amazon Cognito user pool ids Now, let’s add the UI components to your app. Go to your App.js and import the UI component. Then, place them in the render function. Your App.js file should look something like this: import './App.css'; import { NewHomes, NavBar, MarketingFooter } from './ui-components' function App() { return ( <div className="App"> <NavBar /> <NewHomes /> <MarketingFooter /> </div> ); } export default App; We also imported the “NavBar” element and the “MarketingFooter” as well to make the app more delightful faster. To test your app, run: yarn start You should see something like this in your browser: Extend in Code The generated UI components accept properties available on the “Flex” component or properties available on the “Collection” component. For example, to make a component go full width you can use all the properties available on an Amplify UI “Flex” component. In this case, I’ve set width={“100vw”} for the NavBar and the MarketingFooter, so it scales with my browser window size. We can also enable pagination for NewHomes by setting the isPaginated and itemsPerPage properties. import './App.css'; import { NewHomes, NavBar, MarketingFooter } from './ui-components' function App() { return ( <div className="App"> <NavBar width={"100vw"}/> <NewHomes isPaginated itemsPerPage={3}/> <MarketingFooter width={"100vw"}/> </div> ); } export default App; Now, you can change the window size and also paginate through the collection as well. There are many more customizations you can apply in code such as applying overrides to child elements, setting up onClick handlers for collection items, or set hover states on icons. Review Extend via Code in the Amplify Studio documentation. Amplify UI Library – from Preview to General Availability Amplify Studio’s UI library feature is currently still in developer preview. There is a range of improvements we’re making before general availability: - Ability to define event-based actions within Studio (e.g. on clicking a component, go to a specific page) - Add search, pagination, and filtering on collections - Ability to define S3 storage bindings (e.g. link an avatar to a user’s profile picture) - More UI components (color pickers, maps, avatar, file uploader) 🥳 Success Success! Your app is built! Ready to get started with your own brand new app? Get Started with Amplify Studio. Please send us your feedback about the new UI Library feature of Amplify Studio via GitHub or our Discord community.
https://aws.amazon.com/ru/blogs/mobile/aws-amplify-studio-figma-to-fullstack-react-app-with-minimal-programming/
CC-MAIN-2022-05
refinedweb
1,466
52.9
MassTransit interface-style messages Did you know that in MassTransit you can use interfaces for your message contracts rather than classes? If not, you can and here is how. Let’s take the following example: public class RegisterSalesOrder { public string OrderNumber { get; set; } } public class Program { public static void Main()) { var bus = BuildNewBus(); bus.Publish(new RegisterSalesOrder { OrderNumber = "abc" }); } } Nothing too surprising here I hope. We get our bus instance and then publish our message. Later someone will consume it. Like so: public class Program { public static void Main() { var bus = ServiceBusFactory.New(sbc => { //ignoring a bunch of setup sbc.Subscribe(subs => { subs.Handler<RegisterSalesOrder>(msg => { //do stuff }); }) }); } } more configuration options here Why interfaces? So this is great and all, but if our contract was that messages should be immutable it would be nice to have that enforced by the compiler. But with classes that can be tricky if we want to support being able to easily build the messages when we need to as well. We could just use private setters, but now they are a pain to build and would require constructor parameters which further makes serialization a pain. public class RegisterSalesOrder { public string OrderNumber { get; private set; } } MassTransit support interface based contracts like this public interface RegisterSalesOrder { string OrderNumber { get; } } Now we have an immutable contract, and we can subscribe to this interface in our subscription setup part of the bus. The trick is now in the publishing. How do you create an interface and set its properties and publish it? I simple create a class that implements the interface (outside of the contract dll). I usually make them in the publishing system as they are a private implementation detail of that system. public class MyRegisterSalesOrder : RegisterSalesOrder { public string OrderNumber { get; set; } } Now I can create an instance of the contract w/ ease and it can be made in whatever way my publishing code needs it. It can be built using a ctor, a factory method, through the container, whatever my app wants or needs. Then I can simply publish it bus.Publish(new MyRegisterSalesOrder { OrderNumber = "bob" }); And subscribers to the interface will get the message as it conforms to the interface. Boom.
http://codebetter.com/drusellers/2014/01/01/masstransit-and-interface-contracts/
CC-MAIN-2015-48
refinedweb
366
54.42
View Full Document This preview has intentionally blurred sections. View Full Document This preview has intentionally blurred sections. Unformatted text preview: Basic File I/O 1 Basic File I/O We discussed file output briefly before. Now we describe more C++ file input and output features that are very useful. To begin with, consider the following rectangle area program discussed before. The program sends some of its output to the computer screen and some to a text file. // An example using an output file // Compute the area of a rectangle with user inputs. #include <iostream> // this line for standard I/O #include <fstream> // this line for file I/O using namespace std; int main() { ofstream outfile ("myoutput.dat"); //this line for file output int length, width, area; cout << "Enter numbers for length and width:"; cin >> length >> width; area = length * width; outfile << "Area of a rectangle with "; outfile << "length = " << length << " and width = " << width; outfile << " is " << area; outfile.close(); // this line closes the file return 0; } // end of main If the user enters 12 and 6, the input and output appearing on the screen will be as follows. Output on the screen: Enter number for length and width: 12 6 ← Output sent to the file myoutput.dat : Area of a rectangle with length = 12 and width = 6 is 72 Basic File I/O 2 • The include file fstream allows us to use functions associated with output files. The name we chose for our output file, myoutput.dat , was totally arbitrary. • The program will create that file in the directory under which our program executes. You may check the contents of that file by printing it or by opening it up with your text editor. • The output to the text file does not accumulate over several runs. Each time we run a program that sends its output to the file myoutput.dat , the previous contents of the file are erased. • The output file we created using the name myoutput.dat” uses the current working directory as the location to create the file. If we wanted to create a file with the same name in a different location, we have to specify the exact pathname with the filename. For example, to create the file and open it on a different directory on drive c, we have to use the following statement. ofstream fileout("c:\\mydir\\mydocs\\myoutput.dat"); Basic File I/O 3 Just as writing output to a file, we can also read input from a file. To illustrate this, we rewrite our rectangle area program. The program uses two files, one for reading the input, and the second for writing the output. // An example using an input file and an output file // Compute the area of a rectangle with user inputs from file #include <iostream> // this line for standard I/O #include <fstream> // this line for file I/O using namespace std; int main() { ofstream outfile ("myoutput.dat"); // for output file ifstream infile ("myinput.dat"); // for input file if ( infile.fail() ) { cout << "Input file \"myinput.dat\" opening failed" << endl; return 1; } int length, width, area; infile >> length >> width; if (length > 0 && width > 0) { area = length * width; outfile << "Area of a rectangle with ";... View Full Document This note was uploaded on 04/07/2008 for the course CS 181 taught by Professor Satya during the Fall '08 term at Stevens. - Fall '08 - Satya Click to edit the document details
https://www.coursehero.com/file/87992/lect4-2/
CC-MAIN-2017-26
refinedweb
566
63.29
How to Elegant Coding in Python - 3245 How to Elegant Coding in Python .Typically, aesthetic programming was not a critical issue when we were studying in school. Individuals then follow their style when writing in Python. However, the work may be quite undesirable whenever we have to spend most of the time understanding one’s implicit code Typically, aesthetic programming was not a critical issue when we were studying in school. Individuals then follow their style when writing in Python. However, the work may be quite undesirable whenever we have to spend most of the time understanding one’s implicit code, which could also happen to others when reading our code. Therefore, let’s focus on the Zen of Python and some improvement tips to solve the problem. The Zen of Python? For those who haven’t seen it before, type and execute import this in your Python interpreter, and 19 guiding principles penned by Tim Peters will show this piece, I’m going to share my interpretation of these aphorisms and some useful Python tips I’ve learned. Photo by June Wong on Unsplash. Beautiful Is Better Than Ugly Python features simple syntax, code readability, and English-like commands that make coding a lot easier and more efficient than with other programming languages. For example, using or and vs. || &&to construct the same expressions in semantic perspective**:** # &&, || if a == 0 && b == 1 || c == Ture: # and, or if a == 0 and b == 1 or c == Ture: # These are the same logical expression in Python # The alternative operators can be used to construct the exact same expressions from a semantic perspective. alternative_operator.py Furthermore, the layout and composition of the code are crucial, and there are plenty of resources that exist covering this topic. Here is the most popular and my favorite one: After looking through PEP8, take a look at these articles showing some highlights and applications: Never mess up your code. Be elegant and make it beautiful. Explicit Is Better Than Implicit In Python, a good naming convention not only prevents you from getting bad grades in classes but also makes your code explicit. Fortunately, there are some guidelines you can find in PEP8, and I would like to highlight some points below. - In general, avoid using names that are 1. Too general, like my_list. 2. Too wordy, like list_of_machine_learning_data_set. 3. Too ambiguous, like “l”, “I”, “o”, “O.” - Package/Module names should be all lowercase. - One-word names are preferred. - When multiple words are needed, add underscores to separate them. - Class names should follow the UpperCaseCamelCase convention. - Variables\Methods\Functions should follow the lowercase convention (add underscores to separate words if needed). - Constant names must be fully capitalized (add underscores to separate words if needed). Everything has to be lucid and understandable. Simple Is Better Than Complex “Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” ― Steve Jobs A lot of times when dealing with iterators, we also need to keep a count of iterations. Python eases the task by providing a built-in function called enumerate(). Here is the immature way followed by the recommended one: words = ['Hannibal', 'Hanny', 'Steeve'] # Novice index = 0 for word in words: print(index, word) index += 1 # Pro for index, word in enumerate(words): print(index, word) Another example is using the built-in zip() function, which creates an iterator that would pair elements from two or more iterables. You can use it to solve common programming problems fast and efficiently, like creating dictionaries. subjects = ['math', 'chemistry', 'biology', 'pyhsics'] grades = ['100', '83', '90', '92'] grades_dict = dict(zip(subjects, grades)) print(grades_dict) zip_app.py The ability to simplify is to eliminate the unnecessary so that the necessary may speak. Complex Is Better Than Complicated The difference between complex and complicated is that complex is used to refer to the system level of components, while complicated refers to a high level of difficulty. Sometimes, although we try to keep tasks simple and stupid, the result could still be nasty. In this case, optimization in programming becomes necessary, and my favorite option of learning it is working on coding challenge websites. You can view others’ solutions and even be inspired by better algorithms. HackerRank provides a variety of levels fitting new programmers, which is outstanding for getting started. After that, try websites that are more professional, like: Flat Is Better Than Nested Nested modules are not common in Python—at least I haven’t seen anything like module.class.subclass.function before—and not easy to read. Though building a submodule in another submodule may reduce lines of code, we don’t want users to get troubled with unintuitive syntax. Sparse Is Better Than Dense Don’t stress the reader by sticking too much code in one line. A recommended maximum line length is 79 characters. The limitation of the editor window width works well when using code review tools. Download images agiler from Unsplash using Python. Readability Counts Code is read more often than it’s written. Think about indentation and how much easier it is to read code, and compare the codes below: money = 10000000 print("I earn", money, "dollars by writing on medium.") money = 10_000_000 print(f"I earn {money} dollars by writing on medium.") In this case, the codes share the same result, but the last one provides more readability by using underscore placeholders and f-string. After Python 3.6 was announced, f-string started to make formatting easier and it’s more powerful when dealing with longer sentences with more variables inside. A writer’s style should not place obstacles between his ideas and the minds of his readers. Special Cases Aren’t Special Enough to Break the Rules The consistency of supporting general cases is the key, so try to reorganize a cumbersome project into a simple form. For example, structure the code in classes or sort it into different files according to its functionality, even though Python doesn’t force you to do so. Since Python is a multi-paradigm programming language, a powerful approach to problem-solving is to create objects, which is known as object-oriented programming. Object-oriented programming is a programming paradigm that organizes program structure so that attributes and behaviors can be viewed as individual objects. The benefit of it is intuitive and easy to manipulate, and many tutorials have splendidly explained the concepts. This one is my favorite: Although Practicality Beats Purity This aphorism is contradictory to the last one and reminds us of the balance between them. Errors Should Never Pass Silently Passing errors would eventually leave implicit bugs that are even harder to figure out. Thanks to the robust error handling in Python, it is not difficult for programmers to use the tool compared with other languages. try: x = int(input("Please enter an Integer: ")) except ValueError: print("Oops! This is not an Integer.") except Exception as err: print(err) else: print('You did it! Great job!') finally: print('ヽ(✿゚▽゚)ノ') # 1.The code that potentially break down. # 2.Triggered if the value error occur. # 3.Handling error other than value error. # 4.Execute if no error triggered. # 5.Execute no matter the error triggered or not. exception_handling.py According to Python’s documentation: “Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it.” Especially for a big project, we don’t want our code crashing after a time-consuming computation. This is why exception management is charming. Unless Explicitly Silenced In some situations, small bugs would not bother you. Maybe you want to catch the specific error, though. To get more details on specific error messages, I recommend reading the official Built-in Exceptions document and finding out your target. In the Face of Ambiguity, Refuse the Temptation to Guess “What is important is to keep learning, to enjoy challenge, and to tolerate ambiguity. In the end there are no certain answers.” ― Matina Horner This quote is elegant and lyrical but not a good metaphor in programming. Ambiguity may refer to unclear syntax, complicated program structure, or mistakes that trigger an error message. For example, a simple mistake when using the numpy module for the first time: import numpy as np a = np.arange(5) print(a < 3) if a < 3: print('smaller than 3') ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() If you execute the code above, you’ll find an array of five bools in the output indicating the values under 3. Thus it’s impossible for the if statement to determine the state. The built-in .all() and .any() functions showed in the message are used for successive And/Or. import numpy as np a = np.array([True, True, True]) b = np.array([False, True, True]) c = np.array([False, False, False]) print(a.all()) print(a.any()) print(b.all()) print(b.any()) print(c.all()) print(c.any()) all_any_demo.py The output shows that .all() returns True only if all of the items are True, while .any() returns True if any of the items are True. There Should Be One — and Preferably Only One — Obvious Way to Do It Think of why Python is described as an easy-to-learn programming language. With marvelous built-in functions/libraries and high expansibility, Python encourages programmers to write gracefully. Though there are more solutions to provide flexibility, it could spend too much time digging into the same problem. Although That Way May Not Be Obvious at First Unless You’re Dutch The creator of Python, Guido van Rossum, is a Dutch programmer who makes this aphorism unarguable. You won’t claim that you know Python better than he does… at least I won’t. Photo courtesy of Guido van Rossum on GitHub. Now Is Better Than Never “You may delay, but time will not, and lost time is never found again.” — Benjamin Franklin For those who suffer from procrastination like me and are searching for change, check this out and cooperate with the panic monster. On the other hand, another side of the aphorism is to stop you from over-planning, which is no more productive than watching Netflix. The mutual attribute of procrastination and over-planning is that “Nothing is done.” Although Never Is Often Better Than Right Now “Now is better than never” doesn’t mean that planning is useless. Writing the ideas down and setting a goal to conquer is better than doing it at the very moment. For example, I usually spend an hour every Sunday to scratch out my weekly schedule and update my plan for tomorrow right before I go to bed to check out anything that has to be put off. If the Implementation Is Hard to Explain, It’s a Bad Idea Recall the idea of “Complex is better than complicated.” Usually, the complicated code means weak design—especially in high-level programming languages like Python. In some cases, however, the complexity of its domain knowledge could make implementation hard to explain, and how to optimize its lucidity matters. Here’s a guideline for structuring projects that leverages your achievement. If the Implementation Is Easy to Explain, It May Be a Good Idea It’s programming expertise to make the design (or even people’s lives) easier while the background knowledge may be profound, and I think this is the hardest part of programming. Take advantage of the simplicity and readability in Python to implement crazy ideas. Namespaces Are One Honking Great Idea — Let’s Do More of Those! Last but not least, a namespace is a set of symbols that are used to organize objects of various kinds so that these objects may be referred to by unique names. In Python, a namespace is a system composed of: - Built-in namespaces: Can be called without creating a self-defined function or importing modules such as the print()function. - Global namespaces: When a user creates a class or function, a global namespace gets created. - Local namespaces: The namespace inside local scopes. Diagram of namespace relations. The namespace system prevents Python from conflicting between module names. Conclusion Thanks for reading! I hope you enjoyed it.
https://geekwall.in/p/kOJ4I8HM/how-to-elegant-coding-in-python
CC-MAIN-2020-40
refinedweb
2,071
55.44
#include "petscmat.h" PetscErrorCode MatCreateIS(MPI_Comm comm,PetscInt bs,PetscInt m,PetscInt n,PetscInt M,PetscInt N,ISLocalToGlobalMapping rmap,ISLocalToGlobalMapping cmap,Mat *A) Notes: See MATIS for more details m and n are NOT related to the size of the map, they are the size of the part of the vector owned by that process. The sizes of rmap and cmap define the size of the local matrices. If either rmap or cmap are NULL, than the matrix is assumed to be square Level:advanced Location:src/mat/impls/is/matis.c Index of all Mat routines Table of Contents for all manual pages Index of all manual pages
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateIS.html
CC-MAIN-2016-26
refinedweb
109
56.59
Security is a general concern with web services because SOAP (request and response) messages are exchanged (between web service and client) in a plain text format. Though with WSE 2.0/3.0 and WCF, it is very much possible to encrypt the sensitive information in the message, it is a commonly accepted practice to use SSL (HTTPS) communication. This article discusses the problems that generally pop-up when a SSL enabled (self-signed/test certificate) service is consumed by a .NET application. To implement SSL on your web service, you need to get and install a certificate issued by a Certificate Authority (CA) on your web server (IIS). Mostly this certificate is used only in production environments. When it comes to development and test environments, a self-signed certificate (test certificate) is being used. You can generate a test certificate using MakeCert.exe tool (included in the .NET Framework SDK) or using (IIS) 6.0 Resource Kit Tools. When you try to access an SSL enabled web service from your C# code, you will get the following error.... "The underlying connection was closed: Could not establish trust relationship with remote server." This is true with a test (self-signed) certificate or a certificate issues by CA where the host name and the name on which the certificate was issued don't match - Perhaps you might be accessing it through an external IP address. How many times have you observed the following windows in your browsers when browsing an HTTPS web page or a web service? All the three browsers (Internet Explorer 8, Firefox 3.0.11 and Chome) are asking the user to choose between closing the window or adding an exception because they couldn't verify that this certificate is being issued from a valid CA. When you are accessing the web service through your C# code, you should do the same as what you have done in the browser - Trust the certificate!!. But there is no message window for you to accept it when you are accessing it programmatically. So you just need to simulate the message windows and ask it to trust the certificate. Here is code to simulate the message window. Add the following code just before invoking a web service method: ServicePointManager.ServerCertificateValidationCallback = delegate(Object obj, X509Certificate certificate, X509Chain chain, SslPolicyErrors errors) return (true); }; Sometimes even after implementing Solution #1, you might get the following error: Server was unable to process request. ---> Unable to generate a temporary class (result=1). error CS2001: Source file 'C:\WINDOWS\TEMP\zezde3bz.0.cs' could not be found error CS2008: No inputs specified Two different settings can cause this problem: Needless to say, the solution is straight forward: This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) ServicePointManager.ServerCertificateValidationCallback = delegate(Object obj, X509Certificate certificate, X509Chain chain, SslPolicyErrors errors) { return (true); }; General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/script/Articles/View.aspx?aid=38028
CC-MAIN-2015-48
refinedweb
512
53.41
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 md@Linux.IT (Marco d'Itri) writes: > On Dec 18, Roger Leigh <rleigh@whinlatter.ukfsn.org> wrote: > >> How strongly can I put this? /dev/shm is for *shared memory*, not for >> random junk. /dev/shm is for POSIX shared memory and semaphores > /dev/shm is a tmpfs which happens to be used by POSIX SHM. I have not > seen yet a good reason why it should not be used by other users too. There could be a naming conflict. The entire namespace is reserved for SHM, right? That means if someone does a "int shm = shm_open("/foobar", O_CREAY, 0600)" and some thoughtless prat already put something by than name there, they are completely stuffed. They should use a tmpfs mounted somewhere else. Here's a sample program to demonstrate: #include <sys/mman.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #include <stdio.h> #include <stdlib.h> #include <string.h> int main (void) { int fd = shm_open("/foobar", O_CREAT, 0600); if (fd < 0) { fprintf(stderr, "ERROR: %s\n", strerror(errno)); exit(1); } fprintf(stderr, "SUCCESS\n"); exit(0); } >> created with sem_open() and shm_open(). We don't want random breakage >> because people put files in there. /dev/shm is reserved. > Actually people have been putting files there for a while, even in > packages in a stable release. Can you point us to some examples of the > random breakage you suggest has happened? It hasn't happened, but POSIX shm will inevitably take time to gain users. That doesn't mean abusing it is a good idea in the meantime. >> Where was it ever written down that any package could use /dev/shm? >> They can't. > Oops. They already do. Correct, but does that make it OK? I find it disgustingly bad practice, and now we have /run, they can move to using that. /run is a good idea for this reason alone, because it will correct this abuse. I fail to see why anyone can consider abuse of an unrelated subsystem "because it's there" is good engineering practice. Any package abusing /dev/shm is deserving of an RC bug.Z2cVcFcaSW/uEgRApREAJ9tyP3j5T8nXJEUyq/2yidKjxIlfgCgkMAr iOv0KKusOB1opxHOmcGzRUQ= =8W5l -----END PGP SIGNATURE-----
https://lists.debian.org/debian-devel/2005/12/msg00791.html
CC-MAIN-2016-44
refinedweb
368
78.35
original source : compile "android.arch.persistence.room:runtime:1.0.0-alpha3"annotationProcessor "android.arch.persistence.room:compiler:1.0.0-alpha3"compile 'com.google.code.gson:gson:2.6.2' First 2 lines for including Room DB in our project and 3rd one is Gson which we will use to convert our object to String. Our modified User.java looks something like this. Even though you can’t see any getter or setter method here, you need to include them or make the fields public for Room to access it. User.java @Entitypublic class User { @PrimaryKey private int uId; private String uName; private ArrayList<String> uPets = new ArrayList<>(); public User() { } public User(int id, String name, List<String> pets){ this.uId = id; this.uName = name; this.uPets.addAll(pets); }; //getters setters removed for brevity} User DAO stays more or less the same. You can include methods to fetch particular user from user id or name, I will leave that as an exercise for you. UserDAO.java @Daopublic interface UserDAO { @Insert(onConflict = REPLACE) void insertUser(User user); @Query("SELECT * FROM User") List<User> getUsers(); //include any methods to fetch specific user // @Query("SELECT * FROM User")} And our UserDB will also stay the same except small difference. UserDB.java @Database (entities = {User.class},version = 1)@TypeConverters({Converters.class}) public abstract class UserDB extends RoomDatabase { public abstract UserDAO userDAO();} Noticed that @TypeConverters annotation we provided along with @Database annotation, this will be the class which will tell Room how to convert ArrayList object to one of SQLite data type. We will implement methods to convert ArrayList to String for storing it in DB and String back to ArrayList for getting back original User object. Type Converter specifies additional type converters that Room can use. The TypeConverter is added to the scope of the element so if you put it on a class / interface, all methods / fields in that class will be able to use the converters. Without further delay, we will see how our type converter looks in our case. UserDAO.java public class Converters { @TypeConverter public static ArrayList<String> fromString(String value) { Type listType = new TypeToken<ArrayList<String>>() {}.getType(); return new Gson().fromJson(value, listType); } @TypeConverter public static String fromArrayLisr(ArrayList<String> list) { Gson gson = new Gson(); String json = gson.toJson(list); return json; }} As you can see, we have 2 methods in TypeConverter. - public static String fromArrayLisr(ArrayList<String> list) : This method takes our arraylist object as parameter and returns string representation for it so that it can be stored in Room Database. Very simple and easy way to convert any object to string is converting it into its JSON equivalent. Just creating Gson object and calling toJson method with our object as parameter is enough. - public static ArrayList<String> fromString(String value) : While reading data back from Room Database, we get JSON form of our arraylist which we need to convert back. We will use Gson method fromJson by providing JSON string as parameter. But while converting back, we also need to provide the class of original object (in our case, arraylist), but providing arraylist is not enough here as Gson will not be able know what kind of list it has to form. That’s why we used Type to provide the type of list we want Gson to form for us from the JSON string. . . . . 참고사항) @Bindable 에 대하여
http://jacob-yo.net/tag/android/
CC-MAIN-2021-31
refinedweb
562
54.02
Hi all, Here's a patch for my 'direxec' feature for bash 2.05. This feature lets you "execute" a directory by defining a function called 'direxec' which takes the directory (and other args) as its arguments. When this function isn't defined, the behaviour is the same as before (ie. a "is a directory" or "command not found" error). When it is defined, typing something like $ /tmp foo bar is equivalent to $ direxec /tmp foo bar If there's a directory with the same name as some legitimate command (eg. 'ls'), then the command will be run rather than direxec - that is, direxec only runs when when an error would otherwise occur (ie. "is a directory" or "command not found"). I find it's very handy to have direxec defined as function direxec { cd "$1"; shift; ls "$@"; } This lets me navigate around the filesystem by just typing directories, rather than repeated cd and ls commands. For example, $ dl $ incoming $ gnu $ bash instead of $ cd dl $ ls $ cd incoming $ ls $ cd gnu $ ls $ cd bash $ ls It also means I can do $ /tmp -lat instead of $ cd /tmp $ ls -lat This is my first play with the bash source, so there could still be bugs. It turned out to be harder and more complicated than I originally thought, and changed structure several times. I think I have the forking and piping working (eg. "/tmp -lat | head"). Feedback is of course appreciated. Bye 4 now, Kev. --- execute_cmd.c.orig Fri Mar 23 02:17:23 2001 +++ execute_cmd.c Mon Oct 1 21:51:00 2001 @@ -145,7 +145,7 @@ static int execute_builtin_or_function (); static int builtin_status (); static void execute_subshell_builtin_or_function (); -static void execute_disk_command (); +static int execute_disk_command (); static int execute_connection (); static int execute_intern_function (); @@ -154,6 +154,10 @@ /* The line number that the currently executing function starts on. */ static int function_line_number; +/* The name of the function which, if set, will be run when a directory + is "executed". */ +static char direxec[] = "direxec"; + /* Set to 1 if fd 0 was the subject of redirection to a subshell. Global so that reader_loop can set it to zero before executing a command. */ int stdin_redir; @@ -2601,6 +2605,8 @@ last_shell_builtin = this_shell_builtin; this_shell_builtin = builtin; + runfunc: + if (builtin || func) { if (already_forked) @@ -2659,9 +2665,16 @@ if (command_line == 0) command_line = savestring (the_printed_command); - execute_disk_command (words, simple_command->redirects, command_line, - pipe_in, pipe_out, async, fds_to_close, - simple_command->flags); + if (!execute_disk_command (words, simple_command->redirects, + command_line, pipe_in, pipe_out, async, + fds_to_close, simple_command->flags)) + { + /* This means that we should run direxec, so prepend 'direxec' to + the words and set func, then jump back and run the function. */ + words = make_word_list (make_word (direxec), words); + func = find_function (direxec); + goto runfunc; + } return_result: bind_lastarg (lastarg); @@ -3103,7 +3116,7 @@ in the parent. This is probably why the Bourne style shells don't handle it, since that would require them to go through this gnarly hair, for no good reason. */ -static void +static int execute_disk_command (words, redirects, command_line, pipe_in, pipe_out, async, fds_to_close, cmdflags) WORD_LIST *words; @@ -3116,7 +3129,11 @@ char *pathname, *command, **args; int nofork; pid_t pid; + int maybedir; + struct stat finfo; + int result; + result = 1; nofork = (cmdflags & CMD_NO_FORK); /* Don't fork, just exec, if no pipes */ pathname = words->word->word; @@ -3132,6 +3149,24 @@ command = search_for_command (pathname); + /* If search_for_command() found a valid command, use that rather than + pathname. This means that when you type 'ls' where there's a directory + called 'ls', command will be '/bin/ls', which isn't a directory, and so + maybedir will be false, and we'll fork to run /bin/ls. If you check for + 'ls' (ie. pathname) being a directory rather than '/bin/ls' (ie. + command), then maybedir will be true and so we won't fork to run + /bin/ls, which is obviously bad news. */ + if (command) + { + maybedir = (find_function (direxec) && + (stat (command, &finfo) == 0) && (S_ISDIR (finfo.st_mode))); + } + else + { + maybedir = (find_function (direxec) && + (stat (pathname, &finfo) == 0) && (S_ISDIR (finfo.st_mode))); + } + if (command) { maybe_make_export_env (); @@ -3142,7 +3177,10 @@ of COMMAND, since we want the error messages to be redirected. */ /* If we can get away without forking and there are no pipes to deal with, don't bother to fork, just directly exec the command. */ - if (nofork && pipe_in == NO_PIPE && pipe_out == NO_PIPE) + /* If this command might be a directory (which requires running the + 'direxec' function in this process), then we purposely don't fork. + In this case, we avoid exit()ing after calling shell_execve(). */ + if ((nofork || maybedir) && pipe_in == NO_PIPE && pipe_out == NO_PIPE) pid = 0; else pid = make_child (savestring (command_line), async); @@ -3207,15 +3245,40 @@ if (command == 0) { - internal_error ("%s: command not found", pathname); - exit (EX_NOTFOUND); /* Posix.2 says the exit status is 127 */ + /* Having a null command might not be an error, since if maybedir + is true, then we should run direxec. */ + if (!maybedir) + { + internal_error ("%s: command not found", pathname); + exit(EX_NOTFOUND); /* Posix.2 says the exit status is 127 */ + } + } + else + { + /* Execve expects the command name to be in args[0]. So we + leave it there, in the same format that the user used to + type it in. */ + args = word_list_to_argv (words, 0, 0, (int *)NULL); + result = shell_execve (command, args, export_env); + } + + if (maybedir) + { + /* Didn't fork, so don't exit(). */ + /* If shell_execve failed and this might be a dir for execution, + then run the direxec function. This way typing 'ls' where + the subdir 'ls' exists will run the command 'ls' (ie. /bin/ls) + rather than running the direxec function. Typing 'ls/' runs + direxec though. */ + /* So we return 1 to indicate to execute_simple_command that it + should run direxec however it needs to. */ + result = 0; + } + else + { + /* Forked, so exit(). */ + exit (result); } - - /* Execve expects the command name to be in args[0]. So we - leave it there, in the same format that the user used to - type it in. */ - args = word_list_to_argv (words, 0, 0, (int *)NULL); - exit (shell_execve (command, args, export_env)); } else { @@ -3225,7 +3288,9 @@ unlink_fifo_list (); #endif FREE (command); + result = 1; } + return (result); } #if !defined (HAVE_HASH_BANG_EXEC) @@ -3401,7 +3466,14 @@ if (i != ENOEXEC) { if ((stat (command, &finfo) == 0) && (S_ISDIR (finfo.st_mode))) - internal_error ("%s: is a directory", command); + { + /* Suppress error message if we're going to run the direxec function + on returning to execute_disk_command() */ + if (!find_function (direxec)) + { + internal_error ("%s: is a directory", command); + } + } else { #if defined (HAVE_HASH_BANG_EXEC) -- .----------------------------------------------------------------------. | Kevin Pulo Quidquid latine dictum sit, altum viditur. | | address@hidden _ll l_ng__g_e_ _r_ hi__ly p__d_ct__le. | | God casts the die, not the dice. | `--------------- Linux: The choice of a GNU generation. ---------------'
https://lists.gnu.org/archive/html/bug-bash/2001-10/msg00000.html
CC-MAIN-2022-33
refinedweb
1,046
62.48
Content-type: text/html XmTextCut - A Text function that copies the primary selection to the clipboard and deletes the selected text #include <Xm/Text.h> Boolean XmTextCut (widget, time) Widget widget; Time time; XmTextCut copies the primary selected text to the clipboard and then deletes the primary selected text. This routine widget ID. Specifies the server time at which the selection value is to be modified. This should be the time of the event which triggered this request. One source of a valid time stamp is the function XtLastTimestampProcessed(). For a complete definition of Text and its associated resources, see XmText(3X). This function returns False if the primary selection is NULL, if the widget doesn't own the primary selection, or if the function is unable to gain ownership of the clipboard selection. Otherwise, it returns True. XmText(3X)
http://backdrift.org/man/tru64/man3/XmTextCut.3X.html
CC-MAIN-2016-44
refinedweb
140
56.45
1. Introduction What is the FIDO2 API? The. What you'll build... In this codelab, you are going to build an Android app with a simple re-authentication functionality using fingerprint sensor. "Re-authentication" is when a user signs in to an app, then re-authenticates when they switch back to your app, or when trying to access an important section of your app. The latter case is also referred to as "step-up authentication". What you'll learn... You will learn how to call the Android FIDO2 API and options you can provide in order to cater various occasions. You will also learn re-auth specific best practices. What you'll need... - Android device with a fingerprint sensor (even without a fingerprint sensor, screenlock can provide equivalent user verification functionality) - Android OS 7.0 or later with latest updates. Make sure to register a fingerprint (or screenlock). 2. Getting set up Clone the Repository $ git clone git@github.com:googlecodelabs/fido2-codelab.git What are we going to implement? - Let users register a "user verifying platform authenticator" (the Android phone with fingerprint sensor itself will act as one). - Let users re-authenticate themselves to the app using their fingerprint. You can preview what you are going to build from here. Start your codelab project The completed app sends requests to a server at. You may try web version of the same app there. You are going to work on your own version of the app. - Go to the edit page of the website at. - Find "Remix to Edit" button at the top right corner. By pressing the button, you can "fork" the code and continue with your own version along with a new project URL. - Copy the project name on top left (you may modify it as you want). - Paste it to the .envfile's HOSTNAMEsection in glitch. 3. Associate your app and a website with the Digital Asset Links To use FIDO2 API on an Android app, associate it with a website and share credentials between them. To do so, leverage the Digital Asset Links. You can declare associations by hosting a Digital Asset Links JSON file on your website, and adding a link to the Digital Asset Link file to your app's manifest. Host .well-known/assetlinks.json at your domain You can define an association between your app and the website by creating a JSON file and put it at .well-known/assetlinks.json. Luckily, we have a server code that displays assetlinks.json file automatically, just by adding following environment params to the .env file in glitch: ANDROID_PACKAGENAME: Package name of your app (com.example.android.fido2) ANDROID_SHA256HASH: SHA256 Hash of your signing certificate In order to get the SHA256 hash of your developer signing certificate, use the command below. The default password of the debug keystore is "android". $ keytool -exportcert -list -v -alias androiddebugkey -keystore ~/.android/debug.keystore By accessing https://<your-project-name>.glitch.me/.well-known/assetlinks.json , you should see a JSON string like this: [{ "relation": ["delegate_permission/common.handle_all_urls", "delegate_permission/common.get_login_creds"], "target": { "namespace": "web", "site": "https://<your-project-name>.glitch.me" } }, { "relation": ["delegate_permission/common.handle_all_urls", "delegate_permission/common.get_login_creds"], "target": { "namespace": "android_app", "package_name": "com.example.android.fido2", "sha256_cert_fingerprints": ["DE:AD:BE:EF:..."] } }] Open the project in Android Studio Click "Open an existing Android Studio project" on the welcome screen of Android Studio. Choose the "android" folder inside the repository check out. Associate the app with your remix Open gradle.properties file. At the bottom of the file, change the host URL to the Glitch remix you just created. // ... # The URL of the server host=https://<your-project-name>.glitch.me At this point, your Digital Asset Links configuration should be all set. 4. See how the app works now Let's start by checking out how the app works now. Make sure to select "app-start" in the run configuration combobox. Click "Run" (the green triangular next to the combobox) to launch the app on your connected Android device. When you launch the app you'll see the screen to type your username. This is UsernameFragment. For the purpose of demonstration, the app and the server accept any username. Just type something and press "Next". The next screen you see is AuthFragment. This is where the user can sign in with a password. We will later add a feature to sign in with FIDO2 here. Again, for the purpose of demonstration, the app and the server accept any password. Just type something and press "Sign In". This is the last screen of this app, HomeFragment. For now, you only see an empty list of credentials here. Pressing "Reauth" takes you back to AuthFragment. Pressing "Sign Out" takes you back to UsernameFragment. The floating action button with "+" sign doesn't do anything now, but it will initiate registration of a new credential once you have implemented the FIDO2 registration flow. Before starting to code, here's a useful technique. On Android Studio, press "TODO" at the bottom. It will show a list of all the TODOs in this codelab. We'll start with the first TODO in the next section. 5. Register a credential using a fingerprint In order to enable authentication using a fingerprint, you'll first need to register a credential generated by a user verifying platform authenticator - a device-embedded authenticator that verifies the user using biometrics, such as a fingerprint sensor. As we have seen in the previous section, the floating action button doesn't do anything now. Let's see how we can register a new credential. Call the server API: /auth/registerRequest Open AuthRepository.kt and find TODO(1). Here, registerRequest is the method that is called when the FAB is pressed. We'd like to make this method call the server API /auth/registerRequest. The API returns) } } Run the app, and you will be able to click on the FAB and register a new credential. 6. Authenticate the user with a fingerprint We now have a credential registered on the app and the server. We can now use it to let the user sign in. We are adding fingerprint sign-in feature to AuthFragment. When a user lands on it, it shows a fingerprint dialog. When the authentication succeeds, the user is redirected to HomeFragment. Call the server API: /auth/signinRequest Open AuthRepository.kt and find TODO(5). This AuthFragment is opened. Here, we want to request the server and see if we can let the user sign in with FIDO2. First, we have to retrieve PublicKeyCredentialRequestOptions from the server. Use api.signInRequest to call the server API.) } } Run the app and click on "Reauth" to open AuthFragment. You should now see a fingerprint dialog prompting you to sign in with your fingerprint. Congrats! You have now learned how to use FIDO2 API on Android for registration and sign-in. 7. Congratulations! You have successfully finished the codelab - Your first Android FIDO2 API. What you've learned - How to register a credential using a user verifying platform authenticator. - How to authenticate a user using a registered authenticator. - Available options for registering a new authenticator. - UX best practices for reauth using a biometric sensor. Next step - Learn how to build similar experience in a website. You can learn it by trying out the Your first WebAuthn codelab! Resources Special thanks to Yuriy Ackermann from FIDO Alliance for your help.
https://codelabs.developers.google.com/codelabs/fido2-for-android/index.html?index=..%2F..index
CC-MAIN-2022-27
refinedweb
1,234
59.7
This week I embarked on the experiment of seeing how much Bitcoin client you can build on OpenBSD. I might as well use this space to consolidate the mental notes on the process I used. Some particular details have likely slipped my mind already, so consider this an abridged guide. The version I ended up building with this process was 0.7.2, it builds fine with or without upnp support and with or without QR code support. Going earlier in version to 0.5.3 or the 0.6 series should be fine too, you just might have to make some different changes in the source and flags. Building any later version which uses leveldb for blockchain storage might not be possible on OpenBSD. The Bitcoin source tarball for later versions includes its own leveldb source and hammering that into a shape that will compile into something useful on OpenBSD is a challenge too far. Note that around the 0.8 release with the move to leveldb is when the effort on Bitcoin in the OpenBSD work in progress ports tree dropped precipitously. - Your first order of business is acquiring a source tarball from somewhere. Ask a friend. Download it from Github. However you acquire it is your business.1 - To minimize headaches I installed the version of BDB this version of Bitcoin-qt expects: pkg_add unzip curl -O unzip db-4.8.30.NC.zip cd db-4.8.30.NC/build_unix ../dist/configure --prefix=/usr/local --disable-replication --enable-cxx --enable-shared=yes make install Now that there's a base it's time to extract the tarball, fire up a text editor and start chopping at code. It's fire up the text editor and chop because reading is good, and understanding is good.2 - The first change is to wallet.cpp as described here. In OpenBSD world rand() is the shitty C standard compliant deterministic random and arc4random() is the good random. As the man page explains arc4random isn't based off of arc4 anymore so the mnemonic is now "A Replacement Call for Random" and the backend remains liable to change in the future. The need for this will likely be changing after OpenBSD 5.7, but until then this change is explicitly necessary. - For the second change we consult the Bitcoin Foundation's patchset for bitcoind 0.5.3.1 particularly its BDB database configuration patch. Simply read the changes and make the changes. This protects against the great blockchain fork of March 2013's dilemma. - For the third change we consult the foundation's Alert snipping patch. The patch cannot be used as is because the alert code's been move around since version 0.5.3 but it still informs of changes that can be made in 0.7.2 to remove the system. I futzed with the public key strings in Alert.cpp and deleted from main.cpp most of the alert invocation code. It is especially important though to remove the parts where your version would impose a denial of service penalty on nodes sending it bad alerts. Removing the Alert stuff is critical for making sure your node can function as is indefinitely into the future. - Update: Originally I neglected that in protocol.cpp you need to add #include <netinet/in.h> #include <sys/socket.h> - In the makefile change the invocation of -libboost-thread to -libboost-thread-mt and LIBS += -lrt when qmake gives you a makefile. Address compiler errors as they come up. If everything worked out well the result should be a functioning version of Bitcoin-qt which will take advantage of multiple cores on your machine and as far as I can tell work. For initial sync you will want to do it from the network in the wild to make sure it is actually a Bitcoin implementation. Built against LibreSSL 2.0 it should make it past the first wedge block 168001 fine. Mine is still in the syncing process. Expect it to die a few or a lot of times as it hits the 512 MB RAM limit during initial sync. You can avoid this by letting OpenBSD give it more memory, but the reason memory usuage is bloating so much are bastard fatherless blocks fed to you which fill your memory as you get fed more and more blocks you lack the precedent to verify. It is both kinder and faster to let malloc() kill the process when it hits them memory limit, and then just script restarting it. When in doubt about something read. OpenBSD has wonderful manual pages. The depth and quality of documentation they contain is beautiful. - There's multiple ways to do things. You are responsible for making your own decisions. I offer these notes without warranty and with the disclaimer that I am an amateur when it comes to building software assembled by other people. Building dependencies you choose from ports or doing pkg_add it's your call. If instead you just really want to gamble there are places for that. [↩] - Also I can't emphasize this point enough, but I am an amateur to the point I haven't had occasion yet to learn the unix patch utility. Maybe if I had I'd submit this to the ports tree, but my actual self is the one putting this information together so... [↩]
http://bingology.net/2015/02/14/notes-on-building-bitcoin-qt-on-openbsd/
CC-MAIN-2019-04
refinedweb
897
74.69
Ah, URLs. The Unified Resource Locator (URL) is ubiquitous in enterprise software. It doesn’t matter whether it’s a desktop, a web application, or a backend service, URLs have the unique ability to catch you off guard when you least expect it. One can lean on the ASP.NET framework for URL routing which provides its own way of matching URLs to action methods. But alas, as is often the case, a full feature framework might not get you where you need to be. URL routing has a powerful way to invoke action methods inside MVC controllers but doesn’t help with URL matching. If you’ve worked with URLs before and found it hard, then you’re doing it right. If it was easy, then this write up is for you. There are many traps hidden inside these URLs. A URL appears harmless on the surface but, when you look closer, it can be perilous. In this take, I’d like to give you a deep dive into working with URLs in plain C#. In IT, there may come a time when you have this URL from a config and must match it with another one. The URL can come from the web request that you need to intercept through middleware with a match. I’ll stick to real examples I’ve come across in my programming adventures in the enterprise. To start, let’s define what the internals of a URL looks like: data-src="" data-lazy-load> For our purposes, we care about the scheme, authority, path, query, and fragment. You can think of the scheme as the protocol, i.e., HTTP or HTTPS. The authority is the root or domain, for example, mycompany.com. The path, query, and fragment make up the rest of the URL. The URL spec defines each segment in this specific order. For example, the scheme always comes before the authority. The path comes after the scheme and authority. The query and fragment come after the path if there is one in the URL. With the textbook definition in place, it’s time for string matching URLs. I’ll stick to the terminology from the figure so it is crystal clear for you. String URL Match Given a URL, it is somewhat reasonable to do a string comparison: All code samples use xUnit assertions to prove out matching concepts. Note the String.Equals comparison to get a string match with a URL. One thing to look out for is that URLs are case-insensitive in the spec. This means matches. A naïve string comparison with an equals method does not account for this. To make this more robust, add case-insensitivity to the comparison: The StringComparison.OrdinalIgnoreCase enumerator will do a byte match for each character while ignoring casing. This works well for URLs which are made up of ASCII characters. Note that there is an extra parameter to the overloaded the String.Equals method. With this much effort necessary to add robustness, it should appear often in your code. Another interesting aspect is that the path of the URL can end with a forward slash. For example, /abc/ also matches /abc without a trailing forward slash. The config or app providing the URL can go either way and you must account for this. Using string manipulation, we can trim the ends then do a match: This accounts for many mishaps with string comparisons. You will start to notice you need a good amount of trimming around URLs. When engaging in this line of work, it is best to stay alert and practice defensive coding. It’s difficult to imagine the many radical new ways folks can type in a simple URL. Human beings are not like computers and may find innovative ways to muck up URLs. C# uses the .NET framework behind the scenes and provides a list of methods that can aid with URL matching. The System.String type, for example, has many extension methods available. It’s like having a full array of tools at your fingertips, time to examine which methods are most useful. Let’s say we want to match the scheme to make sure it’s HTTPS: Note that it is safe to assume the scheme comes first according to the spec. The String.StartsWith method has a sibling method that can match the end of the string. This is useful for doing a match on the path of the URL. This is assuming your URLs always end with the path only. So, for example: One can be clever with string matching in C#. Your string comparisons have an arsenal of methods at your disposal, so you can be as effective as possible. Let’s say, for example, I want to know if a given URL even has a query. The spec defines this as ?key=value. Note that the question mark is a unique character. This question mark character is in the URL spec and does not belong elsewhere. So, for example: If you can make safe assumptions about your URLs, like in the example above. Feel free to exploit these assumptions to your advantage with string comparison methods. All you need is to know is which method to use and a little imagination. LINQ URL Match With URLs coming from a config or any data source, what you might get back is a list. With the .NET framework, you can use LINQ to iterate through URL lists and then do a match. Imagine there is a list of URLs that must match a target URL. All I want to know is whether the URL exists within the list. Say, for example: The IEnumerable.Any method allows you to match a list with a URL. Note the use of a lambda expressions to further refine the match. This becomes quintessential when you need to trim and ignore case sensitivity. At the end, this lambda expression expects a true or false which comes from the equal string comparison. If any items on the list return true then the entire method returns true. For example, let’s say you have a list of paths that belong to the URL that needs a match. What you need is to combine the paths to the whole URL, then do a match. The string type has a String.Join method you can use to do the job. This join method takes in a list you can further refine using LINQ. So, for example: I am purposely being naughty with the list of paths. One path has a trailing slash while the other does not. The goal here is to illustrate what kind of assumptions you can and cannot make with URLs. The way you write URL matching can have a life of its own depending on the assumptions. Note the IEnumerable.Where method to filter out empty paths. LINQ has many more methods available you can use for URL matching. What I find is that I tend to use both IEnumerable.Any and IEnumerable.Select() often. These extension methods are part of the IEnumerable interface in C#. This means it can support a wide array of list types including an array of integers. LINQ gets enabled on a list when you add System.Linq to the using statements. Inside Visual Studio, these extension methods don’t show up in IntelliSense until you do so. Feel free to explore this namespace if you need more ideas when working with URLs. What you will find in .NET is that each type may have methods that come with it. The string type, for example, has a list of methods through the System namespace. So far, you can see how these methods are useful to you. It is like having a toolbelt with a whole array of functionality available. URI Match The .NET framework has a type to encapsulate URLs if necessary. There is a System.Uri type that can parse any valid URL. The string and LINQ methods I have explained so far do not parse but only provide URL matching. The Uri type has a list of methods and properties you can use to break a URL apart for further analysis. Let’s say you have a URL with a scheme, authority, path, query, and fragment. Attempting to match against each piece requires good Regex skills. The good news is that a Uri type can do matches in an object-oriented fashion. This OOP (object oriented programming) approach can help keep the code nice and tidy. One gotcha is that the Query property returns a string type, not a dictionary object. This will require that you parse out the string into a key-value pair. When you are working with the query inside a URL, you often need it as a dictionary to do lookups. So, for example: You can get the schema and path through the Uri.GetLeftPart method. Note the use of the System.UriPartial enumerable to get each segment of the URL. The Fragment property has the fragment of the URL. For the Query, note that ?key1=value&key2 is a valid query string because the spec is lenient. The String.Split method gives me back an array I can turn into a dictionary object. For duplicate keys, I use a Dictionary.ContainsKey first then a Dictionary.Add if it’s not in the dictionary. This is a defensive way of dealing with potential typos from a bad config, for example. For those in .NET Core 2.0+, there is a shiny new Dictionary.TryAdd that has this same logic as part of the method. Each itemValue can come from the Query or get a default value of string.Empty. Empty keys in the Query are still plausible. The asserts prove out that code above works as expected. One gotcha comes from the scheme segment. Note that it returns the colon and backslashes as part of the scheme itself. If the goal is to match it against HTTPS, for example, it might be wise to match it with a String.StartsWith and ignore casing. This covers just about everything you will encounter when matching URLs with a Uri type. I hope you can see it is far from trivial. One nice advantage is you get the Uri type through the System namespace. This namespace often appears inside many using statements in C#. Conclusion The .NET framework comes with a set of namespaces useful for working with URLs. So far, you have seen the System and System.Linq namespaces at work. In C#, there are two types of primary concerns which are System.String and System.Uri. These two types have many methods which are useful to you. For the System.String type, keep an eye on String.StartsWith and String.Equal with case insensitivity. For working with a list of URLs, use any list type that implements the IEnumerable interface. The System.Linq namespace will enable a set of extensions methods for your favorite type. To parse the Query into a dictionary type use the System.Collections.Generic namespace. All these namespaces have been available in .NET since 3.5 and are part of the .NET Standard library. This means this code is guaranteed to work with many implementations of the .NET framework which include .NET Core. Microsoft is pushing for a standards-based approach and these namespaces are part of it. It is nice to have working code that has a commitment and supports a standard. Because we are talking about the .NET framework and not only niche features in C#, these same namespaces and object-oriented types are available in PowerShell if you have a language version that supports .NET version 3.5 at a minimum. This means you can go all the way back to PowerShell 3.0. Load comments
https://www.red-gate.com/simple-talk/dotnet/c-programming/url-matching-c/?utm_source=DBW&utm_medium=pubemail
CC-MAIN-2020-24
refinedweb
1,975
76.42
Still. Can you make a small, simple, single program file that shows the problem you are working on? Or merge the six files into one file and post that. The two important classes; iirc the error is about IllegalMonitorException; the Producers pop onto the queue , the Consumers don't pull anything. public class Producer implements Runnable { protected final static List<Message> sharedQueue = new ArrayList<>(); protected final int MAX_SIZE = 10; @Override public void run() { while (true) { synchronized (sharedQueue) { while (sharedQueue.size() == MAX_SIZE) { try { sharedQueue.wait(); } catch (InterruptedException e) { } } sharedQueue.add(new Message(Utility.getRandomProduct(), new Date(), Utility.regionLookup(Utility.getState()))); System.out.println(Thread.currentThread().getName() + " adding. Queue size: " + sharedQueue.size()); } } } public static Message messageConsume(String region) { synchronized(sharedQueue) { while (sharedQueue.isEmpty()) { try { sharedQueue.wait(); } catch (InterruptedException e) { } } if (sharedQueue.get(0).getRegion().equalsIgnoreCase(region)) { Message tempMessage = sharedQueue.get(0); sharedQueue.remove(0); return tempMessage; } else { return null; } } } } public class Consumer implements Runnable { private List<Message> consumerList = new ArrayList<>(); @Override public void run() { while (true) { try { Message tempMessage = Producer.messageConsume(Thread.currentThread().getName()); if (tempMessage.getRegion().equalsIgnoreCase(Thread.currentThread().getName())) { consumerList.add(tempMessage); } Thread.sleep(500); } catch (InterruptedException ex) { Logger.getLogger(Consumer.class.getName()).log(Level.SEVERE, null, ex); } } } } How do you test the posted code? I don't see a main() method? Can you post the full text of the error message? I can not find: IllegalMonitorException In Producer class inside messageConsume() method, did a call from consumer go through the loop down to the checking for region? If so, did it actually get through inside the checking region if-condition? Try to print out each line of your process to pin point where exactly is wrong. Anyway, I see you use wait() method, but who is going to wake the thread up after the wait()? Where are you putting your notify()? A thread can't simply wake up after being wait(). The same goes to pushing message onto the queue. Once the queue is full, who is going to wake the thread up to keep pushing again? public static Message messageConsume(String region) { synchronized(sharedQueue) { while (sharedQueue.isEmpty()) { try { sharedQueue.wait(); // Who is going to notify this thread??? } catch (InterruptedException e) { } } System.out.println("Queue size: "+sharedQueue.size()); if (sharedQueue.get(0).getRegion().equalsIgnoreCase(region)) { Message tempMessage = sharedQueue.remove(0); System.out.println("Removed: "+tmpMessage); //sharedQueue.remove(0); // use remove() right away, don't need this System.out.println("Queue size after removed: "+sharedQueue.size()); return tempMessage; } else { return null; } } } Personally, I would keep the synchronized block as small as I could. Also, I would keep an infinite loop out of the block (the current while loop). I would instead check if an item is returned after each loop; otherwise, I will wait. Main consists of just instantiating and starting the threads nothing else atm. Taywin, thanks I made a few changes and I'm no longer getting the IllegalMonitor Exception for the time being; instead a NUllPointer which I believe happens in the messageConsume; I'll check it out and post back. Thanks, Smooth as butter, thanks!
http://www.daniweb.com/software-development/java/threads/440339/multithreading-completely-lost
CC-MAIN-2014-10
refinedweb
509
52.66
William Ray Noble22,189 Points Flask 5 * 5 question So I'm not sure what I'm doing wrong. Can you not use default parameters in Flask? I tried putting the numbers directly into the route path but that didn't work either. TIA from flask import Flask app = Flask(__name__) @app.route('/multiply/<int:num1>/<int:num2>') def multiply(num1=5, num2=5): return str(num1 * num2) #return '{} * {} = {}'.format(num1, num2, num1 * num2) 1 Answer Josh Keenan19,401 Points You need to retain the route for multiply as well as with input. @app.route('/multiply') @app.route('/multiply/<int:num1>/<int:num2>')
https://teamtreehouse.com/community/flask-5-5-question
CC-MAIN-2021-25
refinedweb
103
58.28
#include <compound.h> The space_automap_row_compound class is used to represent a SunOS or Solaris automap row that has sub mount points. Definition at line 28 of file compound.h. Definition at line 39 of file compound.h. Reimplemented from space_automap_row. Definition at line 32 of file compound.h. The destructor. Definition at line 25 of file compound.cc. The default constructor. It is private on purpose, use the create class method instead. Definition at line 30 of file compound.cc. The default constructor. Do not use. The copy constructor. Do not use. The create class method is used to create new dynamically allocated instances of this class. Definition at line 42 of file compound.cc. The assignment operator. Do not use. The print method may be used to print a representation of this object instance. Implements space_automap_row. Definition at line 50 of file compound.cc. The repath method is used to add a directory prefix to the mount point. Implements space_automap_row. Definition at line 97 of file compound.cc. The space_automap_insert_sub is used to call amp->insert_sub for each row and subrow. Reimplemented from space_automap_row. Definition at line 83 of file compound.cc. The paths instance variable is used to remember all of the internal sub-mount-points of this row. Definition at line 70 of file compound.h.
http://nis-util.sourceforge.net/doxdoc/classspace__automap__row__compound.html
CC-MAIN-2018-05
refinedweb
219
63.46
Exception::Simple - simple exception class use Exception::Simple; use Try::Tiny; #or just use eval {}, it's all good ### throw ### try{ Exception::Simple->throw( 'oh noes!' ); } catch { warn $_; #"oh noes!" warn $_->error; #"oh noes!" }; my $data = { 'foo' => 'bar', 'fibble' => [qw/wibble bibble/], }; try{ Exception::Simple->throw( 'error' => 'oh noes!', 'data' => $data, ); } catch { warn $_; #"oh noes!" warn $_->error; #"oh noes!" warn $_->data->{'foo'}; #"bar" }; pretty simple exception class. auto creates argument accessors. simple, lightweight and extensible are this modules goals. When using this module, you can specify a shortcut method, so you don't have to type the full module name each time. This works by importing a sub with the name specified into the current namespace, that returns the package name so you need to make sure this sub does not already exist, or you'll get an error e.g. use Exception::Simple qw/E/; use Try::Tiny; #or just use eval {}, it's all good ### throw ### try{ E->throw( 'oh noes!' ); } catch { warn ref $_; # Exception::Simple warn $_; #"oh noes!" warn $_->error; #"oh noes!" }; with just one argument $@->error is set Exception::Simple->throw( 'error message' ); # $@ stringifies to $@->error or set multiple arguments (creates accessors) Exception::Simple->throw( error => 'error message', data => 'custom attribute', ); # warn $@->data or something say you catch an error, but then you want to uncatch it use Try::Tiny; try{ Exception:Simple->throw( 'foobar' ); } catch { if ( $_ eq 'foobar' ){ #not our error, rethrow $_->rethrow; } }; accessor for error message (set if only 1 arg is passed to throw) package that threw the exception filename of the code that threw the exception line number that threw the exception If you pass in package, filename or line, they will be overwritten with the caller information If you don't pass in error, then you'll get an undef warning on stringify Please submit bugs through For other issues, contact the maintainer Mark Ellis <markellis@cpan.org> Stephen Thirlwall This library is free software, you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Exception-Simple/lib/Exception/Simple.pm
CC-MAIN-2018-26
refinedweb
349
60.04
ARIA now longer allows namespaced properties. ARIA looks like this now: <div role="myrole" aria-[propertyname]="blah"> Just remove any namespaces from the attribute names and values. Created attachment 292130 [details] [diff] [review] WIP -- a swipe at it Someone just needs to update Aaron's patch against trunk and test it. Created attachment 603639 [details] [diff] [review] Re-worked patch (v1) I went ahead and took this ... I did a plug-n-play patch from the previous attachment ... it built clean locally, and mochitest-plain and mochitest-a11y both ran as normal ... what other testing can I do for you? I can't push to TRY yet, so maybe that's next ... Comment on attachment 603639 [details] [diff] [review] Re-worked patch (v1) Asking review from a toolkit peer ... Checking status ... Marco asked me to get a toolkit peer review+ to move this along ... I picked robert at random ... let me know if there's a better way? Comment on attachment 603639 [details] [diff] [review] Re-worked patch (v1) Let's try Neil first. Comment on attachment 603639 [details] [diff] [review] Re-worked patch (v1) Review of attachment 603639 [details] [diff] [review]: ----------------------------------------------------------------- do we have any kind of test on this that may need updating? ::: toolkit/components/feeds/FeedProcessor.js @@ -983,1 @@ > "='" + attributeValue + "'"); this doesn't apply just to the aria case, a different prefix may still be added (in future), why are we removing it? @@ -985,5 @@ > - // write an xmlns declaration if necessary > - if (prefix != "xml" && !this._isInScope(uri)) { > - this._inScopeNS[this._inScopeNS.length - 1].push(uri); > - this._buf += " xmlns:" + prefix + "='" + uri + "'"; > - } also this check seems to be more generic than just the rolePrefix stuff you removed above, prefix is assigned by gAllowedXHTMLNamespaces, doesn't look like it should be removed (even if likely it won't be hit atm). For example, see^[^\0]*%24&hitlimit=&tree=mozilla-central Created attachment 606892 [details] [diff] [review] Re-worked patch (v2) Ok, I restored the two code portions removed in Aarons original patch ... mochitests all complete locally ... Mochitests aren't the bulk of feed parser tests - the stuff in is run in make check. Ahhh.. ok, new to this area ... how do I perform the tests ... locally? with a TRY push? Ok, got help from IRQ ... testing shows two tests failing ... off exploring in that direction ... FAIL | xml/rfc4287/feed_accessible.xml | Test was: "atom entry with many funky namespaces" | var content = feed.items.queryElementAt(0, Components.interfaces.nsIFeedEntry).content;((content.text.indexOf("h2 aaa:checked") > -1) && (content.text.indexOf("h4 aaa:checked") > -1) && (content.text.indexOf("h6 xml:base") > -1)); | FAIL | xml/rfc4287/feed_roleatt.xml | Test was: "atom entry with many funky namespaces" | var content = feed.items.queryElementAt(0, Components.interfaces.nsIFeedEntry).content; ((content.text.indexOf("xhtml2:role='wwwwwww") > -1) && (content.text.indexOf("xmlns:wwwwwww=''") > -1) && (content.text.indexOf("xmlns:xhtml20=''") > -1)); | Foul Bachelor Frog will tell you exactly what's actually the correct way to deal with those failures, in - they are testing the thing you're removing, so just remove them too. Victory! Created attachment 606919 [details] [diff] [review] Re-worked patch (v3) Lord I hope you weren't pulling my leg :) Anyhow, removing the tests causes ... all to pass !!! Exactly right, and figuring out what parts of what I say I actually mean is one of those difficult parts of getting started with Mozilla, like figuring out what failures on tbpl you can and can't ignore. At least twice, I've stared long and hard at those two tests trying to figure out if there was anything we wanted to salvage from them in a post-ARIA-namespace world, and I've never gotten any closer to figuring out what it would be if there is. Sometimes, if you test multiple things while saying you are only testing one, and copy around summaries without changing them to say what they really are, and copy around tests that say that they are about one thing when they are something else, you wind up having your tests deleted. thanks for not mocking me mercilessly :P
https://bugzilla.mozilla.org/show_bug.cgi?id=407401
CC-MAIN-2017-26
refinedweb
676
65.22
Jython, originally known as JPython, is an all-Java application that allows developers to use the syntax and most of the features of the Python programming language. Jython is interesting to Java programmers for several reasons: - Jython's version of the Python interpreter shell allows convenient experimentation and exploration of ideas and APIs without having to go through the usual Java compile/run cycle. - Python is dynamic and generic by design, so you don't have to add these features by using complex libraries (such as those for Java reflection and introspection). This makes some sorts of development easier and is especially useful in automated testing frameworks. - Many developers like Python's syntax and the feel of the language; they find it a much more productive way to develop and maintain Java applications. In this article I will introduce Jython 2.1, the most recent release, by offering some examples of accessing Java libraries, using the Jython interpreter shell, and showcasing Jython code files. You don't need to know Python to follow along, although you will have to learn the language if you plan to go much further with Jython than the basic examples in this article. The environment I use is Red Hat 8.0 (2.4.18 kernel) and J2SE 1.4.0. On the Jython site (see Resources), take a look at the Jython platform-specific notes for more information on platform and Java environment choices for Jython users. Note: Both the Jython and Java languages operate on the Java runtime. Getting started with Jython Jython is distributed as a single Java class file containing the installer. Just download jython-21.class and place the file somewhere in the CLASSPATH, then run java jython-21. Select the components you would like to install (in the examples, I chose all the defaults), accept the license (which is the open source BeOpen/CNRI license), and specify the installation directory -- the installer will take care of the rest. If you run into problems with the installation, see the installation information page on the Jython Web site. For UNIX platforms, you may want to add the chosen Jython installation path to your PATH environment variable. You can now just type "jython" to run the interactive shell: Listing 1. Running the Jython shell The >>> prompt allows you to enter commands and get immediate results. In Java programming, every program must define at least one class. Listing 2 illustrates a complete Java program for writing a message to the screen: Listing 2. A complete Java program JPython reduces these lines to: Listing 3. Jython reducing Java code overhead The Listing 4. Print is a key Jython tool Jython expressions are similar to Java expressions. The result of 1+1 is an integer that is coerced to a string by You don't even need much apparatus to access standard Java libraries using Jython. The following example accesses java.util.Random: Listing 5. Accessing standard Java libraries via Jython Jython's import keyword is similar to the Java language version in that it makes the contents of one module available in another, but there are some syntax and behavior differences. The example in Listing 5 above uses the related from keyword to narrow down which symbols are imported from java.util. The next line shows the creation of an instance of the Random class. No new keyword is needed, as you can see. There is also no type declaration needed for the variable that holds the new instance. This highlights an important simplification in Jython and one benefit of its dynamic nature -- it reduces much of the need to worry about data typing. The next line in Listing 5 demonstrates method invocation, which is just the same as in the Java language except for the lack of type declaration for the result. nextBoolean() in Java code is a boolean. Jython 2.1 does not have a boolean type (although this may soon change; Python 2.3 adds a boolean type), so it substitutes an integer of value 0 or 1. Similarly, to invoke a Java method that expects a boolean value, you pass in an integer value meeting these constraints. You can also use the import keyword in such a way that you fully qualify the names of all symbols imported, as shown in Listing 6: Listing 6. Import fully qualifies all imported symbol names Jython's floating-point values are just the same as in the Java language. Writing code directly in the source The interpreter is handy for quick checking and prodding, but you don't have to do all your work there -- Jython also allows you to write code in source files and then run the code (though with Jython, the compilation step is optional). As an example, the following listing is a stand-alone Jython program: Listing 7. Sample Jython program that simulates a coin toss (save in file named listing7.py) Let's explain the code before we explain how to run it. This example introduces if statements in Jython, one of the first things some people remark about in Jython (as well as its antecedent Python). There is no character delimiter to mark the block executed when the condition in the if statement is true (conditions in Jython do not require enclosing parentheses, as they do in Java programming). The code is merely indented to a greater degree than the surrounding code. Blocks of code in Jython are always marked with indentation rather than, say, curly braces. Statements that introduce code bocks, such as if, end in colons. This feature of Jython means that you have to be careful when you write code because the way you indent the code can actually change the meaning. For example, Listing 8a results in a print out of only the number 3 because the two statements above it are part of an if block whose condition is never true: Listing 8a. Indentation: Prints only "3" If I merely change the indentation of one of the lines, then the numbers 2 and 3 are printed: Listing 8b. Indentation: Prints "2" and "3" The indentation also has to be consistent, it has to be associated with statements that organize code into blocks, and usually it also has to control the flow of code. For example: Listing 8c. Indentation: A syntax error This would simply result in a syntax error because there is no controlling statement that requires a block to be separated from the rest of the code. The use of indentation to mark code blocks is one of the more controversial features of Python and Jython, but I believe it is often an exaggerated issue. After all, it shouldn't matter if you follow good coding standards in indentation. If good coding indentation is followed, the fact that a machine enforces this rather than a peer reviewer shouldn't matter. Furthermore, I know of no developer who notices this restriction after a few hours of using the language. It becomes second nature to indent properly. Certainly this link between indentation and syntax can cause errors that you may have not encountered before, but the lack of explicit delimiters also eliminates some errors that are common in languages that use them. You can run the file in Listing 7 (listing7.py) without having to compile it just by invoking the filename as an argument to the jython command, as shown below: Listing 9. Running "coin toss" without compilation In the previous example, $ is just the UNIX shell prompt, much like the C:\> on a Windows system. You can also compile modules into Java bytecode ( .class) files using the jpythonc command, which allows you to use the java or jre command to run it directly. There are some restrictions on the Jython modules compiled in this way, but that issue is outside the scope of this article. Building global functions You can create global functions with ease in Jython, even though the Java language does not support global functions. You can also define global variables (usually to set up constants without having to make a class wrapper for them). For example, look at the following listing: Listing 10. Global function returns series of numbers in string form (save in file named listing10.py) First we define two global variables used as constants in this program -- START and SPACER -- one an integer, and one a string. Next we define a function, CounterString, using the def keyword. The function takes a single argument, an integer called length. The fact that Jython does not explicitly check that the argument is an integer is an advantage of Jython's dynamic feature; but it can also be a disadvantage because some sorts of type errors won't be caught until later than they would be in Java programming. Notice that the function signature line ends with a colon and thus introduces a new block, marked by the indentation in the subsequent lines. The first line in this new block initializes a string buffer as an empty string. This buffer will be manipulated to yield the expected function results. The next line creates a loop. Jython's for statement is fundamentally different from the Java language statement. In Java programming, you set up initial and termination conditions, as well as each loop step. Jython's loops always walk over a particular sequence from start to end. The sequence is usually a list, a very important data type in Jython. A list of three strings looks like this: If you want a loop over numbers from 1 to N (as we do here), you can use the function range(), which returns a list of numbers in a given range. Some experimentation at the interactive Jython prompt should help you familiarize yourself with this tool: Listing 11. range() function examples Looking back at Listing 10, each iteration of the for loop runs as a block of code that is indented an additional step from the rest of the function body. The block is a single line in which the current buffer is concatenated to the new number, which is first coerced to a string using the str() function (rather than a cast as in Java programming), then a spacer is appended. After this loop terminates, the resulting buffer is returned. Right after the function body is a line of code to test it. Again Jython allows you to do this without any special rigging, such as a main method on an application class. The output from Listing 10 is shown here: Listing 12. Output from Listing 10 Building classes as easily as functions You can create a class in Jython with the same ease as creating global functions. Listing 13 offers an example: Listing 13. A simple example of a user-defined class (save in file named listing13.py) In the code above, the first line names the class, the definition of which is all one big code block. The first method defined is a special one, the initializer (similar to a Java constructor). It is always named __init__ and is invoked whenever a new instance of the class is created. In Jython you explicitly declare the current instance being invoked (or in the case of the initializer, created) as an argument. Traditionally this argument is called self. In the Dog initializer, the bark_text argument, a string, is stored as an instance variable using self. The method bark() does not take any explicit parameters when invoked, but you must still specify self. The method annoy_neighbors does take a single explicit argument which is specified in addition to self and is the number of times the dog is to bark in order to annoy the neighbors. Notice how easily the code runs to deep nesting, and thus, to indentation. annoy_neighbors has a loop block within the method definition within the class definition. The code starting with print "Fido is born" again demonstrates the class. The output of Listing 13 looks like this: Listing 14. The output from Listing 13 Bridging the programming languages In this article, we've just scratched the surface of the benefits of adding Jython to your Java programming arsenal: - The Jython language reduces the amount of code required to perform tasks. - The Jython interpreter helps with rapid code development by allowing you to run code without compilation. - It allows you to establish global functions and variables which the Java language doesn't support. - It introduces dynamic typing, while using inference and casts to operate properly in the statically-typed virtual machine. - It introduces the use of generic datatypes (although upcoming Java versions such as Tiger have also introduced generic types). - It lets developers easily develop automated testing frameworks. Through a series of examples, we've also covered some differences in syntax and typing that a developer should be aware of, including the syntactical meaning of indentation in Jython and the introduction of an integer in place of a currently non-supported boolean type. Using Jython by no means requires that you ditch the Java language. Jython can be a very handy supplement, useful for quick inspection and prototyping, testing, and for handling a selection of coding tasks for which its approach is better suited. - Visit the Jython home page to download the implementation and learn more about how to use it; if you've already acquired Jython, try these resources for help with installation and platform-specific issues. - Jython is an implementation of the Python language; if you plan to use Jython, you will want to be familiar with the documentation and other resources hosted at Python.org. - If you're a Java beginner, the "Introduction to Java programming" tutorial (developerWorks, November 2004) introduces the Java programming language through examples that demonstrate the syntax of the language in an object-oriented framework and standard programming practices. (Some of the examples in this article are based on examples in that tutorial.) - In "Diagnosing Java code: Repls provide interactive evaluation" (developerWorks, March 2002), Eric Allen delivers an example of using Jython to build an elegant repl, a "read-eval-print-loop." - Join the Jython-users mailing list, a great place for online help, for interactive discussions of Jython with fellow developers. - O'Reilly and Noel Rappin offer "Tips for Scripting Java with Jython, Part 1," which covers 11 specific features of Jython that can be particularly time saving or exciting for Java programmers. - Jython Essentials (Samuele Pedroni and Noel Rappin, O'Reilly, March 2002) provides a solid introduction to the Jython, numerous examples of Jython/Java interaction, and reference material on modules and libraries of use to Jython programmers. (Chapter 1 is available online.) - Learn to build Web and enterprise applications with Jython in Python Programming with the Java Class Libraries (Richard Hightower, Addison-Wesley/Pearson, June 2002). - The ActiveState Programmer Network offers two Jython resources: a simple JSP custom tag implemented in Jython and a simple Jython servlet. - Fourthought Inc. is a software vendor and consultancy specializing in XML solutions for enterprise knowledge management. Fourthought develops 4Suite, an open source platform for XML, RDF, and knowledge-management applications. - This comprehensive set of articles on developing Web services with Python can help you understand the workings of Jython development. - Find hundreds of other Java technology-related articles at the developerWorks Java technology zone. >.
http://www.ibm.com/developerworks/java/library/j-jython.html
crawl-002
refinedweb
2,553
60.65
Another way to get the cameras for the camera projections on the selected object. This time with no loops (well, except for the list comprehension on line 22). from siutils import si if Application.Version().split('.')[0]>= "11": si = si() # win32com.client.Dispatch('XSI.Application') from siutils import log # LogMessage from siutils import disp # win32com.client.Dispatch from siutils import C # win32com.client.constants # Get all CameraTxt operators ops = si.FindObjects2( C.siOperatorID ).Filter( "CameraTxt" ) # Filter function to get the CameraTxt ops under the selected object def f(x): o = si.Selection(0) return o.IsEqualTo( x.Parent3DObject ) # Get list of cameras cams = [ x.InputPorts(2).Target2.Parent3DObject for x in filter( f, ops ) ] if len(cams) > 0: print 'Projection cameras for %s:' % si.Selection(0) for c in cams: print ' %s' % c.Name This script is based on the observation that you can get the camera from an input port on the CameraTxt operator. I’m confused: why do you include a dispFix in this script? From your earlier post – – I gathered the whole dispatch problem was more or less dealt with (around XSI 6)… P.S. Your direct link to the previous entry doesn’t seem to work… There is a http:// stuck at the end of the URL Thanks, I fixed the link. dispFix is still needed sometimes…but as it turns out, not in this case…Thanks! I had stuck it in an earlier version of the script when something wasn’t working, and then just carried it forward. I am afraid I am going to have to beg you for a new post on this dispatch issue. In what kind of cases is a dispFix still needed? I don’t know really. But sometimes you find some method or property doesn’t work on an object, so you try dispFix(). Search this site, there’s two examples. One is the ViewportCapture object. Okay, I’ll try to live with the uncertainty… 😀 But wouldn’t that mean the classic fix (the “__init__.py file hack”) still is the best option as it should eliminate the problem once for all? Or is this not true either? I’ve never tried the classic __init__.py hack. I suppose if I kept running into more and more problem areas, then I might try it. But so far I’ve hit only three cases, and iirc two of them were kind of ‘corner cases’ like the ViewportCapture object, which is hard to get at if you don’t use Dictionary.GetObject. But if the problem still exists and Softimage does have its own “internal” Python now and even if only a few occasions are still problematic, wouldn’t it still be an idea to incorporate the __init__.py hack in future basic internal Python installations?
https://xsisupport.com/2012/06/06/finding-the-camera-used-by-a-texture-projection-part-ii/
CC-MAIN-2022-27
refinedweb
465
66.64
I've been enamored with the fantasy of making a programming language for a while. Not that I think I ever will, but I like to think about how it would work. I think I've learned a lot from studying very different languages and that I know how to combine the best ideas from all of them. I also think some of my ideas are genuinely original (as far as I've seen) and pretty good. And there's some chance someone who is in a position to design or influence a language will read this and get my good ideas. So here's the working spec for what I'd consider an ideal language. It's working name is Gold and stands for "Go Over Limits, Doofus". Basic usage¶ Must compile to native code, but should also be usable interactively. Haskell proves it's possible. Static typing with type inference, generics, sum types, and a C-like concepts of structs. I'm still unopiniated about lazy evaluation, referential transparency, and the Haskell-like syntax that they invite. I'll be writing most of the examples as if they use traditional syntax and the language is not generally lazy or referentially transparent. The error handling strategy¶ Default behavior on error is to throw upward. The name of an error type on an indented line below a statement will be followed by code to be executed in the case of that error before throwing. errcatches all errors. A string expression as the last statement of an errblock will be context to wrap the error with. The ignorestatement in an errblock means don't throw the error but continue. An errstatement after a block indented under tryapplies to the whole block (the errstatement should not be indented). Ignore an error: dangerous() err ignore Throw with context: dangerous() err "dangerous failed" Get the error reference: dangerous() err(e) "dangerous failed because:" + e Different behavior for different error types: dangerous() err_index ignore err_os close(file) An index error will be ignored, an OS error will close file and then throw, and any other type of error will just throw. Multi-statement err blocks: dangerous() err(e) close(file) print("An error occurred: " + e) ignore try block: try dangerous_1() dangerous_2() dangerous_3() err ignore The error will be caught and ignored if it happens anywhere in the try block. Syntax¶ No variable declarations. x = 5. If you need to specify a type, x = Int 5. #for comments, because it's only 1 char. #{.. #}for a comment block to easify commenting out many lines at once. Int literals: besides just using the decimal number, you can use 0xprefix for hex and 0bfor binary. Byte escapes: \x00, \d000, and \b00000000for byte codes. \nfor newline, \tfor tab, \rfor carriage return. ''is for declaring a Byte using the literal character in the source. u''makes a Char (and equivalent notation exists for other quoted literals). ""is syntactic sugar for Array Byte. """... """are multiline strings. ``are template strings like in Javascript. arr[i]is index syntax for arrays. dict[key]is lookup syntax for a Dict. .is access syntax for Struct fields. It's also method syntax: there's no difference in definition between a function and a method; any function can be called with the .syntax as if it were a method of its first argument. [a, b, c, d]is Array syntax. {a, b, c, d}is Set syntax. {a: b, c: d}is Dict syntax. _is a null variable name; it doesn't store the result. Mostly used for unpacking tuples. Line continuations are done like: start_of_long_line \ \ end_of_line_line This way neither line can look complete, and there's no indentation confusion. When inside parens or a similar character and breaking across lines, you don't need commas: items = [ item1 item2 item3 ] lower_snake_case for variable and function names, SCREAMING_SNAKE_CASE for constants, UpperCamelCase for type and maybe module names. Tabs for indentation. Set your editor to display them as 4 spaces so you don't have to pull your hair out. Misc¶ import module [as name]tells the language about another file that needs to be loaded. The module name can be the name of a package to find in a library directory, or a filesystem path. Its contents will be namespaced. Importing a file never runs code - all execution is traceable to main. exit Int- exit the process with the given status. There are no variadic functions, just take arrays as arguments. You can have multiple functions with the same name if they have different type signatures. As long as the calls aren't ambiguous. Binary operators can be given behavior on custom types because they're aliases for functions. For example, +is _add. STDIN, STDOUT, STDERR, ARGS, and ENVare available as global names without imports. Infix operators¶ or, and, not- in ascending order of how closely they bind ==, !=, <, >, <=, >=- comparisons that return Bool <>- comparison that returns Tri in- takes the form elem in containerand returns the Bool. Works for anything that implements Source. allin- takes two Sources and tests if every element of the first is in the second within- like allin, but requires them to be found in sequence Flow control¶ Non-looping¶ branch if cond1 do_1() if cond2 do_2() else do_default() Single statement clauses can be written on the same line with a colon: branch if cond1: do_1() if cond2: do_2() else: do_default() This is also an expression: var = branch if cond1: val1 if cond2: val2 else: val3 Clauses can be put onto one line: function(branch if cond1: val1; if cond2: val2; else: val3) branch is always necessary so that there's never any ambiguity about whether a lone if or if-else is part of the preceding branch or not. For example: branch if cond1: do_1() if cond2: do_2() if unrelated_cond: do_thing_that_should_happen_regardless_of_cond1_and_cond2() I don't want to need to indent branches under branch, or have an endbranch keyword or something. So that last part should just be written as: branch if cond1: do_1() if cond2: do_2() branch if unrelated_cond: do_thing_that_should_happen_regardless_of_cond1_and_cond2() A branch ends at the first statement on its level that doesn't start with if or else. Looping¶ while condition statement() for counter, item from items where criterion - iterates on elements of items where criterion is true, binding the element and the iteration counter to item and counter. Counter is not incremented when an item is skipped due to failing the criterion. If you want it to still increment, filter using a continue statement instead of the where clause in the loop header. break and continue can take an arg that says how many levels of loop out to go. Generators¶ Generator expression: results = function(v) for v from inputs where condition(v) The results are not evaluated immediately, but they can be iterated on. Comprehension: results = Array (function(v) for v from inputs where condition(v)) Converts the results to an Array (not lazy), so they're all evaluated. Function declarations¶ Functions are values, so there isn't a keyword to declare them; you just use the => to define a function literal and name it. double_num = num => num * 2 # Default value greet = (name = "Anon") => print "Hi, " + name + "!" Types¶ Bool- trueor false Tri- top, bottom, or middle. This is used in some situations like the <>comparison operator. Byte Char- UTF8 Int- infinite size Maybe so that size limits can be guaranteed for performance benefits if desired, we should have Int32, Int64and Bigintor similar and Intis their interface? Float Array a Set a Dict a b Tuples probably in the Haskell sense: a way of treating multiple values as one, but they have a separate type parameter for each slot. Interfaces¶ Source - anything you can get values out of. Iteration works on Sources. Arrays, Sets, Dicts, Files, and other stuff are Sources. Dest - anything you can put values into. Arrays, Sets, Dicts, Files, and other stuff are Dests. Seq - a Source that has a defined ordering of elements. Structs¶ struct Person # The colon specifies the type of the field. name : String age : Int # The = specifies a default value, inferring the type. admin = false alice = Person(name = "Alice", age = 22) Inheritance: struct Programmer include Person lang : String bob = Programmer(name = "Bob", age = 46, lang = "Gold") Any function that wants a Person will accept a Programmer. Enums¶ enum Color "blue" "red" "green" "yellow" Color is inferred to be an enum of String, so it can be casted to string without a converter function. The enum values can also be given names: enum Color "blue" blue "red" red "green" green "yellow" yellow Other thoughts¶ Should things pass by reference or value by default? Maybe there should be a way of specifying the initial capacity of a reallocating structure, like Go. Probably a capkeyword on any assignment.
https://yujiri.xyz/software/gold
CC-MAIN-2020-40
refinedweb
1,459
63.7
- RE: Case sensitive and ludicrous statements - PyDoc modifying the source. - Python on web with mod_python - entering the lists against CamelCase - How to graph two columns of x,y numbers? - apologies for a previous post mistitling - win32: tile desktop windows - Thanks for all the help - finding files that have extensions - timeout in socket.gethostbyname()? - Base conversion method or module - How to fresh or delete a file in azip-archive via zipfile module? - OT: Spell checker - Questions about datetime module - how to display clock? - Why Python won't work on .net - Instance Exception Oddity: Implicit and Explicit not the same? - ANN: advocacy article: Why your organization needs Python - Best structure for (binary) trees? - Some problems for exercises - Re: Accessing Python script with ColdFusion - I saw what you did - New inited instance of class? - RE: list.sorted() - finding python modules - Nub needs help withTkinter - Fuzzy Logic - Accessing Python script with ColdFusion - how to determine if files are on same or different file systems - RE: list.sorted() - Help: Uploading .zip to Python CGI - Installing Python 232 onto CD-ROM - Translation (used to be 'Moving around in a string') - Reading pipes in Python - newbie list question - ANN: Eric 3.3 released - How to use the outp(...) command - Re: medusa as win32 service - wxPython 2.4.2.4 segmentation fault on (Python 2.2.2 SuSE 8.2) - Moving around in a string - question on HTMLParser and parser.feed() - memory usage - help with awt and jython - min and max id numbers - Python modules for manipulating multimedia files? - Newbie question: Any way to improve this code? - Re: memory leak - C API - Perplexing buffering(?) problem - memory leak - C API - Re: Lists and Tuples - RE: Threading - Why Not Lock Objects Rather than lock theinterpreter - Unicode 4.0 support - FAQ or HOWTO on windows event logs - When is bare except: justified? - Object finalization - Re: Turning f(callback) into a generator - xml-rpc/twisted question - Tkinter binding list - How to I delete key from dictionary ? - PyDoc Private Functions - PyProtocols 0.9.2 Bug fix release - ANN: Leo 4.1 beta 5: An outlining editor - How to tell the difference between string and list - RELEASED Python 2.3.3 (release candidate 1) - Threading - Why Not Lock Objects Rather than lock the interpreter - tkFileDialog.askopenfilenames not regonizing -multiple option - Thank you - How to call a python function in visual c++ - Yet another newbie question - standard or built-in methods - xml - Getting Extended Error Information - Embedding Python in C++ - Embedding Python in C++ - Lists and Tuples - why embed demo crash on freebsd4.8? - regexp question - xml.dom.minidom -> nextElement ? - import from the new logging package - ANN: pcalc 3.0 - a Python powered MFC calculator - har har - [maintenance doc updates] - HTTP Location: supported by HTTPServer? - returning a list of tuples -- C API - Why 'string' functions i.e. rstrip, upper, lower are 'not found'......... - anydbm bug ? - stddev bug in util.py - py2exe: problem including libxml2 - python style guide? - Web authentication - Any interest in a Python for .NET sprint at PyCon 04? - ISO to Gregorian, strptime madness - Creating a List of Empty Lists - Simple newbie question - Hopelessly Lost And Desperate Newbie - Web authentication - weakrefs - Newbie data structes question - PyThreadState_SetAsyncExc ignores except: clause - Python Embedding in C++ - scipy for Python 2.3 on win? - py2exe question from a total beginner - How to I get the exception message as string in GUI - Writing to the parallel port (Mem 0x378) when runnning WinXP or Mandrake 9.2 - Re: [Boa Constr] Download of 0.2.7 Available - I want to know to create our own TestResult Class - Python and Zope Jobs Yahoo Group - Python and Zope Jobs Yahoo Group - thread vs threading -- Unexpected Results - Python and Zope Jobs Yahoo Group - How to log the TestResult in the Excel or Notepad.... - customize Tkinter window button~ - newbie: suppressing return values - RE: Turning f(callback) into a generator - String length ... len(str) - Beginners Tutorial - How to know a system command doesn't exist - Why Python (a bit different) - graphs in python... - Python bounties - "import dbxml" hangs --- what to look for? - lambdas (lambda x: printx) - How to share 3rd-party modules between 2.2 and 2.3? - Re: diferences between 22 and python 23 - Turning f(callback) into a generator - reloading modules - diferences between 22 and python 23 - Newbee needs Help ref Using Function Statements not Loops(For) - Dr. Dobb's Python-URL! - weekly Python news and links (Dec 3) - Adding attribute to object - In a function, how to get the caller object ? - How to use TestResult Class - Python mem leaks? - Asynchronous SimpleXMLRpcServer - Random traceback - RE: SUSPECT: Re: Python, xml, databases, ... - Remove no-printable characters in string - Remove no-printable characters in string - redirecting Stdin - Adding method to object - [ python-l ] scrolling a frame - Python, xml, databases, ... - Building a GUI agnostic database application - Chameleon ++ - Weird idle behavior for GUI apps - socket settimeout ? - Tkinter question: how to the red "Tk" window title bar icon? - Python/Zope Jobs - Advice on Python C modules - import of 'string' fails in CGI-BIN script...but works in IDLE - Stuck newbie - ANN: python-ldap-2.0.0pre16 - pysexpr - The Snake goes Lisp - slightly OT: how do you do that? - PyCon DC 2004 - Submissions Deadline Extended - [ANNOUNCE] PySQLite 0.5.0 - Module for doing some Parsing in Python - [ python-list ] scrolling a frame - Handling exceptions consistently with the logging module. - Re: notes on compilation Python with MinGW gcc - Telnet server implementation - [pysnmp] - pygtk talk oinline - pixels -> metrics - Can Python be run off of a CD-ROM? - Names for modules and functions - newbie - integer variable with leading zero for os.makedirs - Equivalence of dictionary keys? - Website Design - language learning vs. process - Build failing on Solaris - calling programs from python - Looking for a Python accessible OS framework. - NumTut corrupted - Scientific Python - FD_SETSIZE not working in 2.3.2? - sys._getframe() not behaving as expected - Re: notes on compilation Python with MinGW gcc - PySqlite client_encoding, stores fine but queries don't return unicode - python on handhelds devices (IPAQs) - notes on compilation Python with MinGW gcc - Re: Using and Implementing iterators with classes such as linkedlists - help with soap, please (java axis) - RE: Python for web ? - leftmost longest match (of disjunctions) - wxPython / styles and marks in TextCtrl - passing global data to a function - ReportLab sponsor the XML Conference 2003 - PyCon DC 2004 - Registration about to open! - tkFileDialog.askdirectory - gzipping several files - PyNum, A multidimensional array where the elements are arrays? - Turning off warnings - Re: compile(unicode) & source encoding - split on blank lines - comparing nan "number" - RE: Binary number manipulation - win32net.NetUserChange vs XP - wxPython: combining wxListCtrl and wxCheckBox - un-tuple (newbie) - How to delete this file ??? - PyArg_ParseTupleAndKeywords and optional unicode arguments - Changing the contents of a displaying gtk window thorough the commandline - I just want to say that python is a COOL language - python-list, Six Free V1AGRA Tabe ts, Limited Time Offer. - Using and Implementing iterators with classes such as linked lists - Implementing Iterators in, e.g., Linked Lists - Python and Zope Jobs Yahoo Group - Tkinter window button~ - Python for web ? - Python Learning Foundation - SF bug / patch / RFE tracker search? - segfault in extension module - struct module usage - Unicode output to file - speed of spambayes? - RE: Fast attribute/list item extraction - how to show a bitmap? - medusa as win32 service - New Forums - custom sorting and __cmp__ - Fast attribute/list item extraction - pil bug? - win32print how to lower cpu load? - problem capturing output from dos program - running a dos program and catching the output - Re: python-list query re sockets, - RE: Bug in glob.glob for files w/o extentions in Windows - Bug in glob.glob for files w/o extentions in Windows - referencing an object attribute sort of indirectly from within list - PythonWin IDE doesn't save all project files?? - Need some advice - cygwin tcltk 8.3 - Python Book - The namespace for builtin functions? - ANN: mailbox_reader 1.0.4 -- Python module to read UNIX mailboxessequentially. - RE: list: from 2 to 3 dimensions..looking for a nice way - list: from 2 to 3 dimensions..looking for a nice way - sorting dictionary keys? - SGMLParser eats ä etc - Filtering through an external process - ActiveState Python won't call module function. - sockets, select or: twisted and other file-descriptors - py_mpgedit SDK 0.1 beta released - Graph algorithms - DFS, generators callbacks, and optimisation - How to Extract the Control Name in VB Application Using Python - Python On WinCE.Net (x86) - GMPY compare warning - Importing from a file to use contained variables - Sun audio blocks, even when run in a thread - PyProtocols 0.9.1 Released
http://www.velocityreviews.com/forums/archive/f-43-p-275.html
CC-MAIN-2013-20
refinedweb
1,403
56.96
- Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time. Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking? Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place. Admin Anyone who does not thoroughly elicit and analyze requirements (functional and architectural) and likely change cases will tend to end up designed into a corner. Sure, you can refactor your way out of some corners, but architectural limitations typically don't lend themselves to that. You can architect a system to flex in ways that allow you to cover reasonable changing or "found" requirements - in fact doing otherwise is a ticket to failure. I find it amusing that you feel the need to imply that I don't know what I am doing. I do. Admin Enough with the sermons. Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and a F then I'll do you a favor and give you a free U, as in: F. U. CAPTCHA: ewww (my thoughts exactly) Admin Multiple sites: scalability. Multiple clients: scalability. Multiple products: I don't know what the heck this means, but in all probability extensibility, not scalability. Multiple forms: extensibility. "etc": unknowable at this time. I suppose the last three would (in a very minor way) fall partially under "scalability" if you have to juggle around with extents and the like, but I thought that Oracle did all that for you, these days . There's enough namespace pollution in computer terminology already, without needlessly adding to it and obfuscating a perfectly useful and well-understood term.I think the obvious solution would have been to hold off building Boston until around 1950, when the little local difficulty with motorised traffic started to become apparent. Damn those eighteenth century town planners ... The exercise of eminent domain in the case of Faneuil Hall and the like might prove a mite contentious outside the rather narrow viewpoint of the Al Qaeda school of civic rectitude. Admin Admin Isn't this just what agile methods are all about, making it easier to adapt to change? A few months ago Alex was bashing agile (), but has he now changed his opinion? As for myself, I've experienced TDD and related methods to be good for producing high quality code. Some of the other agile practices, especially how the requirements are gathered from the users by gatherin user stories which are right away written as code, I'm slightly doubtful about. Not because they would not work, but because I know better methods (which also happen to be iterative and test-driven) for designing systems which do what the user needs (not what the users says he would like to have) and which produce user interfaces with high utility, efficiency and learnability. Addendum (2007-10-11 17:03): PS: I think it would be good to have a new category for "serious articles" (such as this and The Mythical Business Layer and the older ones), so that it would be easy to find them afterwards from amongst all the "traditional WTF articles". Admin We all know that building designs are not changed lightly: to change a building after the foundations are laid is to invite expense and risk failure. We all know software designs are changed lightly, causing expense and failure. Time to stop pretending that large codebases can change; If a customer provides a new set of requirements, that equals a new product. New code, from scratch. This way, the customer gets the real price, up front. Frequent change after the project has started have transparent costs. Changed requirements are still possible, but to do so requires a new contract. New contract means a new delivery date and a new price. The old code is thrown away. It's the sort of thing which would need serious laws to enforce. We need the respect and responsibilities of architects. (insert random real profession with professional liability and enforced accreditation). However: we current operate like the less respectable carnival sideshow operators. And some have to take orders from clowns. Upside of scary grown-up system: we could deliver working products. And feel good about ourselves again. Admin The problem: The team decided to keep it simple: no database, no special configuration and no extensibility. The solution: Perhaps add a database, special configurations and extensibility? No! Instead, use Haacks Rule of Three: just hack in kludges, preferably with a straight copy/paste until the architecture starts groaning under the weight then undo the last few months work for a complete rewrite just before it collapses. Can't beat the Microsoft Way. Admin Amen to that! My friends and I have been advocating architect/engineer-like licensing for software professionals for years now - and we're only ~25 - because we know we'd be able to get our licenses, and the people who turn this industry into a joke (in college we had professors who wrote worse code than we did) would have to find something else to do. just please, please if this ever happens, make it required for the "PHB" to have his license, too... captcha: sanitarium - see, even the captcha bot wants us to clean up the software landscape! Admin Reading all of this is pretty funny to me. This project failed because of massive supplies of idiocy. Now, the notion that people without formal training cannot do projects like this ... oh, that got me laughing. The largest idiots I've known in my 30+ years of development have had college degrees. This project and most others that fail miserably like it fail primarily because these companies fail to do one important thing: hire and foster high-quality talent. Another huge issue is the failure to break such massive projects down into managable subsystems. It's now en vogue, but I've done it for 20 years. It works great. Scalability: always plan on it. There's always some joker. Just plan on it. And ... please, have some of these college-educated moronic managers read The Mythical Man-Month. You can't just toss more consultants at a project and get it on time. In fact, that's often the reverse. A while back I was on a project that got bloated and horrible. Management kept adding people. Finally, management said that they were going to trash the project after a couple years of development. I went to the head boss and said told him that if we kept the core and talented folks, canned the rest, and started over ... that we could be up-and-running within 60 days. He took the risk, cut the team from 20 down to 4, and kicked serious code ass. Documentation: we did architectural documentation and used automatic systems (such as those now found in VS.NET) to manage the documentation of functions, classes and modules. Much of the code was self-documenting as we were brutal with each other about naming ... if something wasn't clear, we made the developer responsible change it. So, let's put the blame for this large failures where it belongs: with the idiots who shouldn't be hired for this work in the first place. Admin Is there a larger image for "The code to ruin". I want to print a poster and put it in my cube. Admin I concur. I am close to having screwed some of my deadlines by generalising too early, spending time on a wonderful infrastructure that is at present scarcely used. Admin Admin That's a very different version of the 0, 1, Infinity rule than I've seen before. The version I'm familiar with says that in a system, for any particular thing, you should allow either no instances (that is, it is prohibited), exactly one instance (that is, an exception to the prohibition), or infinite examples (or, at least, limited only by system resources). The purpose to this is to avoid placing arbitrary limits on things that actually are just arbitrary. You wouldn't want to have, for instance, a mail client that placed a purely arbitrary limit on the length of folder names, or the number of folders, or the depth of folder nesting, because it would be frustrating under some circumstances. Thus, you should avoid writing software that does that kind of thing. Admin [quote] I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?[/quote][/quote] This has already been answered, by I'd like to add my 2 cents. Actually, it's pretty easy for code to get outdated. Of course its not outdated with respect to the code, but it becomes easily outdated with respect to the reasons for the code being there. Seen that a lot. E.g., code circumvents some obscure bug in library X, but is 10x slower than it should be. Obscure bug is fixed, code never gets removed/switched back to the original code. You need to document things like that! Filing a bug in your bug tracker for it, could be a way of doing that. I would even go as far as filing that bug and putting a comment in the code, that points to it. Maybe even link that bug to the bug report you created upstream. Then again, I've seen programmers, which can't even read a comment two lines above the line of code they are reading and adding a comment, asking about the reasons for doing something in the code, that were explained just two lines above ... go figure.
https://thedailywtf.com/articles/comments/Avoiding-Development-Disasters/3
CC-MAIN-2019-35
refinedweb
1,697
63.49