Rolling Averages

In statistics, a rolling average (also called a moving average, and sometimes a running average) is used to analyze a set of data points by creating a series of averages of different subsets of the full data set. A moving average is therefore not a single number but a set of numbers, each of which is the average of the corresponding subset of a larger set of data points. As a simple example, given a data set with 100 data points, the first value of the moving average might be the arithmetic mean (one simple type of average) of data points 1 through 25. The next value would be the same simple average of data points 2 through 26, and so forth, until the final value, which would be the same simple average of data points 76 through 100.

In Robocode terms, a rolling average is used to keep an average of more recent data instead of all data collected. This is useful in most Statistical Targeting systems for hitting enemies that change their movement frequently. To match the general description above, Robocode's rolling average only considers the most recent set of points to be averaged. Rolling averages are recommended over straight averages because they can adapt to an adaptive enemy. In practice, the most commonly used form of rolling average is an exponential moving average.

Rolling averages were brought to the Robocode community by Paul Evans. The first averaging code published by Paul is still used among top bots in the rumble.

Robocode's Rolling Averages

Ordinary Code - by Paul Evans

public static double rollingAvg(double value, double newEntry, double n, double weighting) {
    return (value * n + newEntry * weighting) / (n + weighting);
}

You feed the function the current averaged value (value), the value to be averaged (newEntry), the weighting on the old value (n) and the weighting on the new value (weighting). The function returns an exponential rolling average.
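The Java function above transcribes directly into other languages. A minimal Python sketch (function and variable names are mine) showing that, with n held fixed, it behaves as an exponential moving average with alpha = weighting / (n + weighting):

```python
def rolling_avg(value, new_entry, n, weighting=1.0):
    # Direct transcription of Paul Evans' rollingAvg.
    return (value * n + new_entry * weighting) / (n + weighting)

# With n fixed at 9 and weighting at 1, each update keeps 90% of the old
# average and gives 10% weight to the new entry, i.e. an exponential
# moving average with alpha = 1 / (9 + 1) = 0.1.
avg = 0.0
for reading in [1.0, 1.0, 1.0]:
    avg = rolling_avg(avg, reading, 9.0)
print(avg)  # approximately 0.271 after three steps of avg = 0.9*avg + 0.1
```

Note how older readings never leave the average entirely; their weight just decays geometrically, which is exactly why this adapts to movement changes faster than a straight average.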
Note that having both n and weighting is redundant; many implementations simplify them into a single value expressing the bias towards newer data.

SpareParts Version

Here is the implementation of a simple rolling average used by Kawigi in his SpareParts. It is easier to understand, but slower.

public class Averager {
    private int totalentries;
    private double[] recent;

    public Averager(int size) {
        recent = new double[size];
        totalentries = 0;
    }

    public void addEntry(double entry) {
        recent[totalentries % recent.length] = entry;
        totalentries++;
    }

    public double recentAverage() {
        if (totalentries == 0)
            return 0;
        double total = 0;
        for (int i = 0; i < Math.min(totalentries, recent.length); i++)
            total += recent[i];
        return total / Math.min(totalentries, recent.length);
    }

    public int totalEntries() {
        return totalentries;
    }

    public int recordedEntries() {
        return Math.min(totalentries, recent.length);
    }
}

Sample Implementations

Rolling averages can be implemented in various ways; here are some examples.

Time Weighted Rolling Average - by Robert Skinner

public static double timeWeightedAverage(double newValue, double oldValue, double deltaTime, double decayRate) {
    double w = Math.exp(-decayRate * deltaTime);
    return w * oldValue + (1 - w) * newValue;
}

public static double computeDecayRate(double halfLife) {
    return -Math.log(0.5) / halfLife;
}

This is another implementation of an exponential rolling average. It is arguably clearer than the Paul Evans version above, but essentially equivalent: in that version, deltaTime is fixed at 1 and the decay rate is split redundantly into n and weighting. This version also lets you calculate the decay rate needed for a specific half-life of data (that is, how many ticks pass before the weight of a piece of data falls to 50%).

(More to be added...)

Code Example

Many Statistical Targeting algorithms use rolling averages.
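Before the targeting example, the half-life relationship in Skinner's time-weighted variant above can be checked numerically. A Python sketch (names are mine):

```python
import math

def compute_decay_rate(half_life):
    # Decay rate such that old data's weight falls to 50% after
    # half_life ticks: w = exp(-rate * half_life) = 0.5.
    return -math.log(0.5) / half_life

def time_weighted_average(new_value, old_value, delta_time, decay_rate):
    w = math.exp(-decay_rate * delta_time)
    return w * old_value + (1 - w) * new_value

rate = compute_decay_rate(10.0)
# A single update spanning exactly one half-life (10 ticks) blends the
# old and new values 50/50.
blended = time_weighted_average(0.0, 1.0, 10.0, rate)
print(blended)  # approximately 0.5
```

Because the weight depends on deltaTime, this form handles irregularly spaced readings (e.g. waves that break on different ticks) without any extra bookkeeping.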
Here are some improvements to the GuessFactor Targeting Tutorial to make it use a rolling average. It uses Paul's code above. Call this function every time a wave hits.

static int readings = 0;

// In the WaveBullet class, replace the old checkHit with this one.
...
        returnSegment[index] = rollingAvg(returnSegment[index], 1, Math.min(readings++, 200), 1);
        for (int i = 0; i < returnSegment.length; i++)
            if (i != index)
                returnSegment[i] = rollingAvg(returnSegment[i], 0, Math.min(readings++, 200), 1);
        return true;
    }
    return false;
}

Don't forget to change each int[] stat into double[] stat. This code uses a rolling depth of 200 readings. It should be used in conjunction with Bin Smoothing for the best performance, because bin smoothing ensures that you don't roll a zero into every bin that wasn't hit that time.
http://robowiki.net/wiki/RollingAverage
Do you ever get these kinds of messages when you compile your project?

------ Build started: Project: Suteki.Shop.CreateDb, Configuration: Debug x86 ------
No way to resolve conflict between "System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" and "System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35". Choosing "System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" arbitrarily.
Consider app.config remapping of assembly "NHibernate, Culture=neutral, PublicKeyToken=aa95f207798dfdb4" from Version "3.0.0.2001" [] to Version "3.0.0.4000" [D:\Source\sutekishop\Suteki.Shop\packages\NHibernate.3.0.0.4000\lib\NHibernate.dll] to solve conflict and get rid of warning.
Consider app.config remapping of assembly "System.Web.Mvc, Culture=neutral, PublicKeyToken=31bf3856ad364e35" from Version "2.0.0.0" [C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 2\Assemblies\System.Web.Mvc.dll] to Version "3.0.0.0" [C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 3\Assemblies\System.Web.Mvc.dll] to solve conflict and get rid of warning.
C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1360,9): warning MSB3247: Found conflicts between different versions of the same dependent assembly.
Suteki.Shop.CreateDb -> D:\Source\sutekishop\Suteki.Shop\Suteki.Shop.CreateDb\bin\Debug\Suteki.Shop.CreateDb.exe

The problem is that the build output doesn’t tell me which of my assemblies references version 2.0.0.0 of System.Web.Mvc and which references version 3.0.0.0. If you’re writing software using lots of 3rd party assemblies like I do, it’s a constant problem. I’ve written a little bit of code that I drag around with me that outputs lists of assemblies that my assemblies reference. I’ve found it very useful for resolving these kinds of issues.
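The warnings themselves name the standard fix: an app.config binding redirect. For the System.Web.Mvc conflict above, that remapping might look like the following sketch (versions taken from the warning text; adjust the assembly identity and versions for your own project):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc"
                          publicKeyToken="31bf3856ad364e35"
                          culture="neutral" />
        <!-- Force every reference to the 2.0.0.0 assembly onto 3.0.0.0 -->
        <bindingRedirect oldVersion="2.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

A redirect only silences the warning safely when the newer version is actually compatible with what the older callers expect, which is exactly why you first need to know who references what.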
Now I’ve wrapped it up as a little console app, AsmSpy, and put it on github here: Or you can download a zip file of the compiled tool here:

How it works: simply run AsmSpy, giving it the path to your bin directory (the folder where your project's assemblies live). E.g.:

AsmSpy D:\Source\sutekishop\Suteki.Shop\Suteki.Shop\bin

It will output a list of all the assemblies referenced by your assemblies. You can look at the list to determine where versioning conflicts occur. The output looks something like this:

....
Reference: System.Runtime.Serialization
    3.0.0.0 by Microsoft.ServiceModel.Samples.XmlRpc
    3.0.0.0 by Microsoft.Web.Mvc
    4.0.0.0 by Suteki.Shop
Reference: System.Web.Mvc
    2.0.0.0 by Microsoft.Web.Mvc
    3.0.0.0 by MvcContrib
    3.0.0.0 by MvcContrib.FluentHtml
    3.0.0.0 by Suteki.Common
    2.0.0.0 by Suteki.Common
    3.0.0.0 by Suteki.Shop
    2.0.0.0 by Suteki.Shop
Reference: System.ServiceModel.Web
    3.5.0.0 by Microsoft.Web.Mvc
Reference: System.Web.Abstractions
    3.5.0.0 by Microsoft.Web.Mvc
....

You can see that System.Web.Mvc is referenced by 7 assemblies in my bin folder. Some reference version 2.0.0.0 and some version 3.0.0.0. I can now resolve any conflicts.

14 comments:

Holy smokes, are you spying on me? This is exactly the problem I've been having the last couple of days. I've been using PowerShell in a similar way, but not looking at the references inside files (I was using Reflector for that), more to find out what versions of what assemblies are where in my working copy. Something like this:

PS C:\data files\projects> gci -Filter Castle.Core.dll -r | %{ $_.VersionInfo } | group-object ProductVersion

Awesome, this helped me today... cheers.

Thanks Mark, this has already helped me.

Very nice and useful little tool. Maybe a SortedDictionary would be nice for the output. Thanks!

Marco, do you mean sort the dependencies by name?

Was talking about sorting the references on namespace for easy searching, but maybe the other way around might be useful too.

Great work ...
or you may add the following to the section of your *.config file: *-----* Replace *--- with a less-than and ---* with a greater-than symbol.

I'm having a terrible time getting this to work on projects that reside on a network share. Any ideas of how to get it to work? I'm getting a "Failed to load assembly: " error. Bernie

Thank you, this helped me.

I run it in another way: cd to your bin, then: C:\alreay\at\bin\asmspy .

Brilliant! Helped me so much :) Thanks.

Any chance you can add *.EXE, in addition to *.DLL?

Hi Anonymous, done, just get the latest from GitHub.
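The heart of what AsmSpy reports can be sketched in a few lines (a toy model for illustration, not the tool's actual code): collect every (referrer, referenced assembly, version) triple, group by assembly name, and flag any name referenced at more than one version.

```python
from collections import defaultdict

def find_conflicts(references):
    # references: iterable of (referrer, referenced_name, version) triples,
    # e.g. as harvested via reflection from each assembly in a bin folder.
    versions_by_name = defaultdict(set)
    for referrer, name, version in references:
        versions_by_name[name].add(version)
    # A conflict is any assembly referenced at more than one version.
    return sorted(n for n, v in versions_by_name.items() if len(v) > 1)

refs = [
    ("Suteki.Shop", "System.Web.Mvc", "2.0.0.0"),
    ("MvcContrib", "System.Web.Mvc", "3.0.0.0"),
    ("Suteki.Shop", "System.Web.Abstractions", "3.5.0.0"),
]
print(find_conflicts(refs))  # ['System.Web.Mvc']
```

The expensive part in the real tool is harvesting the triples from compiled assemblies; the conflict detection itself is just this grouping.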
http://mikehadlow.blogspot.com/2011/02/asmspy-little-tool-to-help-fix-assembly.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+CodeRant+%28Code+rant%29
Product Version = NetBeans IDE 6.9 RC2 (Build 201005312001)
Operating System = Windows 7 version 6.1 running on x86
Java; VM; Vendor = 1.6.0_20
Runtime = Java HotSpot(TM) Client VM 16.3-b01

After refactoring, the IDE didn’t see one source file even though the operating system and other programs saw it. Restarting the IDE didn’t help. I had to start NB 6.8, correct one small error in the source code of this file, and clean and build the project. After closing NB 6.8 and opening NB 6.9 RC2, the file became visible again. However, after some modifications of other files it disappeared again and I had to repeat the process of opening and rebuilding the project in NB 6.8.

Created attachment 100029 [details] zipped log files
Log files for the described problem.

Created attachment 100139 [details] Zipped problematic project
The error appeared in a new project. One of the files is invisible in the Projects window as well as in the Files window. However, the Open dialog sees it — but if I ask to open it, the IDE halts. The compiler and Ant see it too, because "Clean and Build" finished without problems. Despite that, the editor still announced errors, and the successfully compiled source file was still invisible to the Projects and Files panels. I am sending this mini-project in a ZIP file. The "invisible file" is the source file of class adventura_115_10J.rámec.IMístnost_115; however, the similarly named class adventura_101_09P.rámec.IMístnost is visible without problems. In addition, when I write the import statement, the IDE can suggest the first package, then I have to continue without suggestions until I write the whole name "rámec", after which the IDE again continues with suggestions — however, the "invisible class" is not among the suggested names. Version 6.8 did not have these problems; they came with the new version of NB.
Created attachment 100140 [details] Screenshot of a dialog with an error message
In addition, when I want to create a new interface in the root package, the IDE creates an empty file with the given name, but then announces the error shown in the attached screenshot. Other programs have no problems with this file.

Returning back to filesystems: the file is shown neither in the package view nor in the files view. And it's not shown because it's not returned by FileObject.getChildren(). FileObject("TEST_NB_6.9/src/adventura_115_10J/r†mec").getChildren() returns 10 children, but there are 11 of them, and the missing one is IM??stnost_115.java.

Please note that just running the jar tool on the ZIP fails:

$ jar xf zipped-problematic-project.zip
java.lang.IllegalArgumentException
    at java.util.zip.ZipInputStream.getUTF8String(ZipInputStream.java:324)
    at java.util.zip.ZipInputStream.readLOC(ZipInputStream.java:264)
    at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:91)
    at sun.tools.jar.Main.extract(Main.java:868)
    at sun.tools.jar.Main.run(Main.java:260)
    at sun.tools.jar.Main.main(Main.java:1167)

I don't think I can do anything with this. The ZIP contains wrong UTF-8 characters, and Java is incapable of working with such directories. Try running the following program on the content of your ZIP:

import java.io.File;

class list {
    public static void main(String[] args) {
        File f = new File(args[0]);
        for (String s : f.list()) {
            System.out.println(s);
        }
    }
}

TEST_NB_6.9/src/adventura_101_09P$ java list r?mec
Exception in thread "main" java.lang.NullPointerException
    at list.main(list.java:8)

Fix the ZIP encoding and the problem shall disappear.

Created attachment 102830 [details] Self-extracting archive (JAR file) with a NetBeans project
I attached a self-extracting JAR file containing a super simple NetBeans project demonstrating the mentioned problem.
In this project there are two files in the root folder:

IM\u00EDstnost.java
IM\u00EDstnost_115.java

The first one is visible in the project window but the second one is not. As you can see, these names differ only by the suffix “_115” in the name of the second (invisible) file. The problem occurred on Czech Windows 7, version 6.1, build 7600. The problem also occurred in NetBeans 7 M2.

Last week we fixed bug 189988, which may be closely related to this problem. I tried to unpack the JAR file with the NetBeans project in today's build:

$ unzip 102830.jar

The file IM+-*stnost.java is shown, but only for a while — either after a few minutes, or on refresh after the main window is activated (yes, I am observing bug 191720), it disappears again. Either this is a slightly different problem or bug 189988 is not properly fixed. Btw, I also tried

$ jar xf 102830.jar

and that worked fine on Kubuntu 10.10 (as it created proper UTF-8 characters).

I have tracked this problem down to the FileName encoding of IMístnost_115.java as IMírnost_115.java (notice the off-by-one difference after the problematic letter). The FS couldn't find the file (under the broken name), which leads to a host of other problems (like issue 190949, reported by one of the evaluators, it seems ;-)). Luckily, the problem really went away with the fix for bug 189988. Once I updated openide.utils, I'm no longer able to reproduce this, but if I roll back oou.CharSequences to 3b8b7f3f4d66, it appears again, and a similar problem is caught by the (new) CharSequencesTest.testMaths:

expected:<11ü[1]111111111111> but was:<11ü[0]111111111111>

So I believe this is fixed now, though I'll commit the test I wrote while evaluating this...

Test: Integrated into 'main-golden', will be available in build *201011100000* on (upload may still be in progress)
Changeset:
User: Petr Nejedl

I've tried build 201011100000 on my computer and now all the previously problematic files really are visible.
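The NetBeans bug itself was an off-by-one inside its CharSequences class, but the broader failure mode in this thread — bytes of a non-ASCII file name decoded with the wrong charset, producing a name that no longer matches the directory entry — is easy to reproduce. A Python sketch (illustrative only; not NetBeans code):

```python
name = "IMístnost_115.java"
# Encode with one charset, decode with another: the classic way a file
# name stops matching the directory entry it came from. 'í' (U+00ED)
# is two bytes in UTF-8, so a single-byte decoder splits it in two.
mangled = name.encode("utf-8").decode("latin-1")
assert mangled != name
assert len(mangled) == len(name) + 1  # one character became two
```

Once a lookup key is mangled this way, every layer above it (getChildren, code completion, the editor) sees a file that "does not exist", which matches the symptoms reported here.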
https://netbeans.org/bugzilla/show_bug.cgi?id=187487
Preface

This page was added prior to the release of GML version 3.1.1. At this point, the deficiencies described herein should be taken to warn unwary users away from any GML3 prior to 3.1.1. The last comment on this page is an early evaluation of 3.1.1 with a set of validating parsers. It appears that the DOM-style parsers fare better than the SAX-style parsers, but GML 3.1.1 is indeed expressed in correct XML Schema. In any case, due to the facts mentioned below, one should be extremely skeptical of any product, person, or label which claims compatibility with a version of GML3 prior to 3.1.1. As far as usability is concerned, 3.1.1 is the first (and the only) GML3.

GML3 Release Status as of October 13, 2005

GML 3.1.1 has been released. This occurred sometime between August 19, 2005 and now. However, there seems to be a "housekeeping error" with respect to the link on the specification page: the zipfile downloaded by the link on the OGC webpage still contains the old documentation for 3.1.0. I have not compared the schemas to see if they have been updated. However, the schemas for 3.1.1 are hosted on the OGC schemas website.

Introduction

If my understanding is correct, this document presents fatal flaws in GML 3.1.0 (and probably 3.0.0). These flaws manifest themselves in the Implementation Specification as invalid XML Schema. As the schemas are invalid, it is of course impossible to produce a "technically valid" GML 3.1.0 instance document. Worse, the errors are of a "nonsensical" nature, meaning that the schema attempts to express concepts which simply do not make sense. If I am correct, fixing these errors requires a human to read the schema in conjunction with the specification document to produce valid XML Schema which conforms to the "perceived intent" of the specification.

This document is structured as follows:

It is good to remind the reader at this point that XML is a markup language and not a programming language.
By itself, a well-formed XML document does not need to conform to any schema. To be well-formed, an XML document must have a single root element, and each element must have matching opening and closing tags. XML Schema is an XML application used to define legal combinations of elements and legal places for elements with certain names to appear. XML Schema also provides ways to specify legal values for elements.

An XML application (e.g., GML) expressed in XML Schema defines a framework that XML documents can use to store information. These XML documents can be validated against the grammar and constraints specified by the XML application, and data-carrying documents can then be said to be conformant or noncompliant with respect to that application. It is entirely possible for an XML document to be well-formed but noncompliant with respect to a particular application.

This is the crux of the problem: when an XML application like GML is expressed with an invalid XML Schema, it is impossible to construct an XML document which conforms to the standard. Since XML Schema is all about specifying which tags are legal where (and what they are allowed to contain when they do appear), an invalid XML application might as well not exist. The whole point of GML is to encourage interoperability by providing one unique expression of spatial concepts, permitting vendor-neutral access to data. Invalid XML Schema in the standard is more than just inconvenient or difficult to work with: it defeats the purpose of having the standard.

Types and Elements

Unlike the programming languages of which I am aware, XML Schema decouples types from the elements which possess type attributes. On the other hand, XML Schema is not a dynamically typed language (like Python) either. It is nearly correct to state that an XML Schema type is like a typedef in C, but this also falls short of capturing the concept. XML Schema is a statically typed language.
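The well-formed-versus-valid distinction above can be demonstrated with any stock XML parser: a plain parser checks only well-formedness, so a document can parse cleanly while saying nothing about conformance to GML or any other schema. A minimal Python illustration:

```python
import xml.dom.minidom as minidom

# Well-formed: one root element, every tag matched. A plain parser
# accepts it regardless of what any schema would say about its content.
doc = minidom.parseString("<root><child>data</child></root>")
print(doc.documentElement.tagName)  # root

# Not well-formed: mismatched tags. The parser rejects it outright.
try:
    minidom.parseString("<root><child>data</root>")
    well_formed = True
except Exception:
    well_formed = False
print(well_formed)  # False
```

Schema validation is a separate, additional step layered on top of parsing — which is exactly why an invalid schema leaves no way to call any document "valid GML".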
The concept which has been giving me fits is that the objects possessing type information ("elements") are not capable of containing data. The closest analog I can draw is that a C typedef may be considered the association of an XML Schema type with an XML Schema element. Nothing resembling a "variable" is present in XML Schema. "Variables" are always instantiated and initialized in one atomic operation by the "instance" XML document which conforms to the schema. Furthermore, "variables" in XML instance documents are always anonymous, no-name-having things. (Sidebar: one could possibly consider the concept of a "key" to be analogous to a "variable name", but that's not relevant to the discussion.)

Having satisfied the need to identify what composes the traditional notion of a "Type" and what constitutes a "variable", a discussion of inheritance can ensue. It turns out that in addition to being a statically typed language, XML Schema is a semi-object-oriented language. The prefix "semi" is due to the fact that the traditional notion of a "Type" is composed of two things (an XML Schema type and an XML Schema element), and these two things behave differently. This is the topic of the next section.

Type inheritance and Element Substitution Groups

An XML Schema type possesses inheritance properties, but an XML Schema element does not. Because these two objects, taken together, are what allow the expression and storage of data in an XML instance document, and because these two objects have different properties, XML Schema possesses only semi-object-oriented concepts. I will remind the reader once again (because I find myself continually falling into a programming-language mindset) that XML is a markup language and not a programming language. In this context, inheritance means inheritance of markup, not inheritance of methods to operate on data. This is XML Schema's mechanism for re-using collections, sequences and choices of tags in multiple contexts.
XML Schema type inheritance

XML Schema types possess a knowledge of inheritance. Unlike Java and C++, child types are derived from parent types by either extension or restriction. Extension works much as you would expect: any markup in the child type is appended to the parent type. Restriction works by any or all of several mechanisms that narrow what the parent allows.

XML Schema elements possess no knowledge of inheritance. They do, however, possess a non-hierarchical notion of interchangeability. This concept is known as a "substitution group". An element is allowed to specify another element with which it aspires to be interchangeable. When many elements aspire to be interchangeable with the same element (known as the head), this collection of elements is known as a substitution group. Note the parallel structure of the two components of a Traditional Type:

- little-t types may specify a parent
- elements may specify a "head" with which they aspire to be interchangeable

XML Schema imposes constraints on an element definition such that the element's little-t type and substitution group properties must be consistent. If element LEAF declares element HEAD as its substitution group head, then element LEAF must either possess the same little-t type as HEAD, or it must possess a child of HEAD's little-t type. An important property of element substitution groups is that they are not used within the schema definition itself (they are only declared there). The substitution group is exercised by the instance document, not by the schema definition.

Semi-object-oriented-ness

Programmers using statically typed object-oriented programming languages like C++ or Java expect to be able to use subclasses wherever a superclass is required, because the subclass is guaranteed to have at least the functionality of the superclass. (This is where it is important to remember that XML Schema is not a programming language.) An XML Schema definition does not permit this substitution.
The substitution is allowed to occur in the XML "instance" document which contains the data, not in the schema document which contains the definition. Let's take the programmer's view of XML Schema to see where it goes wrong. Say we have an element named LEAF which is a legal member of the substitution group headed by HEAD, and that these are building blocks for a larger vocabulary. Using good object-oriented design, we want to make a general-purpose BASE type which contains HEAD (among other items). We also want to make a DERIVED type, "subclassed" from BASE, which addresses a more specific concern. In the following, remember these facts:

- BASE and DERIVED are little-t types
- HEAD and LEAF are elements
- HEAD and LEAF have valid declarations which are not shown
- LEAF is a member of the HEAD substitution group

Here is the definition of BASE:

A programmer might be tempted to "subclass" BASE like this:

The important line to note is the substitution of the LEAF element for the HEAD element. This is probably illegal. The reason I say "probably" is that I cannot find anything in the W3C spec which allows (or forbids) it, but it doesn't seem to pass the schema validators I've tried. The two paragraphs devoted to XML Schema substitution groups in XML in a Nutshell specifically mention that their use is in instance documents.

GML 3 Overview

GML 3 adds significant capability to GML 2. Whereas GML 2 was able to express simple features, GML 3 is capable of expressing most (if not all) of the concepts embodied in the OGC specifications, including coordinate reference systems and all the associated components. GML 3 has seven top-level entry points, none of which depends on the other six. This was done in order to reduce the size of the schema for applications which are only concerned with specific subtopics.
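Returning to the BASE/DERIVED example above: the original schema listings did not survive on this page, but a hypothetical reconstruction consistent with the prose (all names as defined in the facts list; content models abbreviated) might look like this:

```xml
<!-- Hypothetical reconstruction; the page's original listings are lost. -->
<xs:complexType name="BASE">
  <xs:sequence>
    <xs:element ref="HEAD"/>
    <!-- ...other items... -->
  </xs:sequence>
</xs:complexType>

<!-- The tempting "subclass": the schema definition itself swaps LEAF in
     where the parent has HEAD. Substitution groups are exercised by
     instance documents, not by schema definitions, so validators tend
     to reject this. -->
<xs:complexType name="DERIVED">
  <xs:complexContent>
    <xs:restriction base="BASE">
      <xs:sequence>
        <xs:element ref="LEAF"/>
      </xs:sequence>
    </xs:restriction>
  </xs:complexContent>
</xs:complexType>
```

The legal route is for an instance document to place a LEAF element where the schema calls for HEAD; the schema itself must keep referring to HEAD.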
The organization is depicted by the following illustration, which is a screen capture of a page out of the specification: This discussion concerns only those elements in the path which begins at the coordinate reference system top-level element. The schema and the PDF document are available for download from the OGC website.

GML 3 Problem Areas

Feeding a top-level object to an XML Schema validator typically yields a suspiciously round number of errors, indicative of an artificial limit on the number of errors reported. To determine the root cause of the errors, I began at the base of the hierarchy (basicTypes.xsd) and traversed the tree towards coordinateReferenceSystem.xsd, fixing the errors as I went. I did not make it past referenceSystem.xsd. In spite of the fact that there are a lot of errors, they seemed to fall primarily into two types (referred to below as error types 1 and 2), along with the occasional "Unique Particle Attribution" (UPA) error. The UPA error can indicate the situation where the same markup is called for more than once and it would not be clear, reading the document, which rule called for it. Error types 1 and 2 are both spawned by incorrect usage of type inheritance. Frequently, the type inheritance is incorrect because it does not express a concept which makes sense. An example of this will be given in a section to follow.

One easy fix

In the navigation of the schema tree, the first error encountered is in gmlBase.xsd. It is a UPA error, and the error is in the "any" tag: a validating parser will not be able to tell which rule to use to accept a gml:_MetaData element. This is important because the "any" tag may have different constraints than the element tag. This is easy to fix because the intent of the schema authors is obvious and it's just a matter of fixing syntax: add namespace="##other" as a parameter to the "any" tag.
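The gmlBase.xsd content model in question is not reproduced on this page, but the shape of the problem and of the fix looks roughly like this (a sketch, not the schema's actual text):

```xml
<!-- Problem: a gml:_MetaData element matches both particles, so a
     validator cannot attribute it uniquely (a UPA violation). -->
<xs:choice>
  <xs:element ref="gml:_MetaData"/>
  <xs:any processContents="lax"/>
</xs:choice>

<!-- Fix: restrict the wildcard to namespaces other than the target
     namespace, making the two particles disjoint. -->
<xs:choice>
  <xs:element ref="gml:_MetaData"/>
  <xs:any namespace="##other" processContents="lax"/>
</xs:choice>
```

With namespace="##other" the wildcard can no longer match elements from the GML namespace, so each incoming element matches exactly one particle.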
Example of bad inheritance

Aside from this one fix in gmlBase.xsd, one can make it all the way up the tree to referenceSystem.xsd without incident. Unfortunately, this is where it gets hard. The problems start with AbstractReferenceSystemBaseType and then infect the entire hierarchy based on this type. Consider the parent type of AbstractReferenceSystemBaseType (DefinitionType) alongside the incorrect attempt to subclass DefinitionType.

The first thing to note is that AbstractReferenceSystemBaseType is derived by restriction. This means, among other things, that if an element in the parent is not present in the child, then that element is not allowed. It also means that if an element is required by the parent, then it must also be present in the child. The type 2 error occurs because the child does not include the "name" property. Actually, there are two elements of the parent which are not referred to in the child: description and name. Name is the only one which causes the error, because the parent requires the presence of a name but does not require the presence of a description.

To top it all off, this specification attempts to add two elements which are not present in the parent. Stop and think about that: when deriving from the parent by "restriction", let's add markup that the parent doesn't have. Further investigation reveals the following facts:

- gml:srsName is part of the substitution group with gml:name at the head
- gml:remarks is part of the substitution group with gml:description at the head

The schema author is obviously a programmer: they're trying to use the subclass in the place where the class goes. The problem is twofold: XML is not a programming language, and being a member of a substitution group is not a direct analog of being a subclass.
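A sketch of this failure mode (element and type names from the GML schemas, but content models heavily abbreviated — treat this as illustrative rather than the schemas' actual text):

```xml
<!-- Parent: gml:name is required, gml:description is optional. -->
<xs:complexType name="DefinitionType">
  <xs:sequence>
    <xs:element ref="gml:description" minOccurs="0"/>
    <xs:element ref="gml:name" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>

<!-- Child derived by restriction: it drops the required gml:name and
     adds gml:srsName and gml:remarks, which the parent never allowed.
     Both moves are illegal under restriction, even though srsName and
     remarks sit in the substitution groups of name and description. -->
<xs:complexType name="AbstractReferenceSystemBaseType">
  <xs:complexContent>
    <xs:restriction base="gml:DefinitionType">
      <xs:sequence>
        <xs:element ref="gml:remarks" minOccurs="0"/>
        <xs:element ref="gml:srsName" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:restriction>
  </xs:complexContent>
</xs:complexType>
```

Substitution-group membership licenses the swap only in instance documents; inside a restriction, the child's particles must be a strict narrowing of the parent's.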
The schema compiler is not going to let this fly, for the same reason that this problem is not easy to fix: it amounts to a replacement of the parent's markup with the child's markup. The schema author wants to change the name of the description tag to "remarks", and change the name of the "name" tag to "srsName". Since this breaks down into two steps — removal of the old tags and insertion of the new tags — it cannot logically happen under the auspices of "restriction". One cannot implement this renaming of attributes in less than two steps (a restriction followed by an extension). Even if one were to do this two-step procedure, the parent will not permit the child to eliminate the "name" element.

Much the same mechanism is responsible for the type 1 errors; the difference between the two seems to be whether the child is eliminating a required parent element or not. Presumably, an error analogous to this example is responsible for all the type 1 and type 2 errors regurgitated in the validation process.

How to proceed?

GML, as an XML Schema application, attempts to make extensive use of object-oriented concepts like re-use by inheritance. Unfortunately, XML Schema is only semi-object-oriented: conforming instance documents can make use of the inheritance tree, but the schema definition itself cannot. In particular, one cannot "override" parent elements with child elements in the schema definition. In general, between an analysis of the schema, reference to the specification, and some conservative guesswork, it should be possible to divine the intent of the specification authors. In the example provided, I posit that the author intends to rename "description" to "remarks" and "name" to "srsName". In the instances where it is not possible for a human to intuit the intent, reasonable decisions could be made about how to represent the data. It is certainly possible to make a working XML Schema for GML, but who should do so remains an open question.
The OGC supports open standards and releases them under an open-source-like model, but it does not have an open development model. This may preclude the user community from fixing the schema and submitting it as a "patch" to the OGC. Additionally, the rumor is that 3.1.1 is going to be released "real soon now" (note these rumors started in Nov. 2004) and the scuttlebutt is that the 3.1.1 schemas actually validate cleanly. There are a number of options for going forward.
http://docs.codehaus.org/display/GEOTOOLS/Critical+Errors+in+OGC's+GML+3.1.0?focusedCommentId=20572
Basics
- Setup
- Your First C# Program
- Types and Variables
- Flow Control Statements
- Operators
- Strings
- Classes, Objects, Interface and Main Methods
- Fields and Properties
- Scope and Accessibility Modifiers
- Handling Exceptions

Intermediate
- Generics
- Events, Delegates and Lambda Expressions
- Collection Framework
- LINQ

Advanced
- Asynchronous Programming (Async and Await)
- Task Parallel Library

What’s New in C# 6
- Null-Conditional Operator
- Auto-Property Initializers
- Nameof Expressions
- Expression Bodied Functions and Properties
- Other Features

Object-Oriented Principles (OOP)
- Encapsulation
- Abstraction
- Inheritance
- Polymorphism

SOLID Principles
- Single Responsibility Principle
- Open Closed Principle
- Liskov Substitution Principle
- Interface Segregation Principle
- Dependency Inversion Principle

C# Best Practices, Design Patterns & Test Driven Development (TDD)

Setup

LinqPad is a .NET scratchpad to quickly test your C# code snippets. The standard edition is free and a perfect tool for beginners to execute language statements, expressions and programs. Alternatively, you could download Visual Studio Community 2015, an extensible IDE used by most professionals for creating enterprise applications.

Your First C# Program

//this is a single-line comment
/** This is a multiline comment;
    the compiler ignores any code inside comment blocks. **/

//System is a namespace, part of the standard .NET Framework Class Library
using System;

// a namespace defines the scope of related objects, grouping them into packages
namespace Learning.CSharp
{
    // name of the class, should be the same as the .cs file
    public class Program
    {
        //entry point method for console applications
        public static void Main()
        {
            //print a line on the console
            Console.WriteLine("Hello, World!");

            //Reads the next line of characters from the standard input stream.
            //Most commonly used to pause program execution before clearing the console.
            Console.ReadLine();
        }
    }
}

Every C# console application must have a Main method, which is the entry point of the program. Edit HelloWorld in .NET Fiddle, a tool inspired by JSFiddle where you can alter the code snippets and check the output for yourself. Note that it is meant for sharing and testing code snippets, not for developing applications. If you are using Visual Studio, follow this tutorial to create a console application and understand your first C# program.

Types and Variables

C# is a strongly typed language. Every variable has a type, and every expression or statement evaluates to a value. There are two kinds of types in C#:
- Value types
- Reference types

Value types: Variables of value types directly contain their values. Assigning one value-type variable to another copies the contained value.

int a = 10;
int b = 20;
a = b;
Console.WriteLine(a); //prints 20
Console.WriteLine(b); //prints 20

Note that in some dynamic languages this could behave differently, but in C# this is always a value copy. When a value type is created, a single space in memory is allocated, most likely on the stack, which is a "LIFO" (last in, first out) data structure. The stack has size limits, and its memory operations are efficient. A few examples of built-in value types are int, float, double, decimal and char (string is also a built-in type, but it is a reference type). For a complete list of all built-in data types see here.

Reference types: Variables of reference types store references to their objects: the variable holds the address of the data (the reference itself typically lives on the stack), while the actual data is stored on the heap. Assigning one reference-type variable to another doesn't copy the data; instead it creates a second copy of the reference, which points to the same location on the heap. On the heap, objects are allocated and deallocated in random order, which is why this requires the overhead of memory management and garbage collection.
Unless you are writing unsafe code or dealing with unmanaged code, you don't need to worry about the lifetime of your memory locations. The .NET compiler and CLR will take care of this, but it's still good to keep this in mind in order to optimize the performance of your applications.

Flow Control Statements

- If-else statement: Edit in .NET Fiddle

int myScore = 700;
if (myScore == 700)
{
    Console.WriteLine("I get printed on the console");
}
else if (myScore > 10)
{
    Console.WriteLine("I don't");
}
else
{
    Console.WriteLine("I also don't");
}

/** Ternary operator
A simple if/else can also be written as follows:
<condition> ? <true> : <false> **/
int myNumber = 10;
string isTrue = myNumber == 10 ? "Yes" : "No";

- Switch statement: Edit in .NET Fiddle

using System;

public class Program
{
    public static void Main()
    {
        int myNumber = 0;
        switch (myNumber)
        {
            // A switch section can have more than one case label.
            case 0:
            case 1:
            {
                Console.WriteLine("Case 0 or 1");
                break;
            }
            // Most switch sections contain a jump statement, such as break, goto, or return.
            case 2:
                Console.WriteLine("Case 2");
                break;
            // 7 - 4 in the following line evaluates to 3.
            case 7 - 4:
                Console.WriteLine("Case 3");
                break;
            // If the value of myNumber is not 0, 1, 2, or 3,
            // the default case is executed.
            default:
                Console.WriteLine("Default case. This is also optional");
                break; // could also throw new Exception() instead
        }
    }
}

- For loop:

for (int i = 0; i < 10; i++)
{
    Console.WriteLine(i); //prints 0-9
}
Console.WriteLine(Environment.NewLine);

for (int i = 0; i <= 10; i++)
{
    Console.WriteLine(i); //prints 0-10
}
Console.WriteLine(Environment.NewLine);

for (int i = 10 - 1; i >= 0; i--) //decrement loop
{
    Console.WriteLine(i); //prints 9-0
}
Console.WriteLine(Environment.NewLine);

for (; ; )
{
    // All of the expressions are optional.
    // This statement creates an infinite loop.
}

// Continue the while-loop until index is equal to 10.
int i = 0;
while (i < 10)
{
    Console.Write("While statement ");
    Console.WriteLine(i); // Write the index to the screen.
    i++; // Increment the variable.
}

int number = 0;
// do the work first, then repeat while the condition holds,
// i.e. the loop stops once number exceeds 4
do
{
    Console.WriteLine(number); //prints the values 0-4
    number++; // Add one to number.
} while (number <= 4);
https://forum.freecodecamp.org/t/the-c-programming-language-a-full-guide-with-examples/16157
CC-MAIN-2022-33
refinedweb
937
57.87
cuz I wasn't here genius. The only reason I decided to post was I was reading what looked like an interesting thread and then I saw you insult everyone who was nice enough to try and help you. No,... Random numbers are a ***** to get right, and you can't expect everyone to use the best algorithm. rand is standard and every implementation meets the requirements, but that doesn't mean they're all... In C++ the typedef is implicit, you don't have to worry 'bout it. Someone did post it, if you're not smart enough to understand the code you've been given, that's not our problem. Cela's code does exactly what you asked, it has a virtual base function and a derived... How the hell should I know? All I know is that of several different operating systems I've used, all of them run faster with buffered input than with character by character. Or to spell it out since... That's between you and your system, buffered input is almost always faster than character by character input. That was my point entirely, your way printed 2 with a 3 line file that didn't end in a... C++ really isn't any better, but you can be reasonably sure that you'll use most of C for any given program where with C++ you won't even use 15% of the language even for huge programs. You can know...

std::ifstream fin("file.txt", std::ios::in);
int lines = 0;
char c;
do {
    fin.get(c);
    if (c == '\n')
        ++lines;

Because you're taking without giving anything back. We want to know that you're trying, otherwise it's a waste of time helping you out and we have better things to do. Yea, but you can't quantify exactly what the difference is, so twice as much is as good as any for a "Duh" question like that. A double precision variable has twice the precision of a single precision variable. Isn't that kind of obvious?
Dynamic data structures like linked lists and binary trees are best made with pointers. You can use pointers to pass big objects around without worrying about copying them. You can use pointers to... Right, you overload << for ooint to act on an ostream

ostream& operator<<( ostream& os, const ooint& stuff )
{
    //print stuff
}

then you can do stuff like cout << a << endl because cout... do you have an overloaded = like this?

ooint operator=( int rhs )

If you do then it's all good Works good for me

#include <iostream>
#include <list>
using namespace std;

template<class T>
void PrintList( const list<T>& l )

You don't, int + int is built into the compiler, so it does everything for you as long as ooint has an overloaded = operator that takes an int. in C++ a structure and class are the same except a class is private by default and a structure is public by default. There's no comparison with a C structure because it can't have methods, just... Look dude, you have to overload an operator for every type you want to use it with, if you want to do

ooint b = a + 1;

then you've got to define

ooint operator + ( const ooint & lhs, int rhs )

If... A hash file? That's weird, but you can try using fseek, I'm not sure, but this works for me.

#include <stdio.h>

int main( )
{
    FILE *fp;
    char rec[100]; // Record size is 100
    ...

Your compiler is protecting you from yourself. Just because it works doesn't mean it's right. void main() is wrong, int main() is right. That's the difference. Even if it seems like its working... Question - When writing an operating system, isn't it required that stuff like the boot loader has to be written in assembly? Or can it be written in optimized C and still work? Cheers! :) No, you can't. If this scares you then send XP to the bit bucket and go get Linux. Cheers! :) I guess it's good to learn if you wanna speed up your programs, but from what I've seen, all inline assembly does is give you quick access to the hardware if your operating system allows it.
I think... I don't have that book, but if you send me the questions then we can still do them together if you want. I don't usually have much better to do with my spare time except learn other languages and... Maybe the cause of so many non-posters is because they registered and then either didn't choose to come back or they had no way of deregistering for whatever reason. How about a time limit for new...
https://cboard.cprogramming.com/search.php?s=798342dae941e64872072e2ac023144b&searchid=3606423
CC-MAIN-2020-16
refinedweb
838
80.21
Active IQ Unified Manager Discussions Hello. I would like to clarify the functionality provided by OCI in reporting the occupancy of EMC ECS. Right now the only data available to us in OCI is the rawCapacityMB and usedRawCapacityMB for the entire storage system. I've not managed to find any smaller-granularity entities like 'bucket' or 'namespace'. There are no volumes and no pools that belong to ECS in dwh_capacity.volume_dimension and dwh_capacity.storage_pool_dimension. So is there a way to see more details of the ECS space usage? Ideally we would like to get the data of used space per 'bucket'. Thank you in advance Solved! See The Solution I believe the buckets manifest themselves in the chargeback_fact. My peer Ketsia Pha is doing a webinar for OCI customers on Dec 19th on ECS and how it manifests itself in current OCI versions. I think if you are using other OCI cloud schema data sources, you may find this useful, as they will all tend to have their data flow into the DWH in the same fashion View solution in original post We see this with most of our newer "data sources", as the addition to what's being injected into OCI is a moving target. The inclusion of capacity, configuration, and performance metrics for the ECS data source will enable you to capacity-plan and monitor ECS devices through OCI. This request is currently being monitored via IFR-3553 for the ECS data source. I'd reach out to your SE and request that your company be included in this request, and please be sure to include any other parameters you would like to see as well. Scott McGowan
https://community.netapp.com/t5/Active-IQ-Unified-Manager-Discussions/OnCommand-Insight-support-for-EMC-ECS/m-p/145180
CC-MAIN-2022-05
refinedweb
302
54.93
Re: Overriding Text No Records Found You can also do it on page level or component level.. regards Adriano dos Santos Fernandes wrote: HITECH79 wrote: Hello, how can I override/modify the text No Records Found If you are talking about the DataTable component, write a YourApplication.properties on the same package as Re: Setting a relevant value for radio buttons without using RadioChoice Hmmm but why not just do it with a LDM? I'm a bit puzzled, here's what I do:

RadioGroup<EventType> eventTypeRadioGroup = new RadioGroup<EventType>(eventType);
eventTypeRadioGroup.setLabel(new Model<String>("Type of Event"));
ListView<EventType> eventTypesListView

Semigenerating Selenium cases with WicketTester? Hi Guys I was wondering if any of you have tried to do some semi-auto generation with WicketTester for Selenium? I mean create a wicket tester that runs a scenario and at the same time runs a Selenium RC and checks if the results are the same somehow? I'm not sure if it gives any advantage Re: Remove bulletpoints from messagetext in feedbackpanel Yeah, and if you want you can also put on a special icon[1], like a warning triangle etc for the separate states a feedback message can be.. [1] Adriano dos Santos Fernandes wrote: HITECH79 wrote: Hello, how can I remove the bulletpoints from Re: Ajax response not completed Beats me, seems like something's wrong, maybe a bug..? I'd create a quickstart (really easy with maven, ) and attach it to a jira issue.. If the code is somewhat working, and the only annoying thing is the mouse icon, you could try to set the mouse Re: Autocomplete text concatenation This is also something you can do with object autocomplete from Wicketstuff francisco treacy wrote: if i understand correctly you need a multi autocompleter. do you mean something like this?
(i have integrated it with Re: How to donate to Wicket Project James Carman wrote: Of course, the ASF would always love donations: Also, you can buy from the Wicket store and part of the proceeds will help the ASF (I believe that's how it's set up): The coffee mug Re: Setting a relevant value for radio buttons without using RadioChoice of friendly people and trying every option I can think of to work this out Re: Setting a relevant value for radio buttons without using RadioChoice the optional choicerenderer.. Nino Saturnino Martinez Vazquez Wael wrote: Re: Setting a relevant value for radio buttons without using RadioChoice Hi Michael I believe that this is not what Archie asked about, he wanted to place database id's in the value of the radios.. Dont know why he wanted to though... I might have gotten it wrongly though.. regards Nino Michael O'Cleirigh wrote: Hi ArchieC, The way RadioGroup works is that it Re: Openid integration? David Leangen wrote: Hmm, I do actually have something working, which seems to be really simple. Ok, good for you! Yup, I plan to push it back to wicketstuff once I've figured out the last problems... Using openid4java, my only problem are that I cant seem to get any openid Want a simple way to put in conditional css for IE..? Then check this approach out : Easy and simple - The Wicket way :) -- -Wicket for love Nino Martinez Wael Java Specialist @ Jayway DK +45 2936 7684 Re: Want a simple way to put in conditional css for IE..? Hehe, I just got too lazy to fill in two medias for the knowledge (mail and blog).. Also doing blog's about wicket is good PR to get interested in Wicket.. And it would be a shame if someone wanted to keep track of this and only checked the list.. 
Cool post btw :) I'll be watching your blog:) Re: [OT] wicket users around the world Work in Denmark pimping wicket at every opportunity i get, lived all my life in Denmark, so half from Denmark and half from Spain :) francisco treacy wrote: to know a little bit more of our great (and vast) community, i was just wondering if you're keen on sharing where you come from and/or Re: Openid integration? openid4java, my only problem are that I cant seem to get any openid providers to give me the requested attributes, like email and name. How did you solve this? Cheers, Dave -Original Message- From: Nino Saturnino Martinez Vazquez Wael [mailto:[EMAIL PROTECTED] Sent: 2 December 2008 04:31 Re: junit testing wicket with spring and hibernate Hi Per-Olof You should checkout Wicket Iolite( or WicketTopia( Both uses this technique and has a snappy archetype for a swift start. The latter has builtin profiles for misc conf-files Re: JRPdfResource File not found error when using IE6 I've had huge problems with this aswell, but it's long time since.. I think it where on wicket 1.2, where firefox would pop a dl and IE would open the doc/xls inline.. I guess it's not exactly the same... Ernesto Reinaldo Barreiro wrote: I have found myself some weird behaviors with PDFs and Re: Child page with no html Scott, Think inheritance :) Just write a super which has abstract methods that returns components for c1..c4() and thats it.. no need for trickery with IMarkupResourceStreamProvider ... Should I elaborate more? You could also take a look at the wicketstuff accordion thing, it does Re: Child page with no html to that list. Should work exactly as you described. What trickery is needed? I guess I miss that part. On Wed, Dec 10, 2008 at 5:20 PM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Scott, Think inheritance :) Just write a super which has abstract methods that returns Re: Child page with no html Ahh, yeah there is something there.. 
And yes it's a very good idea to expose the id to the method like getComponents(String id) I think this is also the way I did with the accordion in stuff. John Krasnay wrote: Careful! ChildPage.getComponents() is invoked before ChildPage's constructor. Re: Wicket integration with good charts api I think it's small enough for minis.. But it's Igors baby, try to ask him? Maarten Bosteels wrote: Hello Ryan, I have just added some more code to the wiki page, and a working quickstart project. My OpenFlashChart implementation [Announce] new stuff in wicketstuff openlayers.. Hi Guys I've updated the wicketstuff openlayers and put in a few new things. You can read more about it in my blog : And btw, please say if theres some feature you'd like in it. I might be able to Re: authorization and wicket:link No.. I had the same question some weeks ago.. Just create some markup containers for it instead.. miro wrote: I am using wicket:link ,can I tell wicket to authorize the user if he has permissions to that page in wicket:link tag ? -- -Wicket for love Nino Martinez Wael Java Re: SpringOne America 2008 in Hollywood, FL Hehe cool, I wonder if Martijn ever got around to post the picture he took of me, wearing my wicket merchandise. The Cap and T-shirt at his presentation :) shetc wrote: Well, here I am at the SpringOne Conference, where the theme is Weapons Re: Clearing Cache after Logout Hmm I'd do what you do and set a custom expired page, with perhaps a login form on it.. Voila :) vishy_sb wrote: Hi all, I have a logout link that takes me to the login (Home) page. However when I click the back button it brings me back to the same page that I was on (Although I am not able pimping Openlayers in wicket stuff core... Hi Guys I'll be putting in direct JTS ( support in Openlayers contrib. Any objections to this? This means that the API will break, since I'll be using JTS points instead of the homegrown classes as currently. 
Also I plan to support drawing shapes on Re: Openid integration? Looks simple enough.. I guess my only fear are that I then will have it converted into something that still wont let me pull the two required values from the openid provider... Jan Kriesten wrote: Hi Nino, Hmm Im using auth roles now.. Are there an way to integrate the two..? that Re: Openid integration? Yup I remember the page.. Michael Sparer wrote: Hmm Im using auth roles now.. Are there an way to integrate the two..? hey nino, take a look at - it's a bit older but I think it might still work. as acegi is now called Re: Openid integration? , doesn't it? :-) Nino.Martinez wrote: Hmm just saw this : Nino Saturnino Martinez Vazquez Wael wrote: Hi Guys Have any of you tried to do a openid integration ? -- -Wicket for love Nino Martinez Wael Java Specialist @ Jayway DK http Re: Openid integration? Hmm, i'll dig into it.. Thanks.. Jan Kriesten wrote: Hi Nino, I have something working only partially though, cant get email and name attribute back from the openid provider properly.. Seems to work with openid.org, but not claimid.com or myopenid.com why not using spring-security Re: Openid integration? Hmm Im using auth roles now.. Are there an way to integrate the two..? Another thing though, I need to either use sreg or AX to pull some values (only email and name) to my system is that possible with the spring security thing( I know this should probably go to the spring forum)? Jan Re: Openid integration? Hmm just saw this : Nino Saturnino Martinez Vazquez Wael wrote: Hi Guys Have any of you tried to do a openid integration ? -- -Wicket for love Nino Martinez Wael Java Specialist @ Jayway DK +45 2936 7684 my new site :) Hi guys I've been puzzling with a new site of mine. It's a community site, still a bit in development. It evolves around events, it has a nice overview map of events, but also more traditional search for events. 
The idea are that the users enter their public events like user groups etc, so Re: ImageButton picture.x and picture.y questions just ask.. Tim Nino Saturnino Martinez Vazquez Wael wrote: maybe james patch can help you : ? Otherwise it should be ImageMap already there.. Nino Saturnino Martinez Vazquez Wael wrote: Hi Tim You should get Re: Wicket and CoC COOL!!! :) Jeremy Thomerson wrote: You can do exactly what you asked in less than 40 lines of code - and not be bound to the class name in the HTML (which you shouldn't do). Here's how: IN YOUR APPLICATION CLASS: @Override protected void init() { super.init(); Re: Why does org.apache.wicket.authorization revolve around string tokens? A really good question. I've heard that it's because that it is a demo of how you could implement it. I share the exact same thoughts as you do about this. Casper Bang wrote: What attracts me to Wicket is how it tries to do as much in type-safe Java code as possible, so I was a bit surprised Re: [VOTE] End of Life wicket-contrib-gmap? HI Jeremy [ X ] - YES, please create a branch in the Wicket Stuff repo just for I were handed over the code some time ago as I've participated on the project, but I think gmap2 are more developed and stable.. And I have no intention of having the two projects compete im thinking of gmap2, so Re: [VOTE] Consistent naming for Wicket Stuff projects [ X ] - YES - I would like consistent naming Jeremy Thomerson wrote: I am beginning the WS reorg as noted in previous emails. You can monitor progress here: As we move projects into the wicketstuff-core, I Re: ImageButton picture.x and picture.y Hi Tim You should get a grasp on models. But isnt it an imagearea (cant remember the exact name) or something you want? Image button is just a button which has a image... Tim Squires wrote: Hi, I'm trying to retrieve the x and y coords from a user click on an ImageButton. Can anyone tell Re: ImageButton picture.x and picture.y maybe james patch can help you : ? 
Otherwise it should be ImageMap already there.. Nino Saturnino Martinez Vazquez Wael wrote: Hi Tim You should get a grasp on models. But isnt it an imagearea (cant remember the exact Re: Using CompoundPropertyModel with FormComponentPanel Hi Ned you can call bind on the compound property model.. labelText = new Label(labelText, CPM.bind(propertyname)); You can also do this for your property models btw... Ned Collyer wrote: I'm trying to throw together some components for easily creating accessible forms. I'm a fair bit along Re: Using CompoundPropertyModel with FormComponentPanel ahhh, didnt catch that you were doing that.. Ned Collyer wrote: I'm going to be sourcing the labelText from a properties file relatve to the class of the modelObject (in this case it will be the User - eg, user.properties). If I use the binding, then I need to have scope to the CPM in java Re: Wicket and CoC Theres also wickettopia... jWeekend wrote: Richardo, If you are serious about looking into RADifying extension to Wicket, here are a couple of resources that may be interesting: Al Maw' s excellent Re: SOLUTION:Hibernate Lazy initialazation issues and multi page wizard! Carman wrote: So, what's wrong with using shadow models and letting them eventually write into the real model (which is a LDM) at the very end? On Tue, Nov 25, 2008 at 2:25 AM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Hi Guys I've been having a little trouble Re: SOLUTION:Hibernate Lazy initialazation issues and multi page wizard! models helps me keep track of what's going on better. I do use PropertyModels, though. I just like to explicitly know the base that I'm dealing with when I'm using property models. On Tue, Nov 25, 2008 at 6:50 AM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: I guess not much Re: Running wicketstuff examples yup, I believe that most of the projects are setup like this.. 
Jeremy Thomerson wrote: You can just do mvn jetty:run from that folder and it will run (just verified). You will need to do a mvn clean install in the wicket-contrib-accordians folder first. On Sun, Nov 23, 2008 at 1:06 PM, Eyal Re: [VOTE] Organizing Wicket Stuff / Regular Release Schedule? Argh that should have been = igor did a release prior to the 1.4 initial release, called 1.3 or something.. Nino Saturnino Martinez Vazquez Wael wrote: [ X ] - YES - I would like to see at least the most used Wicket Stuff projects structured so that they mirror Wicket, and a release Re: [VOTE] Organizing Wicket Stuff / Regular Release Schedule? [ X ] - YES - I would like to see at least the most used Wicket Stuff projects structured so that they mirror Wicket, and a release is produced for each But I also believe this is how it are today sort of anyway not so strict, but cool with me.. Igor did a branch when the initial release of Re: wicket, mootips and a NoSuchMethodError Hi Ricard As is now, mootips are compiled against wicket 1.4 and thus incompatible with the 1.3 branch. So you are completely correct. But it should be somewhat easy to make it compile against 1.3.. regards Nino rvieregge wrote: A newbie needs some help here... I've an existing SOLUTION:Hibernate Lazy initialazation issues and multi page wizard! Hi Guys I've been having a little trouble with hibernate and a multipage wizard, I finally cracked the nut. And heres my solution: In the link that refers to the wizard use a loadable detachable model.. Onclick you initialize all proper collections and CLONE the object, after the wizard are Re: mount outside init You need another of the mounting strategies.. Theres a overview in WIA, or probablly on the wiki.. Or perhaps you are already trying with the other strategies? Mathias P.W Nilsson wrote: I can't get this to work. IF I do /customer1 then wicket tries to find a mount with this. -- Re: Compoundpropertymodel with shadow map? 
Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Hi Im trying todo a compoundpropertymodel which does not change original values in the original model. I need this since I am updating some stuff in a wizard but I first want to commit when the user confirms in the end of the wizard Re: Compoundpropertymodel with shadow map? , and also is not my mother... f(t) On Thu, Nov 20, 2008 at 6:26 PM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: I love simple and simple is good. But this approach has issues with hibernate if your hibernate sessions are per request and your shadowmodel lives in multiple Re: Compoundpropertymodel with shadow map? are eclipselink?) Input on these things are very welcome... regards Nino Francisco Diaz Trepat - gmail wrote: why? simple is good. doesn't need to be complex. what part you dislike the most? f(t) On Thu, Nov 20, 2008 at 2:29 AM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: BTW Re: Compoundpropertymodel with shadow map? BTW this is a flawed approch.. We need something a little more intelligent.. I'll return on the subject.. Nino Saturnino Martinez Vazquez Wael wrote: heres the raw and completely untested version of it. probably with a whole bunch of issues...: package zeuzgroup.web.model; import Re: overLIB Integration... Hmm, we do actually have both prototip and mootip integration as stuff projects, and mootip supports ajax retrival of tips.. But the more the merrier I guess? Swinsburg, Stephen wrote: I used overLIB years ago and its pretty basic. What about something like jQuery's cluetip Re: Usage of getString with parameters (model?) Igor wrote something about it in a thread with validators.. But heres my cut: add(new Label(confirmation.content, new StringResourceModel( confirmation.content, this, eventModel))); and in property file: confirmation.content=You are about to create event Compoundpropertymodel with shadow map? 
Hi Im trying todo a compoundpropertymodel which does not change original values in the original model. I need this since I am updating some stuff in a wizard but I first want to commit when the user confirms in the end of the wizard, and if the model are changed directly the transaction are Re: Compoundpropertymodel with shadow map? /wicketopia/src/main/java/org/wicketopia/model/proxy/ProxyModelManager.java On Tue, Nov 18, 2008 at 7:43 AM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Hi Im trying todo a compoundpropertymodel which does not change original values in the original model. I need this since I am Re: Compoundpropertymodel with shadow map? (); } } } // IComponentAssignedModel / IWrapModel Francisco Diaz Trepat - gmail wrote: Nice, I was up to something similar. On Tue, Nov 18, 2008 at 9:43 AM, Nino Saturnino Martinez Vazquez Wael [EMAIL PROTECTED] wrote: Hi Im trying todo a compoundpropertymodel which Re: ajax busy indicator never stops in IE Miro Putting the system out print will only confirm that the ajax call are made from server side, your problem are clientside on IE, somethings not right with ie.. Could you prepend and append alert('before') and after alert('before') to the ajax target..? and tell if both calls are made..? Re: Unable to load Wicket app in hosting provider Hi ' If I were you I would pick up the wicket in action book, or follow a tutorial... These are very basic questions... Wicket has a application class which specify the home folder with a method, you would override that and return your index.class wicket will then use that to display as Re: What is best practice for overriding settings in ModalWindow modal.css file? It's not something wicket related, you can override all css classes by providing a more specific one... 
Usage Event Logging in Windows SharePoint Services 3.0

Summary

Supplementing Usage Event Logging with IIS Logs

The following table shows the name and data type of each field in the structure and describes the information that is contained in each field of an entry.

The following code excerpt shows how to traverse each entry in a log and return site usage information.

unsigned long cbEntrySize = 0;
for(pCur = pBase; pCur < pEnd; pCur += cbEntrySize)
{
    pLFE = (VLogFileEntry *)pCur;
    pszSiteGuid = pCur + sizeof(VLogFileEntry) + 2;
    pszTS = pszSiteGuid + cbSiteGuid + 1;
    pszSite = pszTS + cbTimeStamp + 1;
    *(pszSite + pLFE->cbSiteUrl) = '\0';
    pszWeb = pszSite + pLFE->cbSiteUrl + 1;
    pszDoc = pszWeb + pLFE->cbWeb + 1;
    pszUser = pszDoc + pLFE->cbDoc + 1;

After casting the current entry as a structure, the example proceeds to gather the site GUID and the remaining fields.

To test the example, open Microsoft Visual Studio 2005 on the server that contains the log files and create a Microsoft Visual C++ console application. Type a name for the application, and then click OK. In Solution Explorer, double-click the Project_Name.cpp file that is produced and replace the code that Visual Studio includes by default with the following code.

#include "stdafx.h"
#include "windows.h"
#include "assert.h"
#include <stdio.h>
", argv[0]); return(1); }
char *szFile = argv[1];
char *szCsvFile = argv[2];
char *szOptionalField1 = argc > 3 ? argv[3] : NULL;
char *szOptionalField2 = argc > 4 ? argv[4] : NULL;
char *szGuid = NULL;
char *szReplace = NULL;
/* Format of each .csv line. Include optional fields passed as command-line arguments, if any.*/
char *szFormat = "%s,%s,%s,%s,%s,%s,%s,%s\r\n";
if (NULL == szOptionalField1) szFormat += 3;
if (NULL == szOptionalField2) szFormat += 3;
FILE *csvFile = open file %s (perhaps because it doesn't exist)", szFile); return (1); }
DWORD dwFileSize, dwFileSizeHigh = 0;
dwFileSize = GetFileSize(hF, &dwFileSizeHigh);
/* We should never encounter a file larger than about 1 GB.
*/ if (dwFileSizeHigh || dwFileSize > 1000000000) { printf(" File too large %s", szFile); CloseHandle(hF); return (1); } if (dwFileSize == 0) { printf(" Skipping empty file %s", szFile); CloseHandle(hF); return (1); } hFM = CreateFileMapping(hF, NULL, PAGE_WRITECOPY, 0, 0, NULL); if ((NULL == hFM) || (NULL == (pBase = (char *)MapViewOfFile(hFM, FILE_MAP_COPY, 0, 0, 0)))) { printf(" Can't map file %s", szFile); if (hFM) CloseHandle(hFM); CloseHandle(hF); return (1); } pEnd = pBase + dwFileSize - sizeof(VLogFileEntry); char *pCur, *pszSite, *pszSiteGuid, *pszTS; char *pszWeb, *pszDoc, *pszUser; VLogFileEntry *pLFE; unsigned long cItemsProcessed = 0; unsigned long cbEntrySize = 0; const unsigned long maxCbEntrySize =;; } // Skip 2 bytes for \r\n. pszSiteGuid = pCur + sizeof(VLogFileEntry) + 2; // Skip 1 byte for the NULL separator. pszTS = pszSiteGuid + cbSiteGuid + 1; pszSite = pszTS + cbTimeStamp + 1; // Stop at the end of the site URL. *(pszSite + pLFE->cbSiteUrl) = '\0'; // Skip 1 byte for the NULL separator. pszWeb = pszSite + pLFE->cbSiteUrl + 1; pszDoc = pszWeb + pLFE->cbWeb + 1; pszUser = pszDoc + pLFE->cbDoc + 1; /* Output is in this format: timestamp, site guid, siteUrl, subsite, document, user, optional1, optional2*/ fprintf(csvFile, szFormat, pszTS, pszSiteGuid, pszSite, pszWeb, pszDoc, pszUser, szOptionalField1, szOptionalField2); } cleanup: UnmapViewOfFile(pBase); CloseHandle(hFM); CloseHandle(hF); fclose(csvFile); return fError; } On the Build menu, click Build Solution. At a command prompt, navigate to the folder:
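The pointer arithmetic above is specific to C++, but the underlying record layout — a fixed-size header carrying string lengths, followed by NUL-separated variable-length strings — can be walked in any language. The sketch below is a rough, hypothetical illustration: the two-field record format is invented for the example and is not the actual Windows SharePoint Services on-disk log format.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LogWalk {
    // Build a toy buffer: a 2-byte header [cbDoc][cbUser], then a
    // NUL-terminated doc string and a NUL-terminated user string.
    static ByteBuffer toyLog() {
        byte[] doc = "default.aspx".getBytes(StandardCharsets.US_ASCII);
        byte[] user = "DOMAIN\\alice".getBytes(StandardCharsets.US_ASCII);
        ByteBuffer buf = ByteBuffer.allocate(2 + doc.length + 1 + user.length + 1);
        buf.put((byte) doc.length).put((byte) user.length);
        buf.put(doc).put((byte) 0);
        buf.put(user).put((byte) 0);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = toyLog();
        while (buf.remaining() > 2) {   // same idea as pCur < pEnd
            int cbDoc = buf.get();      // lengths come from the fixed header,
            int cbUser = buf.get();     // like pLFE->cbDoc in the C++ sample
            byte[] doc = new byte[cbDoc];
            buf.get(doc);
            buf.get();                  // skip the NUL separator
            byte[] user = new byte[cbUser];
            buf.get(user);
            buf.get();                  // skip the NUL separator
            System.out.println(new String(doc, StandardCharsets.US_ASCII) + ","
                    + new String(user, StandardCharsets.US_ASCII));
        }
    }
}
```

The key idea matches the C++ sample: read the fixed header first, then use the lengths it carries to step past each variable-length field and its separator until the end of the buffer is reached.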
https://msdn.microsoft.com/en-us/library/bb814929.aspx
Boston Python workshop/Saturday/ColorWall

Project

Program graphical effects for a ColorWall using the Tkinter GUI toolkit.

Goals

- Have fun experimenting with and creating graphical effects.
- Practice using functions and classes.
- Get experience with graphics programming using the Tkinter GUI toolkit.
- Practice reading other people's code.

Project setup

Download and un-archive the ColorWall project skeleton code. Un-archiving will produce a ColorWall folder containing several Python files, including: run.py, effects.py, and advanced_effects.py.

Test your setup. From a command prompt, navigate to the ColorWall directory and run

python run.py -a

You should see a window pop up and start cycling through colorful effects. If you don't, let a staff member know so you can debug this together.

Project steps

1. Learn about HSV values

Run the ColorWall effects again with

python run.py -a

The names of the effects are printed to the terminal as they are run. Pay particular attention to the first 4 effects:

- SolidColorTest
- HueTest
- SaturationTest
- ValueTest

In all of these effects, a tuple hsv containing the hue, saturation, and value describing a color is passed to self.wall.set_pixel to change the color of a single pixel on the wall. What are the differences between these tests? Given these differences and how they are expressed visually, how does varying hue, saturation, or value change a color?

Check your understanding: what saturation and value would you guess firetruck red has?

2. Examine Effect and the interface its subclasses provide

All of the effects inherit from the Effect class. Examine this class and its __init__ and run methods. What is the purpose of the __init__ method? What is the purpose of the run method?
Open up run.py and look at this chunk of code at the bottom of the file:

for effect in effects_to_run:
    new_effect = effect(wall)
    print new_effect.__class__.__name__
    new_effect.run()

effects.py exports an Effects list at the bottom of the file. run.py goes through every effect in that list, creates a new instance of the effect, and invokes its run method.

Check your understanding: what would happen if you added an effect to the Effects list that didn't implement a run method? (Try it!)

3. Examine the nested for loop in SolidColorTest

for x in range(self.wall.width):
    for y in range(self.wall.height):
        self.wall.set_pixel(x, y, hsv)

This code loops over every pixel in the ColorWall, setting the pixel to a particular hsv value. After that for loop is over, self.wall.draw() updates the display.

Check your understanding: what would happen if you moved the self.wall.draw() to inside the inner for loop, just under self.wall.set_pixel(x, y, hsv) in SaturationTest? (Try it!)

Tip: you can run individual tests by passing their names as command line arguments to run.py. For example, if you only wanted to run SaturationTest, you could:

python run.py SaturationTest

4. Implement a new effect called RainbowTest

It should run for 5 seconds, cycling through the colors in the rainbow, pausing for a moment at each color. Remember to add your effect to the Effects list at the bottom of effects.py! Test your new effect with

python run.py RainbowTest

5. Play with the randomness in Twinkle

Walk through Twinkle. Find explanations of the random.randint and random.uniform functions in the online documentation. Experiment with these functions at a Python prompt:

import random
random.randint(0, 1)
random.randint(0, 5)
random.uniform(-1, 1)

Then experiment with the numbers that make up the hue and re-run the effect:

python run.py Twinkle

Challenge: make Twinkle twinkle with shades of red.

6. Implement a new effect that involves randomness!
Remember to add your effect to the Effects list at the bottom of effects.py.

Bonus exercises

Checkerboard

Find and change the colors used in the Checkerboards effect, and re-run the effect:

python run.py Checkerboards

Then change the line

if (x + y + i) % 2 == 0:

to

if (x + y + i) % 3 == 0:

re-run the effect, and see what changed. What other patterns can you create by tweaking the math for this effect?

Matrix

Find and change the color of the columns in the Matrix effect, and re-run the effect:

python run.py Matrix

Each column that we see on the wall corresponds to a Column object. Add some randomness to the color used by each column (the variable whose value you changed above) using the random.random function, re-run the effect, and see what happens.

Write more of your own effects!

You have color, time, randomness, letters, and more at your disposal. Go nuts!
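As a side note for readers coming from Java rather than Python (this is not part of the workshop code): the HSV intuition from step 1 can be double-checked with the JDK's built-in converter. AWT calls the model HSB, but "brightness" there is the same thing as "value".

```java
import java.awt.Color;

public class HsvCheck {
    public static void main(String[] args) {
        // "Firetruck red" should be hue 0.0 at full saturation and full value.
        int rgb = Color.HSBtoRGB(0.0f, 1.0f, 1.0f) & 0xFFFFFF;
        System.out.printf("hue 0, sat 1, val 1 -> #%06X%n", rgb);  // #FF0000

        // Dropping the value darkens the color toward black...
        int darker = Color.HSBtoRGB(0.0f, 1.0f, 0.5f) & 0xFFFFFF;
        // ...while dropping the saturation washes it out toward white.
        int paler = Color.HSBtoRGB(0.0f, 0.5f, 1.0f) & 0xFFFFFF;
        System.out.printf("half value -> #%06X%n", darker);
        System.out.printf("half sat   -> #%06X%n", paler);
    }
}
```

Printing the three RGB triples makes the roles of saturation and value visible without drawing anything.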
http://wiki.openhatch.org/Boston_Python_workshop/Saturday/ColorWall
Java example

The following Java class shows how to get tomorrow's date in just a few lines of code. Not counting all the boilerplate code and comments, you can get tomorrow's date in four lines of code, or fewer. Here's the source code for our class that shows how to get "tomorrow" by adding one day to today:

import java.util.*;

/**
 * A Java Date and Calendar example that shows how to
 * get tomorrow's date (i.e., the next day).
 *
 * @author alvin alexander, devdaily.com
 */
public class JavaDateAddExample {

    public static void main(String[] args) {
        // get a calendar instance, which defaults to "now"
        Calendar calendar = Calendar.getInstance();

        // get a date to represent "today"
        Date today = calendar.getTime();
        System.out.println("today: " + today);

        // add one day to the date/calendar
        calendar.add(Calendar.DAY_OF_YEAR, 1);

        // now get "tomorrow"
        Date tomorrow = calendar.getTime();

        // print out tomorrow's date
        System.out.println("tomorrow: " + tomorrow);
    }
}

When I run this test class as I'm writing this article, the output from this class looks like this:

today: Tue Sep 22 08:13:29 EDT 2009
tomorrow: Wed Sep 23 08:13:29 EDT 2009

As you can see, tomorrow's date is 24 hours in the future from today's date (i.e., "now").
Java Date add - discussion

In this example date/calendar code, I first get today's date using these two lines of code:

Calendar calendar = Calendar.getInstance();
Date today = calendar.getTime();

I then move our Java Calendar instance one day into the future by adding one day to it, calling the Calendar add method with this line of code:

calendar.add(Calendar.DAY_OF_YEAR, 1);

With our Calendar now pointing one day in the future, I now get a Date instance that represents tomorrow with this line of code:

Date tomorrow = calendar.getTime();

Java Date add - summary

As you can see from the example source code, there are probably two main keys to knowing how to add a day to today's date to get a Date instance that refers to "tomorrow":

- Knowing how to get a Date instance from the Java Calendar class.
- Knowing how to perform Date math with the Calendar add method.

Comments

Date myDate = new Date

myDate = new Date();
myDate.setTime(myDate.getTime() + 86400000);

(86400000 milliseconds = 1 day)

Very nice, thanks. I had gotten used to most of the Java Date class constructors being deprecated, and somewhere along the line I quit using the no-args constructor, which is not deprecated, and just started using the Calendar class.
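For what it's worth, on Java 8 and later the same "tomorrow" calculation is usually done with the java.time API instead of Calendar. This is an alternative to the article's approach, not part of it:

```java
import java.time.LocalDate;

public class JavaTimeTomorrowExample {
    public static void main(String[] args) {
        // java.time (Java 8+) replaces most Date/Calendar arithmetic.
        LocalDate today = LocalDate.now();
        LocalDate tomorrow = today.plusDays(1);  // immutable: returns a new object
        System.out.println("today:    " + today);
        System.out.println("tomorrow: " + tomorrow);

        // Unlike adding 86400000 ms to a Date, plusDays counts calendar days,
        // so a daylight saving transition can't throw the result off by an hour.
        System.out.println(LocalDate.of(2020, 2, 28).plusDays(1));  // 2020-02-29
        System.out.println(LocalDate.of(2019, 12, 31).plusDays(1)); // 2020-01-01
    }
}
```

The setTime(getTime() + 86400000) approach shown in the comments adds exactly 24 hours, which is usually, but not always, the same thing as "the next day".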
https://alvinalexander.com/java/java-date-add-get-tomorrows-date
My journey with unit testing in Java so far

I am a very recent convert to the unit testing bandwagon. Perhaps it seemed way too corporate for my taste when I first read about it. But now I've come to see that it can be useful even for someone working alone on a hobby Java project.

The first time I read about unit testing, I thought it was kind of pointless and stupid. You write a program that does one thing, and then you write another program that does the same thing the same damn way? I realized early on that it's possible to fool yourself with unit testing. I didn't see the point back then.

Getting the green bar with the 100.00% on it, or a column of green checkmarks, that's not the point of unit testing. Because if that's all you want, you can just do a few tests like this one:

@Test
public void fakeTest() {
    System.out.println("Fake test");
}

Nor is the red bar with 0.00% the point either, because for that you can just run auto-generated tests without making any changes. The point is, according to my understanding now, to put each individual component of the program through the wringer, so that any problems arising in the system as a whole are due to an unexpected interaction, and not a flaw in any of the individual components.

When I was starting to learn about unit testing, I was also starting to work in earnest on a Java program to draw certain mathematical diagrams pertaining to prime numbers in certain domains. I'm not going to go too in depth on the math here. It's not that it's advanced, but it does take several paragraphs to explain properly and it would be a major sidetrack from the topic of unit testing. I will try to limit the math stuff to basic high school math. A lot of it boils down to checking whether ordinary, "simple" whole numbers are prime or not, and then drawing diagrams to show the results of the calculations. Two of the diagrams my program would draw are famous.
Not Mandelbrot set famous, but famous enough that a Google image search can readily bring them up. Look up "Gaussian primes" or "Eisenstein primes." These correspond to the "discriminants" −1 and −3 in my program (the terminology is incorrect but convenient). I can just compare the results of my program to the images Google shows me. As for −2, −5, −6, −7, etc., I can visually scrutinize the results of my program to make sure they make sense.

At that early stage, it occurred to me that the program could also draw diagrams of sets mathematicians call "ideals," and that some of the functions could be used to explore the Euclidean GCD algorithm in domains that are "not Euclidean." For that latter application, it does not immediately occur to me what sort of diagram should be drawn. I would have to be sure that the basic arithmetic functions on "complex" numbers (addition, subtraction, multiplication and division) all work correctly. The prospect of slowly inputting a bunch of numbers at the command line and carefully checking the results one by one was not the least bit appealing. If only there was a way to have the computer test those functions on several numbers in quick succession and let me know if the results are correct or not! That's how I realized the program I was working on could use unit testing after all.

If you've read this far, it might seem like a fair assumption that you know what unit testing is. Also, I will assume you know at least the basics of Java or C#. Even so, an elementary explanation of what unit testing is might help avoid confusion with the related terminology. I will put a few key terms in bold.

Unit testing tests whether the individual components of a program work correctly in isolation. Unit testing is not about testing how well a program works with other programs or devices, and it's certainly not about testing how intuitive the user interface is to a human user.
Unit testing is not unique to Java, which is part of the reason I'm going to use the term "subroutine" to mean what would be called a "method" in Java, or a function or procedure in the old Pascal programming language. Plus, as an added bonus, the word "subroutine" has somewhat the flavor of Treknobabble. Though I think it still makes sense to use the term "function" to refer to methods with any return type other than void. Call me old-fashioned.

So you write a subroutine in C++, C#, Java, whatever, and then you write another subroutine in that same programming language which calls on the first subroutine and checks the validity of the results. The second subroutine is a test subroutine, which we just call a test. A test is any subroutine that serves no other purpose than to test the operation of another subroutine and report whether or not the subroutine worked correctly.

Let's say a subroutine calls on another subroutine, checks the validity of the subroutine's output and does some System.out.println() to report on the performance of the first subroutine. That is indeed a test. But a test is more useful when it contains one or more assertions, which makes it easier for the computer to tell whether the test passed or not. If all the assertions in a test hold true, the test passes, but if even just one assertion is false, the whole test fails.

Perhaps the most common assertion is that two things are equal. Here is a toy example of an equality assertion:

int expResult = 2;
int result = 1 + 1;
assertEquals(expResult, result);

That's a toy example because the assertion should always hold true. If it's false it might signal a very peculiar problem with the hardware.
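To make the "a test is just another subroutine" idea concrete before any framework enters the picture, here is a self-contained sketch. The primeQ implementation is hypothetical (a naive trial-division check, not the author's actual code), and the test is a plain subroutine with no JUnit involved:

```java
public class PrimeQTest {
    // The subroutine under test: a deliberately simple trial-division
    // primality check for ordinary integers.
    static boolean primeQ(int number) {
        if (number < 2) {
            return false;
        }
        for (int d = 2; d * d <= number; d++) {
            if (number % d == 0) {
                return false;
            }
        }
        return true;
    }

    // The test: another subroutine whose only purpose is to exercise
    // primeQ on known inputs and report whether it behaved as expected.
    static void testPrimeQ() {
        int[] primes = {2, 3, 5, 7, 11, 13};
        int[] nonPrimes = {0, 1, 4, 6, 9, 15};
        for (int p : primes) {
            if (!primeQ(p)) {
                throw new AssertionError(p + " should be prime");
            }
        }
        for (int n : nonPrimes) {
            if (primeQ(n)) {
                throw new AssertionError(n + " should not be prime");
            }
        }
        System.out.println("testPrimeQ passed");
    }

    public static void main(String[] args) {
        testPrimeQ();
    }
}
```

A framework like JUnit essentially replaces the hand-rolled AssertionError throws with assertEquals and friends, and takes over the job of running the test subroutines and tallying the results.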
Here is another example of an equality assertion, one likelier to reveal a mistake the programmer can actually fix:

GaussianInteger expResult = new GaussianInteger(-1, 0);
GaussianInteger imagUnit = new GaussianInteger(0, 1);
GaussianInteger result = imagUnit.times(imagUnit);
assertEquals(expResult, result);

The assertion should hold true if the times() function of GaussianInteger has been properly defined.

The program you're writing probably consists of several subroutines, which are likely grouped in classes and packages, and maybe each subroutine needs its own test. More than one test to be run together makes a test suite. If you have a class of tests with a test for each subroutine in another class, it makes sense to call the class of tests a test class, and to name it accordingly. For example, a test class for class Dashboard would quite logically be class DashboardTest.

A testing framework makes it easier to group tests together and run them in different combinations one after the other. The framework then reports the results of the tests, usually as a percentage, with a 100% pass rate generally associated with the color green, and anything less with the color red. Perhaps the most famous testing framework for Java is JUnit, though TestNG is gaining ground. For C#, there is XUnit; a little thought will show why they did not use the letter C for that name.

It should be emphasized that the framework will most likely not run the tests in source code order; I will harp on this point a bit later on. I hear TestNG has some facility for specifying test order. As far as I know, the most you can do in JUnit is designate code to execute before or after the test class or before or after each test.
Let's say one package in the project is to consist of five classes. You can write one class, then write the test suite for that class, and test and debug that one class before moving on to the other classes and their tests. Or you can even write all the tests first and then write the classes to be tested. That's test-driven development, which has its pros and cons, but I'm not going to say too much more about it today because I don't program that way (though I suppose I might have to if I were to get a job in a test-driven shop).

You can write tests from scratch or you can have your integrated development environment (IDE), like Eclipse or NetBeans, automatically generate tests for you which then you review and tweak, or sometimes almost completely rewrite. I can only guess as to the granularity of automatically generated tests on other IDE and testing framework combinations, but at least NetBeans with JUnit seems to generate one test per subroutine. For example, for boolean primeQ(int number) it might generate public static void testPrimeQ().

But nothing prevents you from writing more tests for different aspects of the same subroutine. For example, in addition to testPrimeQ(), you could also have testPrimeQNegativeNumbers(), testPrimeQZero(), testPrimeQNaN(), etc. It is also possible to write one test for more than one subroutine. This might make sense in the case of polymorphism, such as if you have boolean primeQ(int realInt) and boolean primeQ(GaussianInteger gauInt). I can attest that in such a case, NetBeans with JUnit would generate two separate tests. You may or may not like such granularity.

This seems as good a point as any to express my opinion on how to test for thrown exceptions. Do you use the modern approach of annotations, or the dated try-catch with fail way? I would call the latter approach, which I prefer, "classic" rather than "dated." But if the test suite is granular enough, I can acknowledge that annotations might be the best way to go.
For example, let's say you have one division function but two tests for that division function. One test works with normal nonzero divisors, and the other test tries to divide a number by zero.

@Test(expected = Exception.class)
public void testDividingByZero() {
    System.out.println("Testing division by zero.");
    QuadraticRing OQi7 = new QuadraticRing(-7);
    QuadraticInteger dividend = new QuadraticInteger(-3, 2, OQi7);
    QuadraticInteger divisor = new QuadraticInteger(0, 0, OQi7);
    QuadraticInteger result = dividend.dividedBy(divisor);
}

If I've done this correctly, the test should pass as long as the division function throws any exception at all. But I think the old try-catch with fail enables you to be both more specific and more general. What if in the division by zero example you want the division function to throw either an ArithmeticException or an IllegalArgumentException to pass, but any other exception fails the test? There's probably a way to do that with annotations. But since I prefer to not get too granular, try-catch with fail is easier. The following example would be an excerpt from a longer test:

dividend = new QuadraticInteger(-3, 2, OQi7);
divisor = new QuadraticInteger(0, 0, OQi7);
try {
    result = dividend.dividedBy(divisor);
    fail("Division by 0 did not trigger any exception.");
} catch (ArithmeticException ae) {
    System.out.println("Division by 0 correctly triggered ArithmeticException.");
} catch (IllegalArgumentException iae) {
    System.out.println("Division by 0 correctly triggered IllegalArgumentException.");
} catch (Exception e) {
    fail("Division by 0 triggered the wrong exception," + e.getMessage());
}

The order of the catches matters because both ArithmeticException and IllegalArgumentException are subclasses of Exception (by way of RuntimeException). So if the catch (Exception e) block preceded the other two catch blocks, the test would fail even if the correct exception was triggered.
Actually… the IDE will probably let you know if you have your catches in the wrong order. This is really only a problem for those who still insist on writing their source code in a separate editor rather than the IDE's editor.

In test-driven development, the try-catch with fail example would express that either ArithmeticException or IllegalArgumentException are valid exceptions to throw in the case of attempting to divide by zero, but Exception or any of its other subclasses would be wrong. Exception is too vague, even if the attached message is very detailed, and PrinterException seems irrelevant somehow.

So the try-catch with fail in the example is specifically for trying to divide −3 + 2√−7 by 0. It seems unlikely that this would be the one case for which the implementation works as expected but fails for all others. It would of course be impossible to test this for every number in that domain, since it's an infinite domain. And it would take too much time to test with every number that can be represented by our implementation of QuadraticInteger, because that's a finite set but large enough to tie up the computer for too long a stretch. But it's certainly practical to put the try-catch in a loop in order to test division by zero with, say, a hundred different numbers. That shouldn't take the computer more than a few seconds, and it would give us greater confidence in a passing test.

On my journey from unit testing skeptic to believer, I've become aware of some of the benefits of unit testing. All the code examples below come from my Java program to draw diagrams of prime numbers in "imaginary" quadratic integer rings, which is available on GitHub. Because at first I wasn't clear on the nuts and bolts of unit testing, it made sense for me to start out by having NetBeans automatically generate the tests for my review, rather than trying to write the tests from scratch.
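The loop idea from a few paragraphs back can be sketched without the QuadraticInteger class, which I don't have here; as a stand-in, plain int division plays the role of dividedBy, since it also throws ArithmeticException on a zero divisor. A hundred random dividends take a negligible fraction of a second:

```java
import java.util.Random;

public class DivideByZeroLoopTest {
    public static void main(String[] args) {
        Random random = new Random();
        int zero = 0;  // a variable divisor, so the compiler can't object
        for (int i = 0; i < 100; i++) {
            int dividend = random.nextInt();
            boolean threw = false;
            try {
                int quotient = dividend / zero;
                System.out.println("Unexpectedly got quotient " + quotient);
            } catch (ArithmeticException ae) {
                threw = true;  // int division by zero throws ArithmeticException
            }
            if (!threw) {
                throw new AssertionError(dividend + " / 0 did not throw");
            }
        }
        System.out.println("All 100 division-by-zero attempts threw as expected.");
    }
}
```

The same shape works for the quadratic-integer case: generate a hundred random dividends, attempt the division by zero inside the try, and fail the test the moment any attempt completes without an exception.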
The first class I needed to test was NumberTheoreticFunctionsCalculator, which mostly operates only on purely “real” integers, but is necessary for the other classes in the project. So I opened the project in NetBeans and put the cursor on this line:

    public class NumberTheoreticFunctionsCalculator {

A lightbulb icon replaced the line number. Hovering the mouse over that lightbulb brought up the hints “Create Subclass” and “Create Test Class”. Clicking on the test class hint brought up more specific hints: my NetBeans installation came pre-loaded with JUnit 4, Selenium (not sure which version) and TestNG (not sure on the version of that one either). I selected JUnit. NetBeans got to work, creating a package for the test classes, some “boilerplate” and tests with presumed fails. The “boilerplate” included the following:

    import org.junit.After;
    import org.junit.AfterClass;
    import org.junit.Before;
    import org.junit.BeforeClass;
    import org.junit.Test;
    import static org.junit.Assert.*;

Behind the scenes, NetBeans also takes care of connecting the appropriate JAR files on my system (it would be an understandable misapprehension to think that the IDE will actually go to junit.org to download the relevant package). I wish I could remember what I did next. Did I run the auto-generated tests right away, or did I spruce them up a little bit first? By default, each auto-generated test includes the line

    fail("The test case is a prototype.");

This might seem like an obvious thing, but I speak from experience: if your test keeps failing after you’ve made several changes to it but you can’t see why, check whether you have actually deleted the fail line. Remember also that the tests are not guaranteed to run in any particular order. I half suspect the frameworks make sure tests don’t run in source code order. I had read that a few times, but it wasn’t until I had a test fail because of an erroneous test order assumption that this point about test order was etched on my mind.
You will make mistakes when writing the tests, and get false passes and false fails. I know I have (well, in my case I’m only aware of false fails). In the case of NumberTheoreticFunctionsCalculator, all the fails I had were due to mistakes in the tests rather than mistakes in the class being tested. I had already slowly and carefully tested NumberTheoreticFunctionsCalculator by typing numbers at the command line and noting the answers. With ImaginaryQuadraticRing, the biggest problem I had was writing correct tests for toString(), toHTMLString() and toTeXString(). But the class that was sorely in need of automated testing was ImaginaryQuadraticInteger, because I’m just too slow at mental arithmetic with complex numbers and I didn’t want to abuse Wolfram Alpha with too many queries like 3 * (1/2 + sqrt(-15)/2), 4 * (1/2 + sqrt(-15)/2), 5 * (1/2 + sqrt(-15)/2), etc. The most obvious benefit of unit testing is ensuring that the algorithm is correct. This is certainly a requirement of a program that does something mathematical. Unit testing also helps ensure the validity of the object-oriented design. Writing tests made me take a closer look at ImaginaryQuadraticRing and it made me realize I needed to add the function boolean hasHalfIntegers(). I probably wouldn’t have done that if I didn’t do unit testing. That’s for the principle of data encapsulation. In RingWindowDisplay, I frequently refer to the protected boolean instance field d1mod4. A purist would say that field needs to be private. But I think, rightly or perhaps wrongly, that RingWindowDisplay using the getter method hasHalfIntegers() instead of accessing d1mod4 directly would incur a performance penalty, resulting in a slower drawing of diagrams (and I still want to improve the performance of the program drawing the Eisenstein primes diagram at 2 pixels per unit interval). So I choose to rationalize that particular design decision for now. 
But if in the future I need to write classes that import the imaginaryquadraticinteger package, those classes won’t be able to access d1mod4 directly, so a getter function is essential. Technically a class and its test class are in the same package. At least that’s how it looks to me in NetBeans with JUnit. Even so, I advise you to pretend that the tests can’t access the protected fields of the classes being tested. I don’t always follow that advice myself, but I do try to go by it most of the time. And just now I realized that most fields in ImaginaryQuadraticRing need to be final. Upon constructing ImaginaryQuadraticRing(-3), for example, d1mod4 should be set to true and there is no reason for that to ever change. There is also the question of whether or not you need to override equals() and hashCode(). I had wondered about that and read a little about it. What I read left me with the impression that overriding those methods is a major hassle best avoided. But unit testing made me face the fact that, at least for ImaginaryQuadraticInteger, it is necessary to override equals(), and therefore also hashCode(). By the way, the IDE, at least NetBeans, is very helpful in writing equals() and hashCode() overrides. In some cases, unit testing might prompt you to write more constructors. In the case of ImaginaryQuadraticInteger, I decided to add a constructor without a denominator parameter. So for example, to use ImaginaryQuadraticInteger only for Gaussian integers, instead of having to write something like ImaginaryQuadraticInteger gaussInt = new ImaginaryQuadraticInteger(a, b, ringGaussian, 1); over and over again you can write ImaginaryQuadraticInteger gaussInt = new ImaginaryQuadraticInteger(a, b, ringGaussian); Not having to constantly write , 1 might not seem like much, but repeated often enough might get on your nerves. 
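Returning to the equals() and hashCode() point above: the article doesn't show the actual overrides, so here is a hedged sketch for a hypothetical, simplified stand-in class. QuadInt and its field names are my inventions, not the author's API; the shape, though, is what an IDE-generated override typically looks like: compare every field that participates in equality, and fold those same fields into the hash.

```java
// QuadInt is a hypothetical, simplified stand-in for ImaginaryQuadraticInteger:
// it represents a + b*sqrt(d), with all three parts stored as ints.
class QuadInt {
    final int realPart;
    final int imagPartMult;
    final int ringD;

    QuadInt(int a, int b, int d) {
        realPart = a;
        imagPartMult = b;
        ringD = d;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof QuadInt)) return false;
        QuadInt other = (QuadInt) obj;
        return realPart == other.realPart
                && imagPartMult == other.imagPartMult
                && ringD == other.ringD;
    }

    @Override
    public int hashCode() {
        // Equal objects must produce equal hashes, so use the same three fields
        int hash = realPart;
        hash = 31 * hash + imagPartMult;
        hash = 31 * hash + ringD;
        return hash;
    }

    public static void main(String[] args) {
        QuadInt a = new QuadInt(-3, 2, -7);
        QuadInt b = new QuadInt(-3, 2, -7);
        System.out.println(a.equals(b));                      // true
        System.out.println(a.equals(new QuadInt(-3, 2, -5))); // false
        System.out.println(a.hashCode() == b.hashCode());     // true
    }
}
```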
Of course in such a scenario, you might decide you’d rather subclass ImaginaryQuadraticInteger as GaussianInteger, and then you can just have a 2-argument constructor. Unit testing can also help you better understand the limitations of your program. With ImaginaryQuadraticInteger, I made a conscious decision to use the primitive int data type to hold the real part and the imaginary part multiple, rather than BigInteger. A number like 32,768 + 32,768i, for example, does not seem terribly large. But its norm is 2,147,483,648, which is just large enough to overflow the int data type and cause ImaginaryQuadraticInteger.norm() to be wrong. I had been aware of this limitation from early on. But since I have yet to add diagram dragging capability to the program, I didn’t feel any urgency yet to address this limitation. Much more seriously, however, the arithmetic functions can cause overflows with numbers closer to 0. This gave me quite a bit of trouble for unit testing ImaginaryQuadraticInteger.divides(). Eventually I added to ImaginaryQuadraticIntegerTest.setUpClass() the ability to determine what is the largest integer that can be used for the real and imaginary parts without causing overflow problems and tests to fail. For example, in one run of the test suite, setUpClass() pseudorandomly came up with the ring of algebraic integers of Q(√−6,151) for the tests and determined that 295 could be used safely as a real part or imaginary part multiple. Then it came up with the numbers 172 − 82i, 172 − 82(√−2), 345/2 − 163(√−3)/2, 345/2 − 163(√−7)/2 and 345/2 − 163(√−6,151)/2 for some of the tests. That last number has a norm of 76,252, which is small enough not to worry us about overflows. I also wrote some basic overflow detection into the arithmetic functions of ImaginaryQuadraticInteger, but I have yet to write tests to show that they correctly throw ArithmeticException when overflows occur. 
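The overflow described above is easy to reproduce with plain int arithmetic, independently of the author's classes. A quick sketch showing the 32-bit norm computation wrapping around, next to a 64-bit version that doesn't:

```java
class NormOverflowDemo {
    // norm(a + bi) = a^2 + b^2, computed in 32-bit int arithmetic (overflow-prone)
    static int normInt(int a, int b) {
        return a * a + b * b;
    }

    // The same norm computed in 64-bit long arithmetic
    static long normLong(int a, int b) {
        return (long) a * a + (long) b * b;
    }

    public static void main(String[] args) {
        // 32768^2 + 32768^2 = 2,147,483,648, one more than Integer.MAX_VALUE
        System.out.println(normInt(32768, 32768));  // wraps around to -2147483648
        System.out.println(normLong(32768, 32768)); // 2147483648
    }
}
```

This is also roughly what a test for the basic overflow detection mentioned above would have to pin down: the point at which the int version stops agreeing with the long version.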
When you make changes to the program, if you already have a test suite, it is much easier to confirm that the program still works correctly. In my mathematical diagram program, I have a function that tests whether a given “simple” integer is prime. In my original implementation, the isPrime() function would actually call primeFactors() and use that to determine if the number is prime. In hindsight, that was a terrible idea. But at the default magnification of the diagrams (40 pixels per unit interval), the inefficiency made no appreciable difference. To zoom out the Eisenstein primes diagram to 2 pixels per unit interval, however, the program would take almost 20 seconds. Twenty seconds would have been acceptable, perhaps even amazing, to Gotthold Eisenstein in the 1840s, but not so much to me today. I added some time benchmarking println() statements in RingWindowDisplay, but after a short while I realized that the real problem was in NumberTheoreticFunctionsCalculator.isPrime(). That function only needs to find a least prime factor, not look at a complete factorization. You can look at a number like 48,015 and immediately tell that it is not prime, but to actually factor it in full would take you a bit longer. The same goes for a computer, though of course the computer does it much quicker. Still, a slightly inefficient subroutine repeated enough times can cause a major inefficiency. So my improved algorithm for isPrime() should work faster and still work correctly. But in the process of typing the new and improved version of the function, I could make some small but crucial mistake that messes it up. Thanks to having NumberTheoreticFunctionsCalculatorTest ready, checking that my improved isPrime() function works correctly was a simple matter of running the test. One nice, unsung feature of testing frameworks is a little bit of benchmarking. For instance, a recent run of testIsPrime() on my computer took 0.162 seconds.
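The improvement described here, stopping at the least prime factor instead of computing a full factorization, can be sketched roughly like this (my own illustration, not the article's actual isPrime()):

```java
class PrimeCheck {
    // Trial division that stops at the first (least) factor found,
    // rather than computing a complete factorization.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2;
        for (int d = 3; (long) d * d <= n; d += 2) {
            if (n % d == 0) return false; // least prime factor found, we're done
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(48015)); // false (divisible by 3 and by 5)
        System.out.println(isPrime(997));   // true
    }
}
```

The early return on the first factor is the whole point: for a composite like 48,015 the loop exits at d = 3, whereas a factorization-based check would keep dividing all the way down.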
I’ve learned that it’s important to have println statements in your tests. But not too many of them; excess println calls can really slow things down. A good baseline is that each test should have a println identifying what is being tested. I know JUnit in NetBeans automatically generates those identification println statements, and I imagine other combinations of testing frameworks and IDEs also do so. But remember to include identification println statements in any tests you add besides those that were automatically generated. This is another one of those things that I’ve learned from personal experience. Knowing what order the tests just ran in does not seem terribly important, and in any case the framework will probably let you know. If your setup and tests generate data besides the results of the assertions, it might be a good idea to have println statements for some of that data. For instance, NumberTheoreticFunctionsCalculatorTest.setUpClass() generates a list of the positive primes below 1000, so it reports that it has generated a list of 168 primes, the 168th prime being 997. And randomNegativeSquarefreeNumber() comes up with a pseudorandom negative “squarefree” number (not divisible by any perfect square), so the test for that function reports what pseudorandom number the function delivered. The assertions in the test are what determine if the test passed or failed, but having the tests give a little bit of information about what is going on can be a valuable sanity check. Also, it keeps the test run from feeling too long, as you’re not worried so much that your computer may have crashed. My next steps in unit testing should probably include unit testing the graphical user interface (GUI) created by RingWindowDisplay. It can be done and it should be done, but I haven’t yet read up on how to do it. As for integration testing, I’m simply not at that point yet. Maybe when I write a program that uses a database, or a mobile device’s accelerometer.
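The figures quoted above are easy to verify: a quick Sieve of Eratosthenes (again my own sketch, not the article's setup code) confirms there are 168 primes below 1000, the largest being 997.

```java
import java.util.ArrayList;
import java.util.List;

class PrimeListCheck {
    // Sieve of Eratosthenes: all primes strictly below 'limit'
    static List<Integer> primesBelow(int limit) {
        boolean[] composite = new boolean[limit];
        List<Integer> primes = new ArrayList<>();
        for (int n = 2; n < limit; n++) {
            if (!composite[n]) {
                primes.add(n);
                for (int m = 2 * n; m < limit; m += n) {
                    composite[m] = true; // mark every multiple of n as composite
                }
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        List<Integer> primes = primesBelow(1000);
        System.out.println(primes.size());                 // 168
        System.out.println(primes.get(primes.size() - 1)); // 997
    }
}
```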
In summary, unit testing can definitely help you improve the program you’re testing and overall make you a better programmer. Whether it can make you a more employable programmer, I can’t comment on, as that touches on factors that have nothing to do with your skill, knowledge or ability to work in a team.
https://alonso-delarte.medium.com/my-journey-with-unit-testing-in-java-so-far-e5b4db2e048f
Custom ASP.NET Core Middleware Example Mike One of the great things about ASP.NET Core is its extensibility. The behavior of an ASP.NET Core app’s HTTP request handling pipeline can be easily customized by specifying different middleware components. This allows developers to plug in request handlers like MVC middleware, static file providers, authentication, error pages, or even their own custom middleware. In this article, I will walk you through how to create custom middleware to handle requests with simple SOAP payloads. A Disclaimer Hopefully this article provides a useful demonstration of creating custom middleware for ASP.NET Core in a real-world scenario. Some users might also find the SOAP handling itself useful for processing requests from old clients that previously communicated with a basic WCF endpoint. Be aware, though, that this sample does not provide general WCF host support for ASP.NET Core. Among other things, it has no support for message security, WSDL generation, duplex channels, non-HTTP transports, etc. The recommended way of providing web services with ASP.NET Core is via RESTful web API solutions. The ASP.NET MVC framework provides a powerful and flexible model for routing and handling web requests with controllers and actions. Getting Started To start, create a .NET Core library (the project type is under web templates and is called Class Library (package)). Throughout this article I will be using the Preview 2 version of the .NET Core tools. ASP.NET Core middleware uses the explicit dependencies principle, so all dependencies should be provided through dependency injection via arguments to the middleware’s constructor. The one dependency common to most middleware is a RequestDelegate object representing the next delegate in the HTTP request processing pipeline. If our middleware does not completely handle a request, the request’s context should be passed along to this next delegate. 
Later, we’ll specify more dependencies in our constructor but, for now, let’s add a basic constructor to our middleware class. Add a dependency to Microsoft.AspNetCore.Http.Abstractions to your project.json (since that’s the contract containing RequestDelegate), give your class a descriptive name (I’m using SOAPEndpointMiddleware), and create a constructor for the middleware class like this: Next, we need to handle incoming HTTP request contexts. For this, middleware is expected to have an Invoke method taking an HttpContext parameter. This method should take whatever actions are necessary based on the HttpContext being processed and then call the next middleware in the HTTP request processing pipeline (unless no further processing is needed). For the moment, add this trivial Invoke method: To try out our middleware as we create it, we will need a test ASP.NET Core app. Add an ASP.NET Core web API project to your solution and set it as the startup project. ASP.NET Core middleware (custom or otherwise) can be added to an application’s pipeline with the IApplicationBuilder.UseMiddleware<T> extension method. After adding a project reference to your middleware project ( "CustomMiddleware": "1.0.0.0"), add the middleware to your test app’s pipeline in the Configure method of its Startup.cs file: You may notice that the other middleware components (MVC, static files, etc.) all have custom extension methods to make adding them easy. Let’s add an extension method for our custom middleware, too. I added the following method in a new source file in the custom middleware library project (notice the Microsoft.AspNetCore.Builder namespace so that IApplicationBuilder users can easily call the method): The call to register the middleware in the test app then simplifies to app.UseSOAPEndpoint() instead of app.UseMiddleware<SOAPEndpointMiddleware>(). 
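The code snippets referenced throughout this post no longer load, so as a rough, hedged reconstruction: a minimal version of the constructor, the trivial Invoke method and the UseSOAPEndpoint extension method described above might look like the following. Only RequestDelegate, HttpContext, IApplicationBuilder and UseMiddleware&lt;T&gt; are real ASP.NET Core APIs here; the class layout and the logging line are my own guesses at the lost snippets.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class SOAPEndpointMiddleware
{
    private readonly RequestDelegate _next;

    // Dependencies arrive through the constructor (explicit dependencies principle)
    public SOAPEndpointMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // Trivial Invoke: log the request, then pass it along the pipeline
    public async Task Invoke(HttpContext context)
    {
        Console.WriteLine($"Request received: {context.Request.Path}");
        await _next(context);
    }
}

// Extension method so apps can call app.UseSOAPEndpoint()
namespace Microsoft.AspNetCore.Builder
{
    public static class SOAPEndpointExtensions
    {
        public static IApplicationBuilder UseSOAPEndpoint(this IApplicationBuilder builder)
        {
            return builder.UseMiddleware<SOAPEndpointMiddleware>();
        }
    }
}
```

Placing the extension class in the Microsoft.AspNetCore.Builder namespace, as the article suggests, means callers get the method without an extra using directive.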
At this point, we have successfully created a basic piece of custom middleware and injected it into our test app’s HTTP request processing pipeline. Launch your test app (using Kestrel so that you can easily see Console output) and navigate to the hosted site with your web browser. Notice that the custom middleware logs messages as requests are received! Specifying a Service Type Now that we have a simple custom middleware component working, let’s have it start actually processing SOAP requests! The middleware should listen for SOAP requests to come in to a particular endpoint (URL) and then dispatch the calls to the appropriate service API based on the SOAP action specified. For this to work, our custom middleware will need a few more pieces of information: - The path it should listen on for requests - The type of the service to invoke methods from - The MessageEncoder used to encode the incoming SOAP payloads These arguments will all need to be provided when an app registers our middleware as part of its processing pipeline, so let’s add them to the constructor. Note that the MessageEncoder class is in the System.ServiceModel.Primitives contract. After updating the constructor, it should look like this: The UseSOAPEndpoint extension method will also need to be updated (and can be made generic to capture the service type parameter): Because MessageEncoder is an abstract class without any implementations publicly exposed, users of this library will have to either implement their own encoders or (more likely) extract an encoder from a WCF binding. To make that easier, let’s also add a UseSOAPEndpoint overload that takes a binding (and extracts the encoder on the user’s behalf): Discovering Service Type Operations Requests handled by our SOAP-handling middleware will be requests to invoke some operation on the specified service type. We can use reflection to find methods on the given service type which correspond to contract operations. 
Let’s create a new type ( ServiceDescription) to store this metadata. It should take the service type as an input to its constructor and then should walk the type with reflection to discover implemented contracts and operations according to the following heuristic: - Contracts should be discovered by finding ServiceContractAttributeelements on interfaces that the type implements - Contract name and namespace information is taken from the attribute - Operations should be discovered by finding OperationContractAttributeelements on methods within the contract interfaces - Operation name, properties, and action name are taken from the attribute; the method to invoke is the service type’s implementation of the interface method Note that ServiceDescription, ContractDescription, and OperationDescription used here are not the types from the System.ServiceModel.Description namespace (so you should not need to depend on that namespace). Rather, they are simple new types used for the purpose of this sample code. If the names are confusing, feel free to change them. The ServiceDescription type (or whatever you have named it) should end up looking something like this: The ContractDescription type is similar: The OperationDescription class looks much the same except that it also contains metadata about how the operation should be invoked: Once these types exist, the middleware’s constructor can be updated to store a ServiceDescription created from the specified Type instead of storing the Type itself: Note that this could all be simplified by just having a dictionary of action names and OperationDescription or MethodInfo dispatch methods. I’ve opted to have the whole service/contract/operation structure stored, though, because it will allow expanding the sample with more complex functionality (such as supporting message inspectors) in the future. 
Invoking the Operations At this point, you should have a custom middleware class that takes a service type as input and discovers available operations. Now it’s time to update the middleware’s Invoke method to actually call those operations. The first thing to check in the Invoke method is whether or not the incoming request’s path equals the path our service is listening on. If not, then we need to pass the request along to other pipeline members. If the request’s path does equal the expected path for our service endpoint, we need to read the message and compose a response (this code replaces the ‘todo’ in the previous snippet). After that, we need to get the requested action by looking for a ‘SOAPAction’ header (which is how SOAP actions are usually communicated). Again, this code replaces the ‘todo’ from the previous snippet. Knowing the requested action, we can build on the previous snippet by finding the correct OperationDescription to invoke. Replace the previous ‘todo’ with the following: Now that we have a MethodInfo to invoke, we need to extract the arguments to pass to the operation from the request’s body. This can be done in a helper method with an XmlReader and DataContractSerializer. This argument reading helper assumes the arguments are provided in order in the message body. This is true for messages coming from .NET WCF clients, but may not be true for all SOAP clients. If needed, this method could be replaced with a slightly more complex variant that allows for re-ordered arguments and fuzzier parameter name matching. With the operation and arguments known, all that remains is to retrieve an instance of the service type to call the operation method on. This can be done with ASP.NET Core’s built-in dependency injection. Change the middleware’s Invoke method signature to add an IServiceProvider parameter ( IServiceProvider serviceProvider). 
Then, we can use the IServiceProvider.GetService API to retrieve service types that the user has registered in the ConfigureServices method of their Startup.cs file. All together, the call to invoke the operation (replacing the ‘todo’ back in our Invoke method) should look something like this: Encoding the Response Finally, with a response in hand, we can use the MessageEncoder specified by the user to send the object back to the caller in the HTTP response. Message.CreateMessage requires an implementation of BodyWriter to output the body of the message with correct element names. So, add a class like the one below that implements BodyWriter. Then we can update the middleware’s Invoke method (replacing the final ‘todo’) to create a response message (if the operation isn’t one way) and write it to the HTTP context’s response. And that’s it! You have written custom ASP.NET Core middleware for handling SOAP requests. Testing it Out Now that our custom middleware actually works with service types, the simple test app we created before will need to be updated. We’ll need a simple service type to call into. If you don’t have one on-hand to test with, you can use this sample: The UseSOAPEndpoint call we added to the Configure method in our test host’s Startup.cs file will need to be updated to point to this new type: app.UseSOAPEndpoint<CalculatorService>("/CalculatorService.svc", new BasicHttpBinding());. Note that we’ve also created a BasicHttpBinding (to get a message encoder from). To use BasicHttpBinding, we will need to add a reference to the System.ServiceModel.Http contract in the test app’s project.json file. Also, since the instance of our service is created with dependency injection, the following line will need to be added to the ConfigureServices method in our host’s startup.cs file: services.AddSingleton<CalculatorService>();
Here is a simple client I created (as a .NET Core console application) to test the middleware and host (be sure to reference System.ServiceModel.Http and System.ServiceModel.Primitives in the project.json file): Launch the test host and point a test client (like the one pasted above) at it to see ASP.NET Core handle a SOAP request with our custom middleware! Using a network monitoring tool like Wireshark or Fiddler, we can observe the requests and responses. Request from sample: Response from sample: Conclusion I hope that this article has been helpful in demonstrating a real-world case of custom middleware expanding ASP.NET Core’s request processing capabilities. By creating a constructor that took the middleware’s dependencies as parameters and creating an Invoke method with the logic of deserializing and dispatching SOAP requests, we were able to serve responses to a simple WCF client from ASP.NET Core! SOAP handling middleware is just one example of how custom middleware can be used. More details on middleware are available in the ASP.NET Core documentation.
https://devblogs.microsoft.com/dotnet/custom-asp-net-core-middleware-example/
jakzaprogramowac.pl All questions About the project How To Program How To Develop Data dodania Pytanie 2017-09-25 16:09 Data frame to nested list » I have a dataframe which I read from a .csv file and looks like this: job name `phone number` <chr> <chr> ... (2) odpowiedzi 2017-09-18 11:09 Nest select statement in from clause while using order by » In SQL Server I try to include a select statement into a from clause while using an order by statement and I get an error "Incorrect syntax near the k... (2) odpowiedzi 2017-08-25 17:08 Laravel: how to get nested models » Could not you to direct me in right way. I have four models: "Item" belongs to several "Category" belongs to "Shop" belongs to "City" How can i sele... (1) odpowiedzi 2017-08-15 19:08 How to serialize a List content to a flat JSON object with Jackson? » Given the following POJOs .. public class City { private String title; private List<Person> people; } ... public class Person { ... (2) odpowiedzi 2017-07-23 22:07 How to access the Redux store from deeply nested components » I'm new to Redux and my nested component structure is shown below. I have a Redux container which owns the state and renders Component A. Component A ... (1) odpowiedzi 2017-07-13 07:07 Display nested array key values in respective columns php html » I have a data coming in below format in php 'Date 1' => array ( 'Object 1 ' => array ( 'Field 1' => Value 1, 'Field 2'... (1) odpowiedzi 2017-07-12 08:07 split nesting array for use in php » i have an array and want split them.may be tow ,tree or more array( name=>array( 0=>asda.jpg, 1=>kewj.j... (2) odpowiedzi 2017-07-06 00:07 Issues with nested forms » I'm trying to create a fully manageable website where the user can fill some skills ('css', 'php', 'ruby', you name it). Next to it, they fill how goo... 
(1) odpowiedzi 2017-06-27 01:06 View MongoDB array in order of indices with Compass » I am working on a database of Polish verbs and I'd like to find out how to display my results such that each verb conjugation appears in the following... (0) odpowiedzi 2017-05-28 23:05 Rails 5 nested form find or create » I have two models Run and Patient. Run belongs_to Patient and Patient has_many runs. On the Run model I'm using accepts_nested_attributes in which t... (0) odpowiedzi 2017-04-30 22:04 inner nested ng repeat section not binding to scope variable and getting commented » I have a nested ng-repeat like this. I have a scope variable boards which is array and has further nesting with another array called tasks which is ag... (1) odpowiedzi 2017-03-27 10:03 Passing a nested array from node.js server to ejs page » I have this object: { id: 22, user: 'Username', born: '06/10/1975', ralationships: [ { profession: 'Joiner' }, { profession: ... (2) odpowiedzi 2017-03-26 17:03 Displaying entire row of max() value from nested table » My table CUSTOMER_TABLE has a nested table of references toward ACCOUNT_TABLE. Each account in ACCOUNT_TABLE has a reference toward a branch: branch_r... (1) odpowiedzi 2017-03-23 22:03 Dictionary w nested dicts to list in specified order » Sorry for the post if it seems redundant. I've looked through a bunch of other posts and I can't seem to find what i'm looking for - perhaps bc I'm a ... (1) odpowiedzi 2017-03-17 22:03 Nested Tables Using a DTO » I need help getting my WebApi Controller to work. I have a 3 table Models like this. First Table public class MainTable{ public int MainTab... (2) odpowiedzi 2017-03-10 11:03 nested div tags in rails » I want to achieve this HTML code in my view <div class="progress "> <div class="progress-bar bgclre3559b" role="progressbar" style="width:... 
- 2017-01-20: Python: nested 'for' loops » I'd like to go through all n-digit numbers such that second digit of the number is always lower or equal to the first, third is lower or equal to the ... (5 answers)
- 2017-01-15: Trying to nest React components » I'm new to React.js and am trying to use it to build myself a website. What I'm trying to do is to nest a child component within a parent component; t... (1 answer)
- 2017-01-04: Nested routes with react router v4 » I am currently struggling with nesting routes using react router v4. The closest example was the route config in the React-Router v4 Documentation. ... (1 answer)
- 2016-12-22: Why is a local function not always hidden in C#7? » What I am showing below is rather a theoretical question. But I am interested in how the new C#7 compiler works and resolves local functions. In C#7... (3 answers)
- 2016-12-03: Recursive string decompression » I'm trying to decompress strings that look as follows: Input: 4(ab) Output: abababab; Input: 11ab Output: aaaaaaaaaaab; Input: 2(3b3(ab)) Outpu... (3 answers)
- 2016-12-01: How do I find a method's nesting in Ruby? » In Ruby, constant lookup is affected by nesting and methods retain their nesting. For example, if I have these modules: module A X = 1 end module... (2 answers)
- 2016-11-25: Nested Loops Using Loop Macro in Common Lisp » I am trying to implement a basic nested loop in CL, but the Loop macro is resisting this. Basically, I would like to find all possible products of 3-d... (1 answer)
- 2016-08-07: Extract nested JSON element using scala » I have the following code in Scala. My goal is to extract the value(s) of a key without knowing how many and how deep they are. import org.json... (2 answers)
- 2016-07-09: Saving checkbox value in erb - rails » I have a nested form that saves information to three different models. One section of the form uses checkboxes and is supposed to save values 1-5. Ho... (2 answers)
- 2016-05-30: Cleaning up pathologically-nested "if { } else { if { } else { if { ... } } }" » I currently have the misfortune of working on Somebody Else's C# code which has truly blown my mind. I have no idea how the person before me ever main... (1 answer)
- 2016-05-11: When I run this, it states that the list constraints is unbound. Why is that? » (defun combinations (&rest lists) (if (car lists) (mapcan (lambda (inner-val)(mapcar (lambda (outer-val) (cons outer-val inner-val)) (car lists)))... (1 answer)
- 2016-03-30: How to set Rails routes on nested resources? » This may seem redundant because a similar question has been asked here and here, but I haven't found a solution yet. I am running an RSpec to test :up... (1 answer)
- 2016-02-08: HTML: Reverse Order of 4 nested tags » First of all, I think I found an interesting/similar use case, in order to invert the order of some elements, here on S.O. Anyway, I need to change ... (2 answers)
- 2015-12-08: How do I handle consecutive multiple try's in swift 2.0 » I have a block of code that needs to execute 2 statements that require a try. Is it better to nest the try's and each one has their own do { } catch ... (1 answer)
- 2015-12-03: How to use join in Entity Framework to make output Json objects in levels - not the same level » I am trying to fetch data from a SQL Server database. The database has 3 tables as shown here: The tables relate to each other using primary and f... (1 answer)
- 2015-10-13: Nested sets with Baum and Laravel Commentable: children comments are being inserted without the commentable id and type » I am trying to achieve multi threaded comments using Laravel Commentable, which uses Nested Sets with Baum. I have managed to make the root comments wo... (1 answer)
- 2015-10-12: Nested relationships within Join Tables - Rails » I'm rather new to Rails and have a question about how to successfully add sub-categories to an existing Join Table relationship. For example, assume ... (1 answer)
- 2015-03-13: Python - nested dictionaries. Where is the error? » I have a CSV file that I have already filtered into a list and grouped. Example: 52713 ['52713', '', 'Vmax', '', 'Start V... (2 answers)
http://jakzaprogramowac.pl/lista-pytan-jakzaprogramowac-wg-tagow/122/strona/1
Hi guys,

Package: python-matplotlib
Version: 0.98.3-5
Severity: normal

python-matplotlib installs its own copy of pyparsing.py when it should in fact be using the copy that is shipped in python-pyparsing.

We've just received this bug report about the internal copy of pyparsing included in mpl. The situation in Debian is:

Stable   1.5.0-1
Testing  1.5.1-2
Unstable 1.5.2-1

Currently mpl ships:

$ grep "^__version" lib/matplotlib/pyparsing.py
__version__ = "1.5.0"
__versionTime__ = "28 May 2008 10:05"

In the changelog I can see:

$ egrep -A2 "2007-11-09.*pyparsing" CHANGELOG
2007-11-09 Moved pyparsing back into matplotlib namespace. Don't use system pyparsing, API is too variable from one release to the next - DSD

So there seems to be a reason for this "private" copy. The question is: is this reason still valid nowadays? Should we (at least packagers) remove the private copy and rely on the system pyparsing (or at least introduce a "check if system has pyparsing, if not fall back on private" wrap)? I haven't checked, but maybe you already know the answer.

Cheers,

On Fri, May 29, 2009 at 11:51, Daniel Watkins <daniel@...723...> wrote:
--
Sandro Tosi (aka morph, morpheus, matrixhasu)
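The "check if system has pyparsing, if not fall back on private" wrap suggested above boils down to trying the imports in order. Here is a small self-contained sketch of the idea; the module names in the demo are stand-ins, while in the real packaging case the candidates would be `pyparsing` followed by the bundled `matplotlib.pyparsing`:

```python
import importlib
import sys
import types

def load_first(*names):
    """Return the first module in `names` that can be imported.

    For the packaging case discussed above this would be called as
    load_first("pyparsing", "matplotlib.pyparsing").
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue  # try the next (e.g. bundled) candidate
    raise ImportError("none of %r could be imported" % (names,))

# Demo with a fake "bundled" module so the sketch is self-contained.
sys.modules["bundled_pyparsing"] = types.ModuleType("bundled_pyparsing")
mod = load_first("no_such_system_pyparsing", "bundled_pyparsing")
print(mod.__name__)
```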
https://discourse.matplotlib.org/t/python-modules-team-bug-531024-duplicate-version-of-python-pyparsing-included/11389
Eclipse Web Tools Platform Uncovered

The breadth and depth of applications on the Internet have reached a level that was never seen before. Every few months we hear about a new application, a technology or a new startup company that opens fantastic new opportunities for exploration in new Internet territory. Mobile devices, iPods, computers and laptops are converging to offer platforms that can access the Internet and run these new applications everywhere. And, to fuel their creation and deployment, the Open Source movement has created an unprecedented array of high quality, freely available middleware and tools. The Eclipse Web Tools Platform (WTP) project, as the name implies, was started a mere 5 years ago to extend Eclipse into the domain of Web applications. Since then it has become arguably the most popular Eclipse project, providing a very rich set of tools for Web application developers and a set of platform application programming interfaces (APIs) for tool vendors. This article is the first in a series of articles introducing WTP. In this first article we will introduce WTP and cover the underlying concepts and the Java Web application development tools. The series is organized into the following topics:

Part II - WTP Uncovered: JavaServer Faces (JSF)
Part III - WTP Uncovered: EJBs and Java Persistence Architecture (JPA)
Part IV - WTP Uncovered: XML and XSL Tools
Part V - WTP Uncovered: Web Services
Part VI - WTP Uncovered: Building your own tools

The History

Eclipse users were building Web applications long before the WTP project. Of course Eclipse did not support them very well; launching Web servers and source editors for Web pages were among the top feature requests [1]. There were other Eclipse-based solutions, such as the open source Lomboz project and the Eclipse-based WebSphere Studio Application Developer product from IBM.
The WTP project in many ways served as a first experience for Eclipse in launching new platforms and, starting from a genuine need, creating new communities and a common platform. WTP is an example of how the open source process can lead to a wonderful outcome. Compared to Visual Studio, Java Web development was very fragmented, and these initial beginnings led to a few informal meetings at the very first EclipseCon in 2004. We (the committers of the very popular open source Lomboz project) met with the IBM team that was building WSAD, and with other groups who had various interests in making this project a reality. WTP, which formally began in spring 2003 as a proposal to Eclipse.org, gathered a community. There were contributions of a core set of plug-ins from WebSphere Studio Application Developer and the Lomboz project. WTP has irreversibly changed the fragmented Web application development and tools space. Especially with WTP support for Java Web application development, Eclipse achieved a good level of maturity and success for Web development. Today WTP is the reason that many major Java EE application server vendors adopt Eclipse as their primary IDE platform.

Project Scope

As the name implies, the scope of WTP is Web application development. However, Web application development is too broad, since there are many competing development technologies, including the major platforms: Java EE, .NET, and Linux-Apache-MySQL-Perl/PHP/Python (LAMP). The scope of WTP is inclusive, but the tools are limited to fundamental Web standards and Java EE-based Web application development. This scope includes the common underlying standard Web technologies such as HTML, XML, and Web services. WTP has a rich set of tools and APIs, and not surprisingly it is very large. The project charter is inclusive of a wide array of standard and de jure Web technologies (see Figure 1).

Figure 1

WTP initially started with two subprojects: Web and Java Standard Tools.
These were later refactored into architecturally significant, smaller and more focused components (see Table 1). We also needed a place for new technologies to grow; the WTP incubator was created to provide room for experimentation and to make it easier to start working on new tools and technologies with the least amount of bureaucracy.

Table 1: WTP Projects

Developing Web Applications with WTP

To develop Java Web applications you need to extend Eclipse in at least two dimensions: being able to create and edit development artifacts such as XML, HTML, servlets and JSPs, and being able to run or debug these artifacts using server runtime environments. The simplest server runtime environment is a Web server; on the Java EE side there are application servers with Web containers, such as Jetty and Apache Tomcat. More advanced servers have containers that support Enterprise JavaBeans (EJB), Web Services and more. WTP seamlessly extends Eclipse to provide the capabilities needed by Web developers. Source editing functions such as code completion, syntax coloring, refactoring, error highlighting, and quick fixes all have direct analogs for Web artifacts. In the following section we will show how to create and execute a basic Java Web application that publishes WTP news as an RSS feed. RSS is a standardized [3] XML format used to publish things such as blog entries, news headlines, audio, and video. Later we will enhance this application with JSF, persistence and service components. Let us start with the following steps:

- Prepare the development environment
- Add a server runtime
- Create a Dynamic (Java) Web Application project
- Create and edit a simple XSD for RSS news feeds
- Map the XSD to Java using JAXB
- Use a JSP to serve the news as an RSS feed
- Run the JSP

Development Environment

To follow this article you will need to set up a development environment that contains Eclipse with WTP, an application server such as Apache Tomcat, and a full Java version 6+ JDK.
We need a recent JDK because we will use some features, such as Java XML Binding (JAXB), that are included with this version. The easiest way to obtain WTP is to get an "Eclipse IDE for Java EE Developers" from the eclipse downloads area [2]. You can obtain Apache Tomcat separately, or WTP can assist you with the download and installation when you add the server runtime environment. We have used the Eclipse Ganymede SR1 release, Apache Tomcat v6.0.18, and Sun JDK v1.6.0.07. You should be able to complete your installation very easily by unzipping the archives to a suitable location on your disk.

Adding a server runtime environment: After launching Eclipse, you can invoke the Window > Preferences command from the menu bar to open the Preferences dialog. Expand the Server preferences category and select the Installed Runtimes page (see Figure 2). Initially there will be no server runtime environment definitions. You can click the Add... button to add a new server runtime. There are server runtime definitions provided with WTP as well as others that can be downloaded from the server vendors.

Figure 2: Server Runtime Environment

Creating a Dynamic Web Project

An Eclipse workspace contains a set of projects, typically projects that are related to each other. Each project has a set of builders that give the project its intelligence and know how to process the artifacts in the project. WTP adds the Dynamic Web Project type, which provides builders for Java Web applications. For example, it knows how to package the artifacts in Java EE Web modules so that they can be deployed to application servers. You can create a new Dynamic Web project by invoking the File > New > Project menu command to open the New Project wizard and selecting Dynamic Web Project as the project type (see Figure 3). This wizard presents the parameters you can change to create a project. Obviously a project name is needed, as well as the server runtime to use for the project.
In this case Apache Tomcat must be selected as the Target Runtime. The Configurations field lets you select a predefined configuration of project facets. Once the project is created you will have a workspace that looks like the project in Figure 3. We are ready to build and run our application.

Figure 3: Creating a Dynamic Web Project

Creating a simple XSD for RSS

XML Schema Definition (XSD) is the W3C Recommendation for describing the format, or schema, of XML documents, and is the preferred schema description language for use with Web services. WTP has a powerful XSD editor that includes both a source and a graphical view, as well as an outline view and property sheets that greatly simplify the editing task. We will design a simple XSD for RSS to create and publish news items. Our simplified news feed will look like the following XML:

<?xml version="1.0" encoding="UTF-8"?>
<rss>
  <channel>
    <title>Eclipse Web Tools Platform Project: News</title>
    <link/>
    <description>This RSS feed contains the latest news from the Eclipse Web Tools Platform (WTP) project. The Eclipse WTP project contains Web tools and frameworks for the Eclipse platform.</description>
    <item>
      <title>WTP 3.1 M4 Declared!</title>
      <link/>
      <description>The fourth development milestone for WTP 3.1 has been declared. Check out what's New and Noteworthy and download it now!</description>
    </item>
    <item>
      <title>WTP 3.0.3 Released!</title>
      <link/>
      <description>WTP 3.0.3 is now available. Release 3.0.3 is a scheduled maintenance release to fix serious defects present in the prior 3.0 releases, as well as to improve on their stability and performance. Roughly 170 fixes have been added since the 3.0.2 release. The next scheduled maintenance release, 3.0.4, will be available in February as part of Ganymede SR2.</description>
    </item>
  </channel>
</rss>

Now you can create a new XML Schema file named rss.xsd. In general, there are many equivalent ways to describe a given format using XSD. It is a good practice to describe formats in a way that works well with XML data binding toolkits such as JAXB.
After you open the file with the XSD editor you can define complex types for the RSS content model of each element. The XSD editor lets you edit in the source tab, the graphical tab, the outline view, and the property view (see Figure 4).

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="rss" type="Rss"/>
  <xs:complexType name="Rss">
    <xs:sequence>
      <xs:element name="channel" type="RssChannel"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="RssChannel">
    <xs:sequence>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="link" type="xs:anyURI"/>
      <xs:element name="description" type="xs:string"/>
      <xs:element name="item" type="RssItem" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="RssItem">
    <xs:sequence>
      <xs:element name="title" type="xs:string"/>
      <xs:element name="link" type="xs:anyURI"/>
      <xs:element name="description" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

Fig. 4: XSD Editor

JAXB Mapping

The Java Architecture for XML Binding (JAXB) [4] allows us to map XML to Java classes. The binding compiler, xjc, is used to generate Java classes from an XML Schema; we run xjc over our schema from an ant script. The Java classes generated with JAXB can be used to marshal and unmarshal the XML document. JAXB creates these classes and annotates them for XML mappings:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "RssItem", propOrder = { "title", "link", "description" })
public class RssItem {

    @XmlElement(required = true)
    protected String title;

    @XmlElement(required = true)
    @XmlSchemaType(name = "anyURI")
    protected String link;

    @XmlElement(required = true)
    protected String description;
}

Create and Edit a JSP

The WebContent folder in our project is the root of the Web application and is where the normal Web content, such as HTML pages, JSPs, and images, goes. Now we will add a JSP file named rss.jsp to the project to produce the news feed. Use the File > New > JSP command to open the New JavaServer Page wizard. Give the new file the name rss.jsp. The wizard lets you pick a template for the new JSP. Select any template for a JSP with HTML markup, and click the Finish button. The wizard creates the JSP file with the content filled in from the template and opens it in the JSP editor. The JSP editor provides full content assist on HTML tags, JSP tags, and Java code scriptlets.
Replace this content with the following:

<%@ page language="java" contentType="text/xml; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ page import="org.eclipse.wtp.news.rss.Rss"%>
<%@ page import="org.eclipse.wtp.news.NewsService"%>
<%
  Rss rss = NewsService.getRssNewsFeed();
  NewsService.printRss(out, rss);
%>

The NewsService is a Java utility class that is used to create the Rss object and marshal it to the output stream. The getRssNewsFeed method reads the Rss from a sample XML file. The printRss method writes the Rss object to the output stream as an XML document. At this point this may look like a redundant operation: why not just access the XML document directly? This is just a quick way to test our JSP. Later we will build a Web interface to enter news items and read the RSS feed from a database.

public class NewsService {

    public static Rss getRssNewsFeed() throws JAXBException {
        JAXBContext context = JAXBContext.newInstance("org.eclipse.wtp.news.rss");
        Unmarshaller unmarshaller = context.createUnmarshaller();
        JAXBElement<Rss> rssElement = (JAXBElement<Rss>) unmarshaller
                .unmarshal(Rss.class.getResourceAsStream("/simple-rss.xml"));
        return rssElement.getValue();
    }

    public static void printRss(Writer out, Rss rss) throws JAXBException {
        JAXBContext context = JAXBContext.newInstance("org.eclipse.wtp.news.rss");
        ObjectFactory factory = new ObjectFactory();
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(factory.createRss(rss), out);
    }
}

Run the JSP on the Server

WTP extends the Run As command to Web artifacts such as HTML and JSP files. Simply select the file and invoke the Run As > Run on Server command from the context menu. The user interface of a Web application is hosted in a Web browser. To run our JSP file, select rss.jsp and invoke the Run As > Run on Server command from the context menu.
Since this is the first time you have tried to run any artifact from the Dynamic Web Project, WTP will prompt you to define a new server, defaulting to the server runtime environment for Apache Tomcat, which was previously associated with the project. It will also ask us to add the project to the server's configuration (see Figure 5). In WTP a server consists of both a server runtime environment and configuration information, such as the port numbers to use and the set of projects to deploy or publish on it. A project may be deployed on several servers, which is handy when you are testing a Web application for portability to different vendors.

Fig. 5: Run on Server

Click the Finish button to confirm that you want WTP to add the module to the server configuration. WTP then starts the server and opens a Web browser with the Uniform Resource Locator (URL) of the JSP file (see Figure 6).

Fig. 6: WTP News Feed

Conclusion

This completes the first section of the WTP Uncovered series. In the upcoming articles we will discover more of WTP. We will show how you can use the JSF tools to build Web user interfaces, use the Dali JPA tooling to add persistence to applications, and use Web Services to build universally accessible services.
http://jaxenter.com/eclipse-web-tools-platform-uncovered-10022.html
Description of problem: When I want to deploy Pagure using the RPM from the distribution, I have to alter /usr/share/pagure/pagure.wsgi. This is bad, as the change is overwritten on the next 'dnf upgrade'. The changes needed to run the server should be done only in the config, and everything in /usr/share/ should be left untouched.

Version-Release number of selected component (if applicable): pagure-1.0.1-1.fc23.noarch

I have always approached the wsgi file provided as an example file rather than something functional, but I see the issue. There are two ways to go about this:

- Mark the two wsgi files as doc and don't install them in /usr/share, but then the apache configuration is pointing to non-existing files
- Keep things as they are and tag the two wsgi files with ``%config(noreplace)``

Thoughts?

The content

import __main__
__main__.__requires__ = ['SQLAlchemy >= 0.8', 'jinja2 >= 2.4']
import pkg_resources

import os
os.environ['PAGURE_CONFIG'] = '/etc/pagure/pagure.cfg'
os.environ['TEMP'] = '/var/tmp/'

from pagure import APP as application

is perfectly sane for production. I see no reason why not uncomment it and use it. Just keep the part about debug or a different path commented out for users who want to play with it. But the above should be a good start for everybody, and does not need to be altered.

I had noticed this when I was looking at doing a setup using the Fedora package. There are a few defaults in the sample config files that don't match the Fedora install that would be nice to change. The WSGI case is worse, because if you just use the standard file it gets replaced at update. But altogether it makes setting up a play instance harder than it really should be. One other notable case is having the username default to git rather than gitolite3. If we really want to use the simpler git by default we should set that up on the install. But for a default install, which is likely to just be used for testing, changing the default config and docs to use gitolite3 might be better.
I thought there was some stuff about the sample apache confs that also bugged me, but I don't remember right now. I have limited time, so would probably move slowly with any of this.

> I see no reason why not uncomment it and use it.

Well, one reason is that I do not want pagure to be "functional" upon install, since it cannot be, as the DB needs to be created and all. So I provide both the apache config file and the wsgi commented out, but maybe the wsgi file could be un-commented indeed.

Fix proposed upstream at:
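For the ``%config(noreplace)`` option discussed earlier, the change amounts to one line in the spec's %files section; a sketch (the path follows the report, the surrounding spec is not shown here):

```spec
%files
%config(noreplace) %{_datadir}/pagure/pagure.wsgi
```

With that tag, a locally modified pagure.wsgi survives 'dnf upgrade': RPM installs the new version as pagure.wsgi.rpmnew instead of overwriting the edited file.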
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1353862
Hallo,

When I add the following function to my program it crashes. I don't even need to use the function to make it crash. It compiles fine. Any ideas on what I am doing wrong?

Code:
std::string intToString(int n)
{
    std::stringstream out;
    out << n;
    std::string a = out.str();
    return a;
}

Here is the error message I get:

Quote:
Unhandled exception at 0x004898a6 in HelloWorld.exe: 0xC0000005: Access violation reading location 0xcccccca4.

And it points me to this code, which is found in xiosbase at line 372:

Code:
fmtflags __CLR_OR_THIS_CALL flags() const
{   // return format flags
    return (_Fmtfl);
}

Thanks
http://cboard.cprogramming.com/cplusplus-programming/100694-problem-stringstream-printable-thread.html
InstantRDF for Umbraco

(Tool description last modified on 2014-04-28.)

Description

This package does the following for an Umbraco web site:

- Exports the document types structure as an ontology to an RDF graph.
- Exports the published content nodes as resources to a separate RDF graph.
- Makes all the URIs (IRIs) contained in the above dereferenceable. The ontology IRIs get dereferenced to a page listing the entire ontology, and the rest of the resources to pages displaying the resources they are linked to.
- Sets up a SPARQL endpoint on the Umbraco web site with a file-based triple store. Third-party triple stores can also be used. This has been tested with Virtuoso.
- The generated datastore can be linked to other datastores by using Umbraco tags.
- The produced ontology can be further standardised via the declaration of equivalences between the auto-generated property predicates and predicates declared in standard namespaces (e.g. SKOS or Dublin Core).
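Once the graphs are exported, the SPARQL endpoint can be queried like any other. A minimal example query; the use of owl:Class and rdfs:label here is an assumption for illustration, since the exact vocabulary the package generates is not shown above:

```sparql
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# List the ontology classes generated from the Umbraco document types
SELECT ?class ?label
WHERE {
  ?class a owl:Class .
  OPTIONAL { ?class rdfs:label ?label }
}
```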
http://www.w3.org/2001/sw/wiki/index.php?title=InstantRDF_for_Umbraco&oldid=4558
I already made a thread about the issue on stackoverflow.com, so I'll just leave a brief explanation here; if you are interested in more details please head over there. The issue I am facing is that all my attempts to load an external SVG file with internal CSS styles (all the styles are stored in the external file inside a <style> block at the top) result in black and white graphics, as if the styles are not applied at all. I've tried literally everything I could come up with on the web in the last couple of days without success. Here is once again a link to a DEMO which contains combined code of the external SVG file and the code that tries to load the symbols through <use> (please note that the external file already has all the required namespaces defined).

Chris Coyier (Keymaster):

I think the issue is that when you reference a symbol, you're just referencing that chunk and that's it.

<svg>
  <style></style>
  <symbol id="this"></symbol>
</svg>

This will go get the symbol, but not the styles:

<svg>
  <use xlink:href="#this"></use>
</svg>

You might try putting the style block inside the symbol. I've never tried that but it stands to reason it might work. Or, move the style block to the document where you are using <use>.

I tried embedding the styles into the symbols and many other variations, but ultimately all my attempts failed. After I'd spent a decent amount of time trying to figure out the issue I came to the following conclusions (they are backed up by multiple sources, although none authoritative):

1) If an external SVG file contains styles which are put inside a separate <style> block and the content of that file is then reused with <use>, browsers are simply going to ignore anything that is inside the <style> block of that external SVG, completely regardless of how the <style> block is positioned.
2) Putting the code from an external SVG file containing a separate <style> block directly into an HTML file and then reusing it with <use> from within the same file works flawlessly.

3) [bonus point] Trying to <use> any kind of graphics from an external SVG file which contains gradients will result in either unpredictable behavior or the gradients being completely ignored by browsers (again this seems to be the result of a specification "gray area" and browser inconsistency).

4) All the issues regarding the reuse of resources from external SVG files are present no matter how the files are loaded (directly from HTML, AJAX etc.)

What I've learned from this small adventure is that despite SVG showing huge potential, it is still really inconsistent and painful to use unless it's used for simple things like monochromatic icons (SVG icon systems) or the content of the SVG is put directly into the HTML file using it.
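For completeness, a minimal self-contained page matching conclusion (2): the symbol, its <style> block and the <use> all live inline in the same HTML file, so the styles are applied. The ids, class names and colors here are made up for the sketch:

```html
<!DOCTYPE html>
<html>
<body>
  <!-- "external file" content pasted inline: the style block IS honored here -->
  <svg xmlns="http://www.w3.org/2000/svg" style="display:none">
    <style>.dot { fill: crimson; stroke: black; stroke-width: 2; }</style>
    <symbol id="dot" viewBox="0 0 40 40">
      <circle class="dot" cx="20" cy="20" r="15"/>
    </symbol>
  </svg>
  <!-- reuse: renders styled, not black and white -->
  <svg width="40" height="40"><use xlink:href="#dot"/></svg>
</body>
</html>
```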
https://css-tricks.com/forums/topic/external-svg-fails-to-apply-internal-css/
Quiz: What managed code runs before managed Main() in your program startup path?

Answers: I note "managed code" because obviously the CLR startup code gets executed, as does other native startup code.

1) The common answer is static constructors referenced from Main().

2) The less common answer would be managed code in the CLR's startup path. While much of the CLR is implemented in native code (mscorwks.dll), we try to migrate parts of the CLR itself into managed code and into the BCL (mscorlib.dll). You can verify this in MDbg. MDbg normally tries to identify the Main() method, set a breakpoint there and stop there. However, the 'ca nt' command tells MDbg to stop when a thread first enters managed code. Here's an MDbg transcript:

C:\temp>mdbg
MDbg (Managed debugger) v2.0.50727.42 (RTM.050727-4200) started.
For information about commands type "help"; to exit program type "quit".
mdbg> ca nt                 <-- stop when a thread first hits managed code
mdbg> run x.exe
STOP: Thread Created
IP: 0 @ System.Security.PermissionSet..cctor - MAPPING_PROLOG [p#:0, t#:0]
mdbg> where
Thread [#:0]
*0. System.Security.PermissionSet..cctor (source line information unavailable) [p#:0, t#:0]
mdbg> go
STOP: Breakpoint Hit 22: { [p#:0, t#:0]
mdbg> where
Thread [#:0]
*0. Program.Main (x.cs:22) [p#:0, t#:0]
mdbg>

So in this case, you can see that the static constructor (.cctor) of System.Security.PermissionSet is being run before Main(). When we continue past that, we hit the breakpoint that MDbg set on the Main() method.

3) And the bonus answer is: any code that the compiler injects before the call to Main(). The user's Main() method is not necessarily the real entry point for a module. That's actually why managed PDBs specify a "user entry point" method (see the <EntryPoint> tag in ). The "real" entry point for the CLR has ".entrypoint" in the IL. The user's entry point is specified in the PDB. E.g., in the ILDasm for the main method:

.method private hidebysig static void Main() cil managed
{
    .entrypoint
    ...

Why?
This gives languages additional flexibility to uphold language semantics and do setup before the user's Main() method is called.

4) And a very obscure answer: failure code, such as exception constructors. (I alluded to this here.)

At this point, it should be clear that there's no reason that Main() has to be the first code that runs. So we could brainstorm other strange cases.

A real example with MC++

C# is so clean that it's not an ideal language for demonstrating wonky features of the CLR. So let's turn our attention to MC++. Compile the following MC++ app with /clr:pure (so it's 100% IL and no suspicious native stuff):

// mcpp_console.cpp : main project file.
#include "stdafx.h"

using namespace System;

int main(array<System::String ^> ^args)
{
    Console::WriteLine(L"Hello World");
    return 0;
}

F10 into main, and you can see that Main() is not the first managed code on the stack:

> mcpp_console.exe!main + 0x15 bytes  C++
  mcpp_console.exe!mainCRTStartupStrArray + 0xb8 bytes  C++

You can verify that mainCRTStartupStrArray in this case is indeed managed code. This example should make it clear that the managed case of 'code running before main' can inherit many of the properties that the native case had.

A bonus experiment

Here's a bonus experiment. Take a trivial C# app:

// Test
using System;

class Foo
{
    static void Main()
    {
        Console.WriteLine("Main");
    }
    static void Test()
    {
        Console.WriteLine("Test");
    }
}

Compile and run it, and it prints "Main".

C:\temp>csc t.cs & t.exe
Microsoft (R) Visual C# 2005 Compiler version 8.00.50727.1378
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
Main

Now use ILAsm/ILDasm round-tripping to move the .entrypoint to Test.

C:\temp>ildasm t.exe /out=t.il

Edit t.il to move .entrypoint from 'Main' to 'Test'.

C:\temp>ilasm t.il
Microsoft (R) .NET Framework IL Assembler.
Version 2.0.50727.1378
Assembling 't.il' to EXE --> 't.exe'
Source file is ANSI
Assembled method Foo::Main
Assembled method Foo::Test
Assembled method Foo::.ctor
Creating PE file
Emitting classes:
Class 1: Foo
Emitting fields and methods:
Global
Class 1 Methods: 3;
Emitting events and properties:
Global
Class 1
Writing PE file
Operation completed successfully

And run it. You can see you've changed the entry point and Main() doesn't execute now.

C:\temp>t.exe
Test

For Nr 2 (managed code in the CLR's startup path) I cannot get the point. In theory (that the CLR injects some managed code) it sounds ok. Maybe just the example is misleading, because System.Security.PermissionSet..cctor is just the class constructor (static constructor) of PermissionSet. So I do not see a difference from Nr 1. Or did you mean that the cctor is called without a _direct_ reference from Main()? If yes: where is this call issued? Is it hardcoded within the (unmanaged) CLR code? Or is it caused by the initialization of the AppDomain instance?

GP - #2 as a cctor is less distinct than #1. In previous CLR versions, it used to be something like "AppDomain::Setup". Regardless, it's still issued without a direct reference by Main(). These sorts of calls can occur from the CLR.

MC++? Is that really C++/CLI?

Yuhong - essentially yes. See for more.
https://blogs.msdn.microsoft.com/jmstall/2007/10/14/quiz-what-runs-before-main/
Today I needed to modify a relatively simple, single-table selection query in my web app to include a left join with another table. One table holds info about submitted reports, while the other has the results of an evaluation form for a given report. The whole application started as just a very basic online evaluation form. Then I got a request for some sort of online submission mechanism. Initially they just wanted an upload page, but I built in a tracking system to go along with it. They liked it so much that they have now decided to join the two. I needed to show which reports were evaluated and which were not. The way my tables are structured, this is essentially a basic left join. Each table has almost a thousand rows in it, so I was expecting a major slowdown. No matter what you do, the Cartesian product necessary to do a join just kills performance. I ran a test query to see exactly how painful this was going to be:

SELECT report.id, report.name, eval.id, eval.reportid
FROM report
LEFT JOIN eval ON report.id = eval.reportid
Firefox users don’t even notice this issue because gecko renders the page incrementally. Any good ideas how to optimize displaying large HTML tables in IE? I was thinking about just using span elements (or some sort of XML makup) and css to space out text on the page. It could potentially render a little bit faster than a table. Worse comes to worse, I’ll just start generating tab delimited output surrounded by <pre> tags. I’m currently working on a feature that will dump a report into an Excel file so that they can play around with the numbers. This could be another potential solution. Try AJAX! You can create a page that returns only some results (say 20 each page), and build a navigation bar with AJAX without having to post back the request to the server. Users seem to like this kind of feature and since the page won’t reload, everything seems faster. All you have to do is build an URL that returns XML with a set of the results and parse it in Javascript. Jorge Regarding that Excel report you are working on, you could try SpreadSheetML. I’ve been working with it lately and it’s pretty cool :-)!!!! All you have to do is to build an XML using that schema/namespace, set the contentype right and you’re ok to go. You can find some info on SpreadsheetML here. Jorge I took out that long URL from you post and made it into a link – sorry. It was breaking the layout. I need to conjure up get a line-breaking script like the slashdot lameness filter for the comments section one of these days. Thanks for the SpreadsheetML link. I will definitely look into it. I was actually using a PHP class (sourceforge is your friend) to generate the files, but this may be a good alternative. The AJAX thing is a good idea – thanks. But then again, I know they like to print out the big tables so I still want to give them the ability to have all the information loaded on the page at once. 
All tables in a database should really have either a primary key or an index on one of the columns – it's good practice and, as you found out, *really* speeds things up! You could put a disclaimer on the page saying IE is bad and should not be used :P

All my tables have primary keys, so that's not the problem. :mrgreen: I just want to optimize the queries that do not rely on them.

i found this link on google, thanks nice tuts
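The index fix described in the post can be reproduced end to end with SQLite's query planner. This is a different engine than the MySQL in the post, but the same principle applies; the table layout mirrors the post, and the index name is my own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE report (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE eval   (id INTEGER PRIMARY KEY, reportid INTEGER);
""")

query = """SELECT report.id, report.name, eval.id, eval.reportid
           FROM report LEFT JOIN eval ON report.id = eval.reportid"""

def plan(sql):
    # EXPLAIN QUERY PLAN reveals whether the join scans eval or probes an index
    return " | ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

without_index = plan(query)
conn.execute("CREATE INDEX idx_eval_reportid ON eval(reportid)")
with_index = plan(query)  # the eval side of the join should now use the index
```

The same habit works in MySQL (`EXPLAIN SELECT ...`) and is the quickest way to confirm that an `ALTER TABLE ... ADD INDEX` actually changed the query plan.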
A Swift DSL for type-safe, extensible, and transformable HTML documents. The popular choice for rendering HTML in Swift these days is to use templating languages, but they expose your application to runtime errors and invalid HTML. Our library prevents these runtime issues at compile-time by embedding HTML directly into Swift's powerful type system.

HTML documents can be created in a tree-like fashion, much like you might create a nested JSON document:

import Html

let document: Node = .document(
  .html(
    .body(
      .h1("Welcome!"),
      .p("You've found our site!")
    )
  )
)

Underneath the hood these tag functions html, body, h1, etc., are just creating and nesting instances of a Node type, which is a simple Swift enum. Because Node is just a simple Swift type, we can transform it in all kinds of interesting ways. For a silly example, what if we wanted to remove all instances of exclamation marks from our document?

func unexclaim(_ node: Node) -> Node {
  switch node {
  case .comment:
    // Don't need to transform HTML comments
    return node
  case .doctype:
    // Don't need to transform doctypes
    return node
  case let .element(tag, attrs, children):
    // Recursively transform all of the children of an element
    return .element(tag, attrs, unexclaim(children))
  case let .fragment(children):
    // Recursively transform all of the children of a fragment
    return .fragment(children.map(unexclaim))
  case let .raw(string):
    // Transform raw nodes by replacing exclamation marks with periods.
    return .raw(string.replacingOccurrences(of: "!", with: "."))
  case let .text(string):
    // Transform text nodes by replacing exclamation marks with periods.
    return .text(string.replacingOccurrences(of: "!", with: "."))
  }
}

unexclaim(document)

Once your document is created you can render it using the render function:

render(document)
// <!doctype html><html><body><h1>Welcome!</h1><p>You've found our site!</p></body></html>

And of course you can first run the document through the unexclaim transformation, and then render it:

render(unexclaim(document))
// <!doctype html><html><body><h1>Welcome.</h1><p>You've found our site.</p></body></html>

Now the document is very stern and serious 😂.

Because we are embedding our DSL in Swift we can take advantage of some advanced Swift features to add an extra layer of safety when constructing HTML documents. For a simple example, we can strengthen many HTML APIs to force their true types rather than just relying on strings:

let imgTag = Node.img(attributes: [.src("cat.jpg"), .width(400), .height(300)])

render(imgTag)
// <img src="cat.jpg" width="400" height="300">

Here the src attribute takes a string, but width and height take integers, as it's invalid to put anything else in those attributes. For a more advanced example, <li> tags can only be placed inside <ol> and <ul> tags, and we can represent this fact so that it's impossible to construct an invalid document:

let listTag = Node.ul(
  .li("Cat"),
  .li("Dog"),
  .li("Rabbit")
)
// ✅ Compiles!

render(listTag)
// <ul><li>Cat</li><li>Dog</li><li>Rabbit</li></ul>

Node.div(
  .li("Cat"),
  .li("Dog"),
  .li("Rabbit")
)
// 🛑 Compile error

The core of the library is a single enum with 6 cases:

public enum Node {
  case comment(String)
  case doctype(String)
  indirect case element(String, [(key: String, value: String?)], Node)
  indirect case fragment([Node])
  case raw(String)
  case text(String)
}

This type allows you to express every HTML document that can ever exist.
However, using this type directly can be a little unwieldy, so we provide a bunch of helper functions for constructing every element and attribute from the entire HTML spec in a type-safe manner:

// Not using helper functions
Node.element("html", [], [
  .element("body", [], [
    .element("p", [], [.text("You've found our site!")])
  ])
])

// versus

// Using helper functions
Node.html(
  .body(
    .h1("Welcome!"),
    .p("You've found our site!")
  )
)

This makes the "Swiftification" of an HTML document look very similar to the original document.

Yes! We even provide plug-in libraries that reduce the friction of using this library with Kitura and Vapor. Find out more information at the following repos:

Templating languages are popular and easy to get started with, but they have many drawbacks:

Stringy APIs: Templating languages are always stringly typed because you provide your template as a big ole string, and then at runtime the values are interpolated and logic is executed. This means things we take for granted in Swift, like the compiler catching typos and type mismatches, will go unnoticed until you run the code.

Incomplete language: Templating languages are just that: programming languages. That means you should expect from these languages all of the niceties you get from other fully-fledged languages like Swift. That includes syntax highlighting, IDE autocompletion, static analysis, refactoring tools, breakpoints, debuggers, and a whole slew of features that make Swift powerful, like let-bindings, conditionals, loops and more. However, the reality is that no templating language supports all of these features.

Rigid: Templating languages are rigid in that they do not allow the types of compositions and transformations we are used to performing on data structures in Swift. It is not possible to succinctly traverse over the documents you build, and inspect or transform the nodes you visit.
This capability has many applications, such as being able to pretty print or minify your HTML output, or writing a transformation that allows you to inline a CSS stylesheet into an HTML node. There are entire worlds closed off to you due to how templating languages work. The DSL in this library fixes all of these problems, and opens up doors that are completely closed to templating languages.

There are a few reasons you might want to still use a templating language:

A designer delivers a large HTML document to you and all you want to do is hook in a little bit of value interpolation or logic. In this case you can simply copy and paste that HTML into your template, add a few interpolation tokens, and you're well on your way to having a full page served from your web application.

You need to render non-HTML documents. The beauty of templating languages is that they output straight to plain text, and so they can model any type of document, whether it be HTML, markdown, XML, RSS, ATOM, LaTeX, and more.

Creating very large documents in a single expression can cause compile times to go up, whereas templates are not compiled by Swift and so do not influence compile times. Luckily this isn't a problem too often, because it is very easy to break up a document into as many small pieces as you want, which will probably lead to more reusable code in the long run.

If you do decide that a templating language better suits your needs, then you should consider HypertextLiteral, which gives you template-like capabilities but in a safer manner.

You can add swift-html to an Xcode project by adding it as a package dependency.
If you want to use swift-html in a SwiftPM project, it's as simple as adding it to a dependencies clause in your Package.swift:

dependencies: [
  .package(url: "", from: "0.4.0")
]

These concepts (and more) are explored thoroughly in a series of episodes on Point-Free, a video series exploring functional programming and Swift hosted by Brandon Williams and Stephen Celis. The ideas for this library were explored in the following episodes.

All modules are released under the MIT license. See LICENSE for details.

Release notes:
- viewport-fit (thanks @hallee).
- accept attribute (thanks @xavierLowmiller).
- _xmlRender. It is prefixed with an underscore for now. It renders valid XML and avoids rendering "void" (non-closing) HTML tags.
- Renamed to swift-html to match Apple conventions.
- srcset is now rendered in a stable order.
SVG::Rasterize - rasterize SVG content to pixel graphics

Version 0.003008

my $blockID = 0;
my @block_atoms = grep { $_->{blockID} == $blockID } @$text_atoms;
while(@block_atoms) {
    $blockID++;
    @block_atoms = grep { $_->{blockID} == $blockID } @$text_atoms;
}

The following elements are drawn at the moment: path, rect, circle, ellipse, line, polyline, polygon. Not yet supported are, for example, external SVG files, tref and such.

DOM object to render. Holds the DOM object to render. It does not have to be an SVG object, but it has to offer certain DOM methods (see SVG Input for details).

The width of the generated output in pixels.

The height of the generated output in pixels.

The node to render does not have to be an SVG element, e.g. <svg> or <g>; it can even be just a basic shape element or so.

For stroke and fill properties specified as currentColor, this temporarily overrides the current_color attribute.

See the engine_class attribute. See SVG::Rasterize::Engine for details on the interface. The value has to match the regular expression p_PACKAGE_NAME (see below).

The hierarchy of defaults for the output width is, in order: engine_args->{width} given to rasterize, $rasterize->engine_args->{width}, width given to rasterize, $rasterize->width, and finally the width of the SVG object.

SVG Input: In principle, SVG input could be present as a kind of XML tree object or as a stringified XML document. Therefore SVG::Rasterize might eventually offer the following options: an XML parser offering a DOM interface, or XML data in a file.

Pixels per inch. Defaults to 90.

Alias for px_per_in. This is realized via a typeglob copy: *dpi = \&px_per_in

Inches per centimeter. Defaults to 1/2.54. This is the internationally defined value. I do not see why I should prohibit a change, but it would hardly make sense.

Inches per millimeter. Defaults to 1/25.4. This is the internationally defined value. I do not see why I should prohibit a change, but it would hardly make sense.

Inches per point. Defaults to 1/72. According to [1], this default was introduced by the Postscript language. There are other definitions. However, the CSS specification is quite firm about it.

Inches per pica. Defaults to 1/6.
According to the CSS specification, 12pt equal 1pc.

$number = $rasterize->map_abs_length($length)
$number = $rasterize->map_abs_length($number, $unit)

This method takes a length and returns the corresponding value in px according to the conversion rates above.

Defaults to 90.

Alias for PX_PER_IN. This is realized via a typeglob copy: *DPI = \$PX_PER_IN

Defaults to 1/2.54.

Defaults to 1/25.4.

Defaults to 1/72.

Defaults to 1/6.

The hook is expected to return a hash of the same form, which will then be handed over to the SVG::Rasterize::State constructor. The default before_node_hook just passes its input through unchanged.

Executed right after creation of the SVG::Rasterize::State object. The attributes have been parsed, properties and matrix have been set, etc. The method receives the SVG::Rasterize object and the SVG::Rasterize::State object as parameters.

Executed right before a SVG::Rasterize::State object runs out of scope because the respective node is done with. The method receives the SVG::Rasterize object and the SVG::Rasterize::State object as parameters.

Executed right before die when the document is in error (see In error below). Receives the SVG::Rasterize object and a newly created SVG::Rasterize::State object as parameters.

Examples:

$rasterize->start_node_hook(sub { ... })

Some hooks have non-trivial defaults. Therefore SVG::Rasterize provides the following methods to restore the default behaviour:

Calls all the other restore... methods. Takes an optional named parameter preserve. If this is set to a true value.

This attribute defaults to SVG::Rasterize::Engine::PangoCairo. It can be set as an object attribute or temporarily as a parameter to the rasterize method.

This attribute can hold a HASH reference. The corresponding hash is given to the constructor of the rasterization engine when it is called by rasterize. engine_args can be set as an object attribute or temporarily as a parameter to the rasterize method.

Readonly attribute. Holds the current SVG::Rasterize::State object during tree traversal.
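Putting the documented defaults together (90 px per inch, 1/2.54 in per cm, and so on), the conversion that map_abs_length performs can be sketched as follows. This is a hedged Python rendition, not the Perl original; the parsing regex and error handling are my own assumptions:

```python
import re

# Documented defaults from the attributes above
PX_PER_IN = 90
IN_PER_UNIT = {
    "in": 1.0,
    "cm": 1 / 2.54,
    "mm": 1 / 25.4,
    "pt": 1 / 72,   # PostScript/CSS point
    "pc": 1 / 6,    # 1 pica = 12 points
}

def map_abs_length(length):
    """Map an absolute length such as '2.54cm' or '72pt' to pixels."""
    m = re.fullmatch(r"\s*([+-]?(?:\d+\.?\d*|\.\d+))\s*([a-z]*)\s*", str(length))
    if not m:
        raise ValueError(f"cannot parse length {length!r}")
    number, unit = float(m.group(1)), m.group(2)
    if unit in ("", "px"):
        # bare numbers and px values are already in user units
        return number
    if unit not in IN_PER_UNIT:
        raise ValueError(f"unknown unit {unit!r}")
    return number * IN_PER_UNIT[unit] * PX_PER_IN
```

As the name suggests, only absolute units are handled here; relative units such as em, ex, and % depend on context that the rasterizer has to resolve elsewhere.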
Not internal because it is used by exception methods to retrieve the current state object (in order to store it in the exception object for debugging purposes).

This value might have been increased to make the ellipse big enough to connect start and end point. If it was negative, the absolute value has been used (so the return value is always positive).

This value might have been increased to make the ellipse big enough to connect start and end point. If it was negative, the absolute value has been used (so the return value is always positive).

This is all if one of the radii is equal to 0. Otherwise, the following additional values are returned (see the SVG specification link above).

According to the SVG specification, a document is "in error" if its content does not conform to the XML 1.0 specification, "such as the use of incorrect XML syntax". SVG::Rasterize currently does not parse SVG files and will therefore not detect such an error. A document is also in error if it contains an element that is not part of the SVG DTD "and which is not properly identified as being part of another namespace". Currently, SVG::Rasterize will also reject elements that are properly identified as being part of another namespace. This is checked for those attributes and properties that are currently supported by SVG::Rasterize. Values that are currently ignored may or may not be checked.

Exceptions: When SVG::Rasterize encounters a problem, it usually throws an exception. The cases where only a warning is issued are rare. This behaviour has several reasons; for one, the SVG specification requires that the rendering stops. The exceptions are thrown in form of objects. See Exception::Class for a detailed description. See below for a description of the classes used in this distribution. All error messages are described in SVG::Rasterize::Exception.

You have given a parameter to the new method which does not have a corresponding method. The parameter is ignored in that case.
The engine_class you were trying to use for rasterization could not be loaded. SVG::Rasterize then tries to use its default backend SVG::Rasterize::Engine::PangoCairo. If that also fails, it gives up.

The width of the output image evaluates to 0. This value is rounded to an integer number of pixels, therefore this warning does not mean that you have provided an explicit number of 0 (it could also have been e.g. 0.005in at a resolution of 90dpi). In this case, nothing is drawn.

Like above.

The version of the underlying C library has to be at least 1.22.4. This is not automatically fulfilled by installing a sufficiently high version of the Perl module because the release cycles are completely decoupled. The rest of what has been said about Cairo above is also true for Pango. Both are loaded by SVG::Rasterize::Engine::PangoCairo, and that is only loaded if no other backend has been specified.

Additionally, testing requires the following modules:

Please report any bugs or feature requests to bug-svg-rasterize at rt.cpan.org, or through the web interface at. I will be notified, and then you will automatically be notified of progress on your bug as I make changes.

Grouping elements are supposed to be rendered on a temporary canvas which is then composited into the background (see). Currently, SVG::Rasterize renders each child element of the grouping element individually. This leads to wrong results if the group has an opacity setting below 1.

The specification at describes how single character transformations in e.g. text elements are supposed to be carried out when there is not a one-to-one mapping between characters and glyphs. Currently, SVG::Rasterize does not abide by these rules. Where values for x, y, dx, dy, or rotate are specified on an individual character basis, the string is broken into parts and rasterized piece by piece.

The relative units em, ex, and % are currently not supported.
Neither are the font-size values smaller and larger, the font-weight values lighter and bolder, and the font-stretch values narrower and wider.

ICC colors: ICC color settings for the fill and stroke properties are understood, but ignored. I do not know enough about color profiles to foresee what support would look like. Unless requested, ICC color profiles will probably not be supported for a long time.

XML names: The XML standard is very inclusive with respect to characters allowed in XML Names and Nmtokens (see). SVG::Rasterize currently only allows the ASCII subset of allowed characters because I do not know how to build efficient regular expressions supporting the huge allowed character class. Most importantly, this restriction affects the id attribute of any element. Apart from that, it affects xml:lang attributes and the target attribute of a elements.

eval BLOCK and $SIG{__DIE__}: Several methods in this distribution use eval BLOCK statements without setting a local $SIG{__DIE__}. Therefore, a $SIG{__DIE__} installed somewhere else can be triggered by these statements. See die and eval in perlfunc and $^S in perlvar.

I do not know much about threads and how to make a module thread safe. No specific measures have been taken to achieve thread safety of this distribution.

This documentation is largely for myself. Read on if you are interested, but this section generally does not contain documentation on the usage of SVG::Rasterize. ;-)

$package_part: qr/[a-zA-Z][a-zA-Z0-9\_]*/

$SVG::Rasterize::Regexes::RE_PACKAGE{p_PACKAGE_NAME}: qr/^$package_part(?:\:\:$package_part)*$/

Matches package names given to methods in this distribution, namely the engine_class parameters.

These attributes and the methods below are just documented for myself. You can read on to satisfy your voyeuristic desires, but be aware that they might change or vanish without notice in a future version.
Current value: %DEFER_RASTERIZATION = (text => 1, textPath => 1); Used by SVG::Rasterize::State to decide if rasterization needs to be deferred. See Deferred Rasterization above.

Current value: %TEXT_ROOT_ELEMENTS = (text => 1, textPath => 1); Text content elements (like tspan) can inherit position information from ancestor elements. However, when they find an element of one of these types they do not have to look further up in the tree.

Expects a HASH reference as parameter. No validation is performed. The entries width, height, and engine_class are used and expected to be valid if present.

Expects two HASH references. The first one contains the node attributes of the respective element. It has to be defined and a HASH reference, but the content is assumed to be unvalidated. The second is expected to be validated. The keys width, height, and matrix are used. Does not return anything.

Called by rasterize. Expects a hash with the rasterization parameters after all substitutions and hierarchies of defaults have been applied. Handles the traversal of an SVG or generic DOM object tree for rasterization.

Called by _traverse_object_tree. Expects a node object and a hash with the rasterization parameters. Performs the following steps:

This uses the getNodeName DOM method. Takes whatever this method returns.

This uses the getAttributes DOM method. The return value is validated as being either undef or a HASH reference. The result is further processed by _process_normalize_attributes. The final result is guaranteed to be a HASH reference.

This uses the getChildNodes DOM method. The return value is validated as being either undef or an ARRAY reference. A copy of the array is made to enable addition or removal of child nodes (by hooks) without affecting the node object. At this time, ignored nodes are filtered out of the list of child nodes.

Returns a list of the following values. The result is not further validated than listed below.
undef if getNodeName on the object returned undef; otherwise, the list of child node objects (as returned by getChildNodes).

Called by rasterize. Expects a hash with the rasterization parameters after all substitutions and hierarchies of defaults have been applied. Handles the SAX parsing of an SVG file for rasterization. This method requires XML::SAX. This module is not a formal dependency of the SVG::Rasterize distribution because I do not want to force users to install it even if they only want to rasterize content that they have created e.g. using the SVG module. This method will raise an exception if XML::SAX cannot be loaded.

Expects a flag (to indicate if normalization is to be performed) and a HASH reference. The second parameter can be false, but if it is true it is expected (without validation) to be a HASH reference. Makes a copy of the hash and returns it after removing (if the flag is true) enclosing white space from each value. Independently of the flag, it processes the style attribute. If this is a HASH reference, it is turned into a string. This means double work, because it is split into a hash again later by State, but it is a design decision that State should not see if the input data came as an object tree or XML string. So this has to be done, and this seemed to be a good place, although this method was not started for something like that (maybe it should be renamed).

Called by font_size_scale to generate the font size scale table based on medium_font_size and font_size_scale. The underlying data structure is designed to support different tables for different font families (as mentioned in the respective CSS specification), but the public accessor methods do not support that yet.

Is called for each node, examines what kind of node it is, and calls the more specific methods.

Pushes the SVG::Rasterize::State object to the rasterization queue during deferred rasterization.

Expects a SVG::Rasterize::State object and optionally a hash of options.
The important option is flush. Possibly sets the queued option. All options are passed on to the downstream method.

Expects a SVG::Rasterize::State object and optionally a hash of options. If the option queued is set to a true value, nothing is done. Expects that $state->node_attributes have been validated. The d attribute is handed over to _split_path_data, which returns a list of instructions to render the path. This is then handed over to the rasterization backend (which has its own expectations).

Expects a path data string. This is expected (without validation) to be defined. Everything else is checked within the method. Returns a list. The first entry is either 1 or 0, indicating if an error has occurred (i.e. if the string is not fully valid). The rest is a list of ARRAY references containing the instructions to draw the path.

Expects a SVG::Rasterize::State object and optionally a hash of options. If the option queued is set to a true value, nothing is done. Expects that $state->node_attributes have been validated. The rest is handed over to the rasterization backend (which has its own expectations).

Same as _process_rect.

Same as _process_rect.

Same as _process_rect.

Same as _process_path.

Same as _process_path.

Expects a SVG::Rasterize::State object and optionally a hash of options. If the option queued is set to a true value, nothing is done. Expects that $state->node_attributes have been validated. Determines text-anchor and the absolute rasterization position for each text atom.

Currently does not do anything. All text processing is done by either _process_text or _process_cdata. Might be deleted in the future.

Expects a SVG::Rasterize::State object and optionally a hash of options. If the option queued is set to a true value, nothing is done. Expects that $state->node_attributes have been validated. Calls the draw_text method of the rasterization engine on each of its atoms in the right order.
This piece of documentation is mainly here to make the POD coverage test happy. SVG::Rasterize overloads make_ro_accessor to make the readonly accessors throw an exception object (of class SVG::Rasterize::Exception::Attribute) instead of just croaking.

Tons of information about what the author calls the "digital space for writing".

This distribution builds heavily on the cairo library and its Perl bindings.
Creating ActionScript 3.0 components in Flash CS3 Professional – Part 5: Styles and skins

Jeff Kamerer, Adobe

Welcome to Part 5 of the article series on creating components using ActionScript 3.0. If you didn't get a chance to read the articles leading up to this part, you might want to begin with Part 1 of the series. On the first page of Part 1 you can download the sample files for the entire series. Or if you prefer, you can download the sample files for Part 5 to use as a reference as you read this part of the series.

The motivation for adding styles to the MenuBar component is to give the component user a way to modify the behavior of the draw() method without subclassing the component and overriding the method. After all, creating a component and overriding the draw() method is pretty complex work for a Flash developer who just wants to update a Button component to be green instead of blue! Styles parameterize the values used when drawing a component; they can affect how the pieces of a component are laid out, how the text is formatted, and which symbols should be created for different states of the component. In this article we'll see how this is accomplished and also provide you with some best practices for working with styles when building components.

Working with advanced FLA structure

If you've been following along with the previous parts of this article series, you'll remember that I placed the TileList and List components on the assets layer of the MenuBar movie clip. If you open the earlier version of the MenuBar movie clip from the full set of sample files and double-click either of those component instances, Flash displays Frame 2. You'll see where each editable, labeled skin symbol is located on the Stage. Furthermore, you can click on the CellRenderer movie clip to edit those skins.
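The idea that styles parameterize draw(), with the component looking up named values at draw time and falling back to shared defaults instead of hard-coding them, can be sketched in a few lines. This is a language-neutral Python sketch with hypothetical names, not the actual fl.core component API:

```python
class Component:
    # Shared, class-level defaults; an instance setStyle overrides these.
    DEFAULT_STYLES = {"backgroundColor": 0x0066CC, "cornerRadius": 4}

    def __init__(self):
        self._instance_styles = {}

    def set_style(self, name, value):
        self._instance_styles[name] = value  # per-instance override

    def get_style(self, name):
        # Instance setting wins; otherwise fall back to the shared default.
        if name in self._instance_styles:
            return self._instance_styles[name]
        return self.DEFAULT_STYLES[name]

    def draw(self):
        # draw() consults styles rather than hard-coded constants, so a user
        # can restyle the component without subclassing and overriding draw().
        return (f"rect(fill=#{self.get_style('backgroundColor'):06X}, "
                f"r={self.get_style('cornerRadius')})")

button = Component()
default_look = button.draw()
button.set_style("backgroundColor", 0x00AA00)  # "green instead of blue"
green_look = button.draw()
```

The real fl.core.UIComponent exposes setStyle()/getStyle() along these lines, with a longer fallback chain behind them.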
For the next part of the component development process, I decided to alter the MenuBar component so that it would have its own editable skin symbols on the Stage, similar to these other components.

Duplicating skin symbols

First, it was necessary to create new skin symbols for the MenuBar component. I did not want the MenuBar component to share the skin assets with the TileList and List components, primarily because I wanted to alter the default look of the MenuBar component myself, but I also wanted the users of the MenuBar component to be able to customize the MenuBar skins separately from the skins of the List and TileList components.

I went into the Component Assets/CellRenderer skin folder and made copies of half of the symbols. I didn't copy the selected skins; they were not needed since I set the selectable property to false. Here are the steps I took to make the duplicates of the skin symbols:

• First, I right-clicked each movie clip and selected Duplicate in the context menu.
• In the Duplicate Symbol dialog box, I changed the symbol name to be prefixed with MenuBar_ instead of CellRenderer_.
• Then after changing the symbol name, I clicked the option to Export for ActionScript, which filled in the Class field with the symbol name.
• I also made sure to uncheck the option to Export on first frame. See the section on the Export on first frame setting for more details about why this setting is important.
• Finally, I clicked OK.

I repeated the same steps for every symbol in the CellRenderer folder again, but this time I changed the prefix of the linkage name to Menu_. Then, I moved all of these duplicate skin symbols into a new folder under Component Assets called MenuBarSkins. Next, I duplicated List_skin from the ListSkins folder, calling it Menu_skin, and duplicated TileList_skin from the TileListSkins folder, calling it MenuBar_skin.
Then I moved both of these duplicate symbols into the MenuBarSkins folder.

9-Slice for skin movie clips

I also edited all of the skin symbols to give them custom designs. I won't go over all the details of how I changed the skins, but I do want to point out one important thing: while I was editing them I removed the black border rectangle from all of the Menu_ and MenuBar_ skin movie clips based on the CellRenderer_ movie clips. Then, I unchecked the option to Enable guides for 9-slice scaling for each movie clip in the Symbol Properties dialog box (see Figure 1).

Figure 1. Deselect the checkbox option to Enable guides for 9-slice scaling

Usually when you are developing components you'll want to enable 9-slice scaling on a skin movie clip, because it allows the component to scale the skin movie clip without distorting the sides and the corners. So when you create your own skin symbols you will want to make sure the 9-slice scaling option is checked most of the time. If you look through the skin symbols for the User Interface components, you will find that the majority of them use 9-slice scaling. However, since these particular skins are simply filled rectangles, scaling them will not cause any distortion and the 9-slice scaling is not needed.

Assets Layer

Next, I began modifying the assets layer of the MenuBar movie clip. Here are the steps I took to update the assets layer:

• I removed the List and TileList components from the Stage. These two components are no longer needed. In previous iterations of the MenuBar component I was using these components to pull in their assets, but going forward I will pull in all the necessary asset symbols directly.
• I dragged out all of the skins from the MenuBarSkins folder onto the Stage.
• I also placed the focusRectSkin symbol from the Shared folder onto the Stage, since the MenuBar component (along with many others) will share it.

Asset names layer

The asset names layer will be a guide layer. This is the layer that contains the background rectangle that sits behind the skin symbols on the Stage and the labels that explain the skins. The asset names layer should be set as a guide layer because guide layers are not published to the SWF file and none of the assets in this layer are necessary at runtime. This layer is locked, since the user of the component does not need to edit it. In fact, locking the asset names layer is a best practice, because it will prevent the user from selecting the background rectangle or the labels by mistake.

Here are the steps I took to set up the asset names layer:

• First I created a new layer below the assets layer and named it asset names.
• Next, I created a blank keyframe on Frame 2.
• I locked the assets layer for the moment, so that I would not edit it by mistake.
• Then I created the labels and the background rectangle in the same style used in the other User Interface components.
• I right-clicked on the asset names layer in the Timeline and selected Guide in the context menu.
• Then I locked the asset names layer.
• And finally, I unlocked the assets layer.

In the next part of this article, we'll take a look at the ComponentShim to learn more about how it works and how to use it.

Understanding the ComponentShim

The ComponentShim provides the precompiled definitions of the User Interface Component infrastructure so that the ActionScript source does not need to be added to the classpath. In our earlier iterations of the MenuBar component, the precompiled definitions of the User Interface Component infrastructure were included via the TileList and List components. However, since I've removed these components, it was necessary to include the ComponentShim directly now.
Here are the steps I took to add the ComponentShim to the project:

• First, I created a ComponentShim layer below all the other layers.
• Next, I created a blank keyframe on Frame 2 and dragged an instance of the ComponentShim from the Library onto the Stage.
• Then I locked the ComponentShim layer.

Even though the ComponentShim layer does not contain any visible symbols, the layer should not be hidden. You should never hide layers in your component movie clip, because the option to Export hidden layers in the Publish Settings could be unchecked in the component user's FLA file.

The mysterious ComponentShim

It can be a little disorienting to work with the ComponentShim on the Stage. The ComponentShim's height and width values are set to zero and it is completely invisible; even when you select it, no selection handles are drawn. You can select it by unlocking the ComponentShim layer and selecting all. Once it is selected, you can at least see its information displayed in the Property inspector.

The good news is that most Flash developers do not have to interact with the ComponentShim at all. Many will use it without ever realizing it is there. But even a component developer can be confused by it. So what is it? What does it do? Why do you need it? And where does it come from?

What is it and what does it do?

The ComponentShim is a compiled clip that has all of the User Interface Component Infrastructure classes and definitions compiled into it. It does not contain any of the visual assets, such as the skins and the avatars. The ComponentShim contains only the compiled ActionScript 3.0 byte code, which is also known as ABC. The ComponentShim symbol itself is linked to a class called ComponentShim, an auto-generated class used only to shim the rest of the definitions into your Library.
Like the other symbols required by the MenuBar component, the ComponentShim symbol does not have the option Export on first frame checked, and it is located on Frame 2 of the MenuBar movie clip. This ensures that the ComponentShim will be added to a Flash user's Library at the same time the MenuBar component is dragged into a FLA file. At the same time, it also ensures that the ComponentShim symbol is exported into the resulting SWF file. When the ComponentShim symbol is exported into the SWF file, the empty movie clip and the ComponentShim class are automatically exported with it. The empty movie clip and the ComponentShim class are never really needed, but they do not take up many bytes in the SWF file, and exporting them enables the functionality that is necessary for the MenuBar component to work successfully.

When a compiled clip is output to a SWF file, whether it is exported because it is on the Stage or because the option to Export on first frame is checked, all of the ABC definitions within that compiled clip become available to the ActionScript 3.0 compiler. The ABC definitions can also be output to the SWF file, although they will not necessarily be output. An ABC definition will only be exported to the SWF file if it is referenced by some ActionScript definition that is being exported, or if it is linked to an exported symbol as set in the Linkage dialog box. In other words, even though the ComponentShim contains every class definition for every component, only the classes that are required by the components you use will be included in your SWF file.

Here's another way to think about it: the ABC definitions in the ComponentShim are added to your classpath. Just as the definitions in ActionScript files are only compiled in if they are needed, ABC definitions are only linked in if they are needed.
As I discussed in an earlier sidebar, all ActionScript source files in your classpath are always checked for definitions before a precompiled ABC definition from a compiled clip is used. If you are familiar with the Adobe Flex 2 compiler, you can think of exporting the ComponentShim for the User Interface components as very similar to having framework.swc in your build path for the Flex 2 components.

Why do you need the ComponentShim?

Using precompiled ABC definitions makes SWF publishing much faster when the ActionScript source files are not in the classpath. The User Interface component source is included for reference and for component development, and there are situations when you may need it in your classpath, but it is not in the default classpath. As part of your component development, creating compiled clips like the ComponentShim enables you to provide FLA-based components without distributing the source files.

Where did the ComponentShim come from?

The FLA file used to generate the ComponentShim, ComponentShim.fla, is installed with Flash CS3. You can find it in the following locations:

Windows: C:\Program Files\Adobe\Adobe Flash CS3\language\Configuration\Component Source\ActionScript 3.0\User Interface\
Macintosh: /Applications/Adobe Flash CS3/Configuration/Component Source/ActionScript 3.0/User Interface/

If you make any custom edits to the User Interface Component Infrastructure code, you can recreate the ComponentShim and replace it in your FLA files. Here are the steps I took to recreate the ComponentShim:

• First, open ComponentShim.fla.
• Right-click the Library symbol ComponentShim source and select Convert to Compiled Clip from the context menu.
• Rename the resulting compiled clip from ComponentShim source SWF to ComponentShim.
• Next, open the Linkage dialog box for the ComponentShim compiled clip and uncheck the option to Export on first frame.
• Drag ComponentShim into the Component Assets/_private folder in the Library of your FLA file.
• In the Resolve Library Conflict dialog box, select the option to Replace existing items and click OK. If you do not see this dialog box, it means that you did not drag ComponentShim into the _private folder correctly.
• Finally, close ComponentShim.fla without saving changes.

You can also generate the ComponentShim by exporting the ComponentShim source as an SWC file and dragging the file over from the Components panel. However, I do not recommend this approach, because when you use this method of generating the ComponentShim, it shows up as a component under the User Interface folder. It is a best practice to use the steps outlined above.

If you regenerate the ComponentShim, you will need to work carefully, because every time you drag a component from the Components panel, the ComponentShim from that component will write over your custom ComponentShim. If you are concerned, you can replace the ComponentShim in the User Interface.fla file to avoid this problem.

Creating a compiled clip like the ComponentShim is a technique you can use in your own components. The instructions for doing this are detailed in the section titled Creating a Shim Compiled Clip.

Setting the Edit frame

To take a user straight to Frame 2, where all the skins are (instead of Frame 1), I opened the Component Definition dialog box again by right-clicking the MenuBar movie clip in the Library and selecting Component Definition from the context menu. The only change I made was updating the Edit frame text field to 2 (see Figure 2).

Figure 2. In the Component Definition dialog box, change the Edit frame field from 1 to 2

After I made these changes, I opened the MenuBar component again and the Timeline view reflected the recent updates (see Figure 3).
Figure 3. The MenuBar FLA file displays as expected after changing the Edit frame to Frame 2

Library Cleanup

At this point in my development process, I had accumulated a large quantity of symbols in the Library that were not necessary. I thought it was time for some housekeeping, so I went through the Library and deleted them. If you are unsure which symbols are needed for your project, here's a good method for identifying them: create a brand new FLA file, then drag the MenuBar component onto the Stage and note which symbols are imported with it. Any symbol in the Library that is not imported into the new FLA file is not used by the MenuBar component.

Here's the list of the unnecessary symbols I deleted for this project:

• TileList
• List
• Component Assets/_private/Component_avatar
• Component Assets/CellRenderer
• All symbols in the Component Assets/CellRendererSkins folder
• Component Assets/ListSkins/List_skin
• Component Assets/ScrollBar
• All symbols in the Component Assets/ScrollBarSkins folder
• Component Assets/Shared/arrowIcon
• Component Assets/TileListSkins/TileList_skin

After doing this cleanup, I reviewed the list of symbols in my Library (see Figure 4).

Figure 4. After deleting the unneeded symbols, the Library of MenuBar.fla contained a more manageable list of items

Now it was time to update the Library of test.fla. Before dragging the new version of the MenuBar component into the Library of test.fla, I cleaned up that Library by deleting the entire Component Assets folder, the List component and the TileList component. Completing this step assured me that the assets I had just deleted from the Library of MenuBar.fla did not linger in the Library of test.fla.
When I dragged the MenuBar component over from MenuBar.fla, the Component Assets folder reappeared in the Library of test.fla. Since I had previously used the Button component in test.fla and it also keeps assets in the Component Assets folder, I dragged the Button component back into the Library from the Components panel to recreate its asset symbols in the Library of test.fla.

Finally, I had to do some troubleshooting in test.fla to get the code working as desired. After making all of these changes to the Library, running Control > Test Movie resulted in many, many runtime errors!

MenuBarTileList, MenuList and NoScrollBar

The removal of the ScrollBar skins caused some problems. To resolve the issues, I created three classes in the package fl.example.menuBarClasses: MenuBarTileList, MenuList and NoScrollBar. The only changes required in the MenuBar code to use these classes were to create a new MenuBarTileList instance instead of a TileList instance in configUI(), and to create a new MenuList instance instead of a List instance in createMenu(). I discuss these three classes in depth in the sidebar titled Replacing Display Objects in configUI().

Using the StyleManager class

The class fl.managers.StyleManager defines the StyleManager, which manages all styles on all component instances. In order to use styles, one of the first things a component needs to do is call StyleManager.registerInstance(this), which initializes all styles on the instance and also registers the instance with the StyleManager so that the instance is updated when styles change. Styles can be changed globally, per component type or per component instance. It is not necessary to write any code to register with the StyleManager, because the UIComponent constructor does it for you.
The StyleManager's main task is managing the different levels of styles to ensure that each component instance uses the correct styles. For example, there are default styles (predefined by the component's ActionScript code) and there are customized styles, which the component user defines with ActionScript. There are four levels of styles, listed below in order of priority, from highest to lowest:

• Customized instance level styles
• Customized component level styles
• Global styles
• Default component level styles

Developers using components can customize styles at three different levels: the instance level, the component level and the global level. There are symmetric APIs to set, clear and get styles at each level. Note that since StyleManager is a class, the code StyleManager.setStyle() calls a static method on a class. This means that you must include import fl.managers.StyleManager in your code to avoid compile errors.

Here's a list of the methods you can use to set, get or clear styles:

• instance.setStyle(styleName, styleValue)
• instance.clearStyle(styleName)
• instance.getStyle(styleName)
• StyleManager.setComponentStyle(componentClass, styleName, styleValue)
• StyleManager.clearComponentStyle(componentClass, styleName)
• StyleManager.getComponentStyle(componentClass, styleName)
• StyleManager.setStyle(styleName, styleValue)
• StyleManager.clearStyle(styleName)
• StyleManager.getStyle(styleName)

All of the set methods take a string name and a value parameter. Each component understands a predefined list of styles. The style value parameter passed into the set methods is typed as Object so that a value of any type (Boolean, Number, Class, Object, flash.text.TextFormat, and so on) can be passed in. Depending on the style being set, a specific type is required.
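The four priority levels described above can be sketched language-neutrally. The following JavaScript sketch is a simplification of the lookup idea only, not the actual fl.managers.StyleManager implementation; all of the names in it (makeStyleResolver, setInstanceStyle, and so on) are hypothetical:

```javascript
// Hypothetical sketch of four-level style resolution, checked from highest
// priority to lowest: instance -> component -> global -> component default.
// This mirrors the priority order described in the article; it is not the
// actual ActionScript 3.0 StyleManager code.
function makeStyleResolver() {
  const instanceStyles = new Map();   // instance -> {styleName: value}
  const componentStyles = new Map();  // component class -> {styleName: value}
  const globalStyles = {};            // customized and default globals share one level
  const defaultStyles = new Map();    // component class -> immutable defaults

  return {
    setInstanceStyle(inst, name, value) {
      if (!instanceStyles.has(inst)) instanceStyles.set(inst, {});
      instanceStyles.get(inst)[name] = value;
    },
    setComponentStyle(cls, name, value) {
      if (!componentStyles.has(cls)) componentStyles.set(cls, {});
      componentStyles.get(cls)[name] = value;
    },
    setGlobalStyle(name, value) { globalStyles[name] = value; },
    registerDefaults(cls, defaults) { defaultStyles.set(cls, defaults); },
    // Similar in spirit to the protected getStyleValue() call described later.
    getStyleValue(inst, cls, name) {
      const perInstance = instanceStyles.get(inst);
      if (perInstance && name in perInstance) return perInstance[name];
      const perComponent = componentStyles.get(cls);
      if (perComponent && name in perComponent) return perComponent[name];
      if (name in globalStyles) return globalStyles[name];
      const defaults = defaultStyles.get(cls);
      return defaults && name in defaults ? defaults[name] : null;
    }
  };
}
```

With this sketch, a customized instance style wins over a component style, a component style wins over a global style, and the component defaults are only consulted when nothing else is set.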
All of the styles supported by each component, and the types required for each style, are listed in the online ActionScript 3.0 Language and Components Reference.

Generally speaking, calling a clear method undoes a previous call to the corresponding set method, so the style behaves as though it had never been customized for that instance or component. Similarly, the get methods return the value set at that level. To get the active style for an instance, based on the settings for all style levels and the priorities of those levels, call the method getStyleValue().

Understanding the different levels of styles

In the previous section, we looked at the available methods for setting, getting and clearing styles. Now let's take a closer look at the style levels and the APIs that operate on them. I'll describe them in order of priority, beginning with the lowest priority.

Default component styles

Each component has default styles, which are defined by the return value of getStyleDefinition() for the component class. If a component class does not implement the static method getStyleDefinition(), the StyleManager walks up its inheritance chain, starting with its base class, until it finds an implementation. The object returned from this call defines both the list of styles that the component uses and the default values for those styles. When a style from this list is updated, the StyleManager updates every instance of this component, causing an invalidation. The StyleManager keeps track of the settings, and it will not notify a component instance when a style that the instance does not use is updated.

There are no set, get or clear methods that operate on the default component styles. The default component styles are immutable and cannot be changed by the component user's code.
To see how this works, let's try the following example, which illustrates how the get and clear component level methods do not interact with the default component styles. Drag a Button component onto the Stage and put the following code on Frame 1:

import fl.managers.StyleManager;
import fl.controls.Button;

trace(StyleManager.getComponentStyle(Button, "textFormat"));
StyleManager.clearComponentStyle(Button, "textFormat");

Test Movie traces null to the Output panel, and the Button's appearance displays the default values. The getComponentStyle() method call returned null because there isn't a customized component level style, and the clearComponentStyle() method call did not alter the display of the Button component for the same reason.

Global styles

The StyleManager.setStyle() method sets styles globally for all components. For this example, drag two Button components and a CheckBox onto the Stage. Then, put the following code on Frame 1 to see all three instances displayed with italic labels:

import fl.managers.StyleManager;

var tf:TextFormat = new TextFormat();
tf.italic = true;
StyleManager.setStyle("textFormat", tf);

The previous example changed one of the default global styles initialized by UIComponent.getStyleDefinition(), but any style, including a new style used by your custom component, can be set at the global level; when it is updated, every component that uses that style will be updated. For example, the Button, RadioButton and CheckBox components all use the style named upIcon, but have different default values. Drag each of these three components to the Stage and put the following code on Frame 1:

import fl.managers.StyleManager;

StyleManager.setStyle("upIcon", RadioButton_upIcon);

In contrast to component level styles, the StyleManager does not track customized and default global styles separately.
So calling StyleManager.getStyle() will return values set by calls to StyleManager.setStyle(), and it will also return the default values initialized by UIComponent.getStyleDefinition(). Calls to StyleManager.clearStyle() can similarly clear styles set by the user and can also clear out the default values. For example, drag a Button component onto the Stage and add the following code to Frame 1. When you select Control > Test Movie, the SWF file will trace the default value, 2, to the Output panel:

import fl.managers.StyleManager;

trace(StyleManager.getStyle("focusRectPadding"));

The result of clearing the focusRectSkin style is an interesting situation, which occurs because there are no separate customized and default global style levels. All components that use the focusRectSkin style set it to null in their defaultStyles variable to register with the StyleManager for updates to that style. The value is always set to null because the global style value has precedence over the default component level style, so the default component level style would never be used in a normal situation. So, if the user clears the global focusRectSkin, there are no component defaults to fall back on. Component instances will not have a valid focusRectSkin, resulting in runtime errors every time focus tabs from one component to another. To see this effect in action, drag a couple of Button component instances onto the Stage. Then put the following code on Frame 1. When you select Control > Test Movie, make sure that the keyboard shortcuts are disabled under the Control menu:

import fl.managers.StyleManager;

StyleManager.clearStyle("focusRectSkin");

You could avoid this situation by setting the styles at the component level or at the instance level.
For example, the following code would work if you've only dragged Button component instances onto the Stage, but the same code will break if a CheckBox is also placed on the Stage:

import fl.managers.StyleManager;
import fl.controls.Button;

StyleManager.clearStyle("focusRectSkin");
StyleManager.setComponentStyle(Button, "focusRectSkin", "focusRectSkin");

Finally, while the StyleManager does not track default global styles separately, defaultTextFormat and defaultDisabledTextFormat are special styles that are treated as default global styles. They are only used when the effective value of textFormat or disabledTextFormat for a component instance is null. As you can see in the code snippet in the section of this article on TextField Styles, the component code always calls UIComponent.getStyleDefinition() to get these values, so customizing their values in the StyleManager has no effect.

Customized component styles

The StyleManager.setComponentStyle() method sets a custom style for all instances of a particular component. For example, to put italic labels on two Button component instances and leave a plain text label on a CheckBox component, drag two Button components and one CheckBox component to the Stage. Then put the following code on Frame 1:

import fl.managers.StyleManager;
import fl.controls.Button;

var tf:TextFormat = new TextFormat();
tf.italic = true;
StyleManager.setComponentStyle(Button, "textFormat", tf);

As I mentioned in the default component styles section of this article, the get and clear methods for component styles only operate on the customized styles; they do not get or clear default component styles. When component styles are customized for a class, they are customized for the component implemented by that class, but not for any components that inherit from that class.
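The exact-class behavior of customized component styles can be sketched language-neutrally. The following JavaScript sketch (illustrative names only, not the real StyleManager code) keys component styles by the exact class, so a subclass does not pick up styles that were customized for its base class:

```javascript
// Hypothetical sketch: component-level styles are keyed by the exact class,
// so customizing styles for a base class does not affect subclasses.
class Button {}
class MyButton extends Button {} // hypothetical subclass, as in the article's example

const componentStyles = new Map(); // exact class -> {styleName: value}

function setComponentStyle(cls, name, value) {
  if (!componentStyles.has(cls)) componentStyles.set(cls, {});
  componentStyles.get(cls)[name] = value;
}

function getComponentStyle(instance, name) {
  // Look up by the instance's exact constructor only; no walk up the chain.
  const styles = componentStyles.get(instance.constructor);
  return styles && name in styles ? styles[name] : null;
}

setComponentStyle(Button, "textFormat", "italic");
```

In this sketch, getComponentStyle(new Button(), "textFormat") finds the customized value, while getComponentStyle(new MyButton(), "textFormat") finds nothing until MyButton itself is customized.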
For example, if you created a custom component with a class named MyButton that extended Button, setting custom component styles for Button would not affect instances of your component. A user would need to specify MyButton when customizing component styles in order to affect instances of your component.

Customized instance styles

The setStyle() method called on an instance sets a custom style for a single instance. For example, to make the label of one Button component display with italic text and leave the other label displaying normal text, drag two Button component instances onto the Stage. Then, give one of the buttons the instance name myButton in the Property inspector and put the following code on Frame 1:

var tf:TextFormat = new TextFormat();
tf.italic = true;
myButton.setStyle("textFormat", tf);

Calling getStyle() on an instance before calling setStyle() for the same style will always return null. Use the protected method getStyleValue() to get the proper active style for an instance, and use getStyle() to learn whether the style has been customized at the instance level. Similarly, calling clearStyle() on an instance before calling setStyle() for the same style does nothing; clearing a custom instance level style is only useful once it has been set.

An example of setting multiple style levels

The priority of the different levels of styles is explained in the StyleManager section. To see how these three ways of setting styles interact with one another, drag two Button components onto the Stage. Then, give one of them the instance name myButton in the Property inspector.
Next, drag a RadioButton and a CheckBox component onto the Stage and put the following code on Frame 1:

import fl.managers.StyleManager;
import fl.controls.CheckBox;

var globalTF:TextFormat = new TextFormat();
globalTF.italic = true;
StyleManager.setStyle("textFormat", globalTF);

var myButtonTF:TextFormat = new TextFormat();
myButtonTF.color = 0xFF0000;
myButton.setStyle("textFormat", myButtonTF);

var checkBoxTF:TextFormat = new TextFormat();
checkBoxTF.bold = true;
StyleManager.setComponentStyle(CheckBox, "textFormat", checkBoxTF);

When you select Control > Test Movie, you will see one Button component with a red label, another Button component with an italic label, a RadioButton component with an italic label and a CheckBox component with a bold label.

Implementing getStyleDefinition() and UIComponent methods

The first step I took to ensure that styles were supported in MenuBar was to implement the static method getStyleDefinition(). The first time the StyleManager encounters a type of component, it calls getStyleDefinition() on that component's class to get the default styles for the component. The static method getStyleDefinition() returns an object with style names and default values. Every style that a component uses must appear on this object, because the StyleManager assumes that if a style is not in this list, then instances of this component do not need to be updated when that style changes. MenuBar implements getStyleDefinition() in the standard way, returning a private static defaultStyles object.
private static var defaultStyles:Object = {
    // styles set on the TileList
    menuBarCellRenderer: fl.example.menuBarClasses.MenuBarCellRenderer,
    menuBarSkin: "MenuBar_skin",
    menuBarContentPadding: 1,
    // styles set on menu Lists
    menuCellRenderer: fl.example.menuBarClasses.MenuCellRenderer,
    menuSkin: "Menu_skin",
    menuContentPadding: 1,
    // both List and TileList support disabledAlpha, but we will never show
    // a drop-down menu when the MenuBar is disabled, so we don't put a prefix
    // on it and only pass it through to the TileList
    disabledAlpha: 0.5,
    // These styles are in the list of default global styles defined by
    // UIComponent.getStyleDefinition(). We list them to tell the StyleManager
    // that MenuBar uses these styles. We leave the values as null because
    // the global styles defined by UIComponent would override the default
    // component style anyway
    focusRectSkin: null,
    focusRectPadding: null
};

public static function getStyleDefinition():Object {
    return defaultStyles;
}

I got this specific list of styles by simply mimicking the styles supported by List and TileList, minus the ScrollBar skins. I made the list by looking at the defaultStyles declarations for List and TileList, and then I also reviewed their superclasses, SelectableList and BaseScrollPane; this was actually easier than looking at the documentation, because there were so many ScrollBar skins to weed out. I made two sets of styles: one for the menu bar and one for the drop-down menus.

UIComponent.mergeStyles()

I created two new classes, both extending CellRenderer to change its default styles. The cell renderer styles for MenuBar default to these classes: MenuBarCellRenderer and MenuCellRenderer. I put these classes in the fl.example.menuBarClasses package, following the same pattern the User Interface components use.
Generally speaking, auxiliary classes for a specific component should be put into a package following this naming convention. These two classes are very similar, so I'll just look at one of them:

package fl.example.menuBarClasses {
    import fl.core.UIComponent;
    import fl.controls.listClasses.CellRenderer;

    public class MenuCellRenderer extends CellRenderer {
        public function MenuCellRenderer() {
            super();
        }

        private static var defaultStyles:Object = {
            upSkin: "Menu_upSkin",
            downSkin: "Menu_downSkin",
            overSkin: "Menu_overSkin",
            disabledSkin: "Menu_disabledSkin"
        };

        public static function getStyleDefinition():Object {
            return UIComponent.mergeStyles(defaultStyles, CellRenderer.getStyleDefinition());
        }
    }
}

The getStyleDefinition() implementation in MenuCellRenderer uses the static mergeStyles() method defined in UIComponent. The mergeStyles() method takes two or more objects and puts all name/value pairs found in any of the objects into a single object, which is returned. If multiple objects define the same name, the value found on the first object is used unless it is null. So it was critical that I passed defaultStyles as the first parameter into mergeStyles(), because I was overriding styles that are also defined by CellRenderer.

While it usually makes sense to merge in the styles of the base class, in some cases you may not, and in some cases you may merge in other styles. For example, CheckBox and RadioButton simply return defaultStyles without merging in LabelButton.getStyleDefinition(). Another example is BaseScrollPane, which merges in the styles from ScrollBar, which is not a superclass but is instead a subcomponent of BaseScrollPane.

UIComponent.getStyleDefinition() and Global Styles

The StyleManager calls UIComponent.getStyleDefinition() to initialize the global styles level.
Here's a list of the global styles that are initialized by UIComponent:

• focusRectSkin
• focusRectPadding
• textFormat
• disabledTextFormat
• defaultTextFormat
• defaultDisabledTextFormat

You should never merge the return value of UIComponent.getStyleDefinition() into your component's style definition. Instead, you should add any global style your component uses to your defaultStyles variable with a value of null. You should also never add the styles defaultTextFormat or defaultDisabledTextFormat to your component's list of styles. I added two of the global styles initialized by UIComponent to the defaultStyles of MenuBar: focusRectSkin and focusRectPadding.

Calling getStyleValue() via the draw() method

When your draw() method needs to get a style's value, it should call the protected method getStyleValue(). It returns the proper style for the component instance, whether per instance, per component or global, as determined by the StyleManager. There is also a public getStyle() method, so be sure not to confuse the two; getStyle() returns a value only if the instance's style has been customized, and otherwise it returns null.

The draw() method for MenuBar is pretty simple, since it just gets the styles and passes them on to the proper subcomponents. There were also some other changes to the draw() method that altered the layout code to account for the content padding style, but I won't cover those changes in detail. Very similar code was added for forwarding styles to the menu bar TileList instance and the drop-down menu List instances.
The following code was added near the beginning of the draw() method to set up the TileList instance:

// set styles
myMenuBar.setStyle("skin", getStyleValue("menuBarSkin"));
myMenuBar.setStyle("cellRenderer", getStyleValue("menuBarCellRenderer"));
var menuBarPadding:Number = getStyleValue("menuBarContentPadding") as Number;
myMenuBar.setStyle("contentPadding", menuBarPadding);
myMenuBar.setStyle("disabledAlpha", getStyleValue("disabledAlpha"));

getDisplayObjectInstance()

Most components do not exclusively forward styles to subcomponents; instead, they create skin instances in their draw() methods, and to do this they use getDisplayObjectInstance(). A great example of this is the drawBackground() method from BaseButton, which is called by the draw() method:

protected function drawBackground():void {
    var styleName:String = (enabled) ? mouseState : "disabled";
    if (selected) {
        styleName = "selected" + styleName.substr(0, 1).toUpperCase() + styleName.substr(1);
    }
    styleName += "Skin";
    var bg:DisplayObject = background;
    background = getDisplayObjectInstance(getStyleValue(styleName));
    addChildAt(background, 0);
    if (bg != null && bg != background) {
        removeChild(bg);
    }
}

First, the code above uses some logic with the current mouse state, enabled state and selected state to determine the style name for the correct skin. It then caches the current background instance in the bg variable and calls getDisplayObjectInstance(getStyleValue(styleName)) to get the correct instance. Your code does not have to worry about how the skin instance is created; this method just does the right thing. It adds the new background instance to the display list at the back (of course). Finally, before it removes the cached bg instance from the display list, it checks to make sure it is not the same exact instance that was just added.
It is not common, but there are situations where a call to getDisplayObjectInstance() could return the same instance you were already using for a skin, so as a best practice you should always handle this case.

Skin styles

Skin styles are all declared to take the type Class, but the reality is a bit more complicated than that, which is why you should always use getDisplayObjectInstance() to get the proper DisplayObject instance for a skin. A valid value for a skin style is not only a Class; it can be any of the following three types:

• Class
• String
• flash.display.DisplayObject

A Class parameter specifies the class to be instantiated to create the skin. The class's constructor must accept zero arguments. For example, if you had a symbol in your Library called MyPurpleFocusRectSkin and it was exported for ActionScript as MyPurpleFocusRectSkin, then you could call StyleManager.setStyle("focusRectSkin", MyPurpleFocusRectSkin) to set the focusRectSkin style for all components to use your symbol. (You can try this by duplicating the Component Assets/Shared/focusRectSkin symbol and using that code. Remember that if you are in Test Movie, you must disable keyboard shortcuts under the Control menu in order to tab among components and see the focus rect behavior.)

A String parameter also specifies the class for the skin symbol, but as a string instead of a direct reference to the class. All default skin styles in the User Interface components use strings, and all your default skin styles should do the same. It is important to ensure that your component's default skin styles use strings, because otherwise the user of your component might delete some of the default skin symbols from the Library, in which case they would encounter a runtime or a compile error.
(The type of error depends on whether the class for the removed skin symbol was automatically generated or not and whether the component source was compiled from source or provided precompiled, as it is with the ComponentShim.)

A DisplayObject parameter specifies a specific instance to be used as a skin. The DisplayObject instance passed into setStyle() could be a movie clip that you placed on the Stage and assigned an instance name in the Property inspector, or it could be an instance created dynamically with code like new MyPurpleFocusRectSkin(), or it could even be an instance created with the Drawing API. The following code demonstrates how to create a gradient ellipse with the Drawing API and use it as a Button component instance's upSkin:

var skinSprite:Sprite = new Sprite();
var matrix:Matrix = new Matrix();
matrix.createGradientBox(100, 22);
skinSprite.graphics.beginGradientFill(GradientType.LINEAR,
    [0xFF0000, 0x00FF00, 0x0000FF], [1, 1, 1], [0, 128, 255], matrix);
skinSprite.graphics.drawEllipse(0, 0, 100, 22);
skinSprite.graphics.endFill();
theButton.setStyle("upSkin", skinSprite);

There is a significant restriction on DisplayObject style values for skins, which is that they can only be applied to a component instance with the syntax instance.setStyle(styleName, styleValue) and they can never be used as global or component styles. This restriction occurs because it is not possible for multiple instances to share a single skin instance. For example, if you try to set the global style for the Button upSkin with an instance on the Stage using the code StyleManager.setStyle("upSkin", mySkinInstance), you’ll discover that if you have two Button components on the Stage, only one button will draw correctly and the other will be missing its upSkin. One notable exception to this restriction is the focusRectSkin; since only one component has the focus at any given time, a single instance can be shared by all components on the Stage.
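To make the three accepted value types concrete, here is a rough JavaScript sketch of how a skin value might be resolved into a display object instance. The real getDisplayObjectInstance() is ActionScript and looks classes up by name via getDefinitionByName(); the registry object here is a hypothetical stand-in for that lookup:

```javascript
// Hypothetical stand-in for flash.utils.getDefinitionByName().
const classRegistry = {};

function getDisplayObjectInstanceSketch(skin) {
  if (typeof skin === "function") {
    return new skin();                 // a Class: instantiate it (zero-arg constructor)
  }
  if (typeof skin === "string") {
    return new classRegistry[skin]();  // a String: look the class up by name, then instantiate
  }
  return skin;                         // already an instance: use it directly
}

class FocusRectSkin {}
classRegistry["FocusRectSkin"] = FocusRectSkin;

const a = getDisplayObjectInstanceSketch(FocusRectSkin);   // fresh instance from a class
const b = getDisplayObjectInstanceSketch("FocusRectSkin"); // fresh instance from a string
const shared = new FocusRectSkin();
const c = getDisplayObjectInstanceSketch(shared);          // instance passed through unchanged
```

Note how the instance case returns the very same object: that is exactly why a DisplayObject value cannot be shared as a global or component style across multiple components.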
Examining renderer styles

The components based on SelectableList—DataGrid, List and TileList—support renderer styles. Renderer styles can only be set and cleared at the instance level with the methods setRendererStyle() and clearRendererStyle(). Renderer styles allow the component user to set styles on all cell renderers for a component instance. For example, to make the labels in each cell of a List component display with italic text, you can drag a List component onto the Stage, give it the instance name myList in the Property inspector and put the following code on Frame 1:

import fl.data.DataProvider;

var dp:DataProvider = new DataProvider();
dp.addItem({label: "one"});
dp.addItem({label: "two"});
dp.addItem({label: "three"});
dp.addItem({label: "four"});
myList.dataProvider = dp;

var tf:TextFormat = new TextFormat();
tf.italic = true;
myList.setRendererStyle("textFormat", tf);

It is important to note that the cellRenderer style, which determines which class does the cell rendering, is not set with the setRendererStyle() method, but with the ordinary methods of setting styles at the instance, component or global level. The cellRenderer style should always be set to a class which implements fl.controls.listClasses.ICellRenderer. The value of the cellRenderer style determines what styles are available via setRendererStyle().

MenuBar renderer styles

To support customizations of the cell renderers for the menu bar and for the drop-down menus, I implemented the following four public methods:

• setMenuBarCellRenderer()
• clearMenuBarCellRenderer()
• setMenuCellRenderer()
• clearMenuCellRenderer()

The implementation of these methods was largely copied from SelectableList. A case-insensitive search through SelectableList.as on "rendererstyle" found everything I needed.
To implement most of the changes I could have just copied the code directly, but it was necessary to use search and replace to change rendererStyle to menuBarRendererStyle and also to menuRendererStyle. The biggest change was that rather than actually calling setStyle() on the individual ICellRenderer instances, the MenuBar version only has to call setRendererStyle() on the TileList instance and the List instances. The code example below shows how this works. I’ve only included the changes for the menu renderer styles, since the changes are so similar:

// support for setMenuRendererStyles(), etc
protected var menuRendererStyles:Object;
protected var updatedMenuRendererStyles:Object;

public function MenuBar() {
    ...
    // init renderer styles info
    menuRendererStyles = new Object();
    updatedMenuRendererStyles = new Object();
}
...
override protected function draw():void {
    ...
    // if we have menus, then we set up the rowHeights, heights, widths and locations of everything
    if (myMenus.length > 0) {
        // distribute the menus evenly across the menu bar
        myMenuBar.columnWidth = ((myMenuBar.width - (menuBarPadding * 2) - 1) / myMenus.length);
        // get menu styles
        var menuSkin:Object = getStyleValue("menuSkin");
        var menuCellRenderer:Object = getStyleValue("menuCellRenderer");
        var menuPadding:Number = getStyleValue("menuContentPadding") as Number;
        // update all the renderer styles before we call drawNow() on each
        updateMenuRendererStyles();
        ...
    }
    ...
}
...
public function setMenuRendererStyle(name:String, style:Object):void {
    if (menuRendererStyles[name] == style) { return; }
    updatedMenuRendererStyles[name] = style;
    menuRendererStyles[name] = style;
    invalidate(InvalidationType.RENDERER_STYLES);
}

public function getMenuRendererStyle(name:String):Object {
    return menuRendererStyles[name];
}

public function clearMenuRendererStyle(name:String):void {
    delete menuRendererStyles[name];
    updatedMenuRendererStyles[name] = null; // Do not delete, so it can clear the style from current renderers.
    invalidate(InvalidationType.RENDERER_STYLES);
}

protected function updateMenuRendererStyles():void {
    for (var i:int = 0; i < myMenus.length; i++) {
        var theMenu:List = myMenus[i] as List;
        for (var n:String in updatedMenuRendererStyles) {
            theMenu.setRendererStyle(n, updatedMenuRendererStyles[n]);
        }
    }
    updatedMenuRendererStyles = {};
}

While testing the movie and clicking on the various buttons, I discovered a problem with my menu cell renderers. The changes to the display updated after I applied them; however, if I applied any other change (like a size change, data change or style change), the menu cell renderer styles stopped working. After a little debugging, I realized the issue was that when all of the List instances are destroyed and recreated, all of the renderer styles must be forced onto the new List instances.
To resolve this issue, I made the following change to the code:

protected function clearMenus():void {
    closeMenuBar();
    myMenuBar.dataProvider = new DataProvider();
    while (myMenus.length > 0) {
        var theMenu:List = (myMenus.shift() as List);
        removeChild(theMenu);
    }
    // This line forces all renderer styles to be refreshed on
    // the drop-down menu Lists, which will be necessary
    // since brand new ones will be created
    updatedMenuRendererStyles = menuRendererStyles;
}

I am pointing out this particular debugging experience to illustrate how important it is to work with these simple test buttons in test.fla. It is essential that you keep testing the movie and keep trying to break it in order to discover errors that are easy to make but hard to catch. Make sure to create some test cases that alter the styles of your components after initialization!

Working with TextField styles

If your component uses TextField instances, it should follow the best practices for TextField styles to format these instances. By supporting consistent TextFormat styles, your component will be able to share text-formatting changes with all of the components. Components that use TextField instances support the following styles:

• textFormat
• disabledTextFormat (optional)
• embedFonts

The textFormat and disabledTextFormat styles accept a flash.text.TextFormat instance and are set on the TextField instances with the setTextFormat() method. The disabledTextFormat style is optional because some components do not display text at all when disabled (such as the ColorPicker component), and some do not display text differently when disabled (such as the Label component). The embedFonts style accepts a Boolean value and sets the embedFonts property on the TextField instances.
UIComponent also defines the styles defaultTextFormat and defaultDisabledTextFormat, which your component should fall back on if textFormat or disabledTextFormat is null. When applying these properties, you should use code very similar to the following code from fl.controls.TextInput:

protected function drawTextFormat():void {
    // Apply a default textformat
    var uiStyles:Object = UIComponent.getStyleDefinition();
    var defaultTF:TextFormat = enabled
        ? uiStyles.defaultTextFormat as TextFormat
        : uiStyles.defaultDisabledTextFormat as TextFormat;
    textField.setTextFormat(defaultTF);
    var tf:TextFormat = getStyleValue(enabled ? "textFormat" : "disabledTextFormat") as TextFormat;
    if (tf != null) {
        textField.setTextFormat(tf);
    } else {
        tf = defaultTF;
    }
    textField.defaultTextFormat = tf;
    setEmbedFont();
    if (_html) {
        textField.htmlText = _savedHTML;
    }
}

protected function setEmbedFont() {
    var embed:Object = getStyleValue("embedFonts");
    if (embed != null) {
        textField.embedFonts = embed;
    }
}

Many components use code very similar to this, and most of it is boilerplate that you can copy and paste. Your component must have a textField property to which to apply the format. This code also uses two private properties, _html and _savedHTML. If you allow the htmlText of the textField to be set (some components, like the Button component, do not), then you will need code similar to this, because calling setTextFormat() replaces all previous formatting on the TextField instance, including HTML formatting, and setting the htmlText property again reapplies the HTML formatting.

Where to go from here

In the next part of this article series we’ll take a look at the invalidation model.
We’ll discuss how to maximize the performance of the MenuBar component by consolidating the triggers it receives (mouse events, frame scripts, a new dataProvider being set, property updates) so that the resulting changes to the component are applied at once. I’ll also cover the different types of invalidation and how they work with the draw() method to limit the number of updates made to the component.

If this article has sparked your interest and you’d like to learn more about customizing the look and feel of ActionScript 3.0 components, be sure to check out these useful articles to get more details:

• Skinning the ActionScript 3.0 FLVPlayback component
• Designing Flex 2 skins with Flash, Photoshop, Fireworks or Illustrator

About the author

Jeff Kamerer is a computer scientist at Adobe Systems who has worked on the Flash authoring team since
https://www.techylib.com/el/view/anthropologistbarren/creating_actionscript_3.0_components_in_flash_cs3
In the old days, when I first learned Windows programming using the Microsoft Foundation Classes, the first thing I learned was creating menu items, as I thought that was what Windows programming was all about. Now I realize that there is actually not much to creating menu items. You create them once and you can use them for the whole lifetime of your application.

In the example below, I have used an HBox to position the menu bar at the top of the window. The menu bar will stretch across the window because I have set it with Priority.ALWAYS.

package javafxapplication30;

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Menu;
import javafx.scene.control.MenuBar;
import javafx.scene.layout.HBox;
import javafx.scene.layout.Priority;
import javafx.stage.Stage;

public class JavaFXApplication30 extends Application {

    @Override
    public void start(Stage stage) {
        stage.setTitle("JavaFX Program");
        HBox pane = new HBox();
        Scene scene = new Scene(pane, 400, 350);
        stage.setScene(scene);

        final Menu menu1 = new Menu("File");
        final Menu menu2 = new Menu("Options");
        final Menu menu3 = new Menu("Help");

        MenuBar menuBar = new MenuBar();
        menuBar.getMenus().addAll(menu1, menu2, menu3);
        HBox.setHgrow(menuBar, Priority.ALWAYS); // So that the menuBar will take the whole HBox
        pane.getChildren().add(menuBar);

        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
https://codecrawl.com/tag/menu/
NAME

pmap_remove, pmap_remove_all, pmap_remove_pages - remove pages from a physical map

SYNOPSIS

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

void pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva);
void pmap_remove_all(vm_page_t m);
void pmap_remove_pages(pmap_t pmap);

DESCRIPTION

The pmap_remove() function removes the range of addresses between sva and eva from the physical map pmap. If eva is less than sva, then the result is undefined. It is assumed that both sva and eva are page-aligned addresses.

The pmap_remove_all() function removes the physical page m from all physical maps in which it resides, and reflects back the modify bits to the appropriate pager.

The pmap_remove_pages() function removes all user pages from the physical map pmap. This function is called when a process exits to run down its address space more quickly than would be the case for calling pmap_remove().

SEE ALSO

pmap(9)

AUTHORS

This manual page was written by Bruce M Simpson 〈bms@spc.org〉.
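Since pmap_remove() requires sva and eva to be page-aligned, callers typically align a byte range outward to page boundaries first. A user-space sketch of that arithmetic follows; the PAGE_SIZE value and the _sketch helper names are assumptions for illustration, not taken from the kernel headers (the kernel's own trunc_page/round_page macros play this role):

```c
#include <stdint.h>

#define PAGE_SIZE_SKETCH 4096UL
#define PAGE_MASK_SKETCH (PAGE_SIZE_SKETCH - 1)

/* Round an address down to the start of its page
   (analogous to the kernel's trunc_page macro). */
static uintptr_t trunc_page_sketch(uintptr_t addr) {
    return addr & ~PAGE_MASK_SKETCH;
}

/* Round an address up to the next page boundary
   (analogous to round_page). */
static uintptr_t round_page_sketch(uintptr_t addr) {
    return (addr + PAGE_MASK_SKETCH) & ~PAGE_MASK_SKETCH;
}

/* A byte range [0x1234, 0x5678) becomes the aligned range [0x1000, 0x6000),
   which satisfies pmap_remove()'s requirement that sva and eva be
   page-aligned and that sva not exceed eva. */
```

Aligning sva down and eva up guarantees the whole original byte range is covered by the removed pages.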
http://manpages.ubuntu.com/manpages/jaunty/man9/pmap_remove.9freebsd.html
Ok, so I'm trying to write a program that computes factorials recursively. Here is what I've got (when I asked my teacher for help, he said my factorial.cpp has one key flaw but I can't find it).

Factorial.cpp

Code:
int factorial(int N)
{
    //cout << "into factorial with " << N << endl;
    if ( N < 1 )
        return 1; // base case
    //cout << " not a base case with N = " << N << endl;
    int answer = factorial(N-1);
    //cout << " returning " << answer << " as answer at N = " << N << endl;
    return answer;
}

driveFactorial.cpp

Code:
#include <iostream>
#include <fstream>
using namespace std;
#include "factorial.cpp"

int main()
{
    int FactForN = 1;
    cout << "\n Factorial of what number? (negative to end) ";
    while ( cin >> FactForN && FactForN > 0)
    {
        int answer = factorial(FactForN);
        cout << "\n The factorial of " << FactForN << " is " << answer << endl;
        cout << "\n Factorial of what number? (negative to end) ";
    }
}

Any help would be appreciated. Thanks!
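For readers landing on this thread: the key flaw is in factorial.cpp. The line `int answer = factorial(N-1);` never multiplies the recursive result by N, so the function returns 1 for every input. A corrected version (my fix, not from the original poster; renamed to factorial_fixed to distinguish it from the buggy original):

```cpp
// Corrected recursive factorial: multiply N by the result of the
// recursive call. The original stored factorial(N-1) unmultiplied,
// so every call chain collapsed to the base case's 1.
int factorial_fixed(int N)
{
    if (N < 1)
        return 1;                       // base case
    return N * factorial_fixed(N - 1);  // the missing "N *" was the key flaw
}
```

With this one-line change the driver program prints 120 for an input of 5 instead of 1. (Note that int overflows quickly: 13! already exceeds a 32-bit int.)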
http://cboard.cprogramming.com/cplusplus-programming/102954-factorial-computation-program.html
HPL_indxg2l man page

HPL_indxg2l — Map a global index into a local one.

Synopsis

#include "hpl.h"

int HPL_indxg2l( const int IG, const int INB, const int NB, const int SRCPROC, const int NPROCS );

Description

HPL_indxg2l computes the local index of a matrix entry pointed to by the global index IG. The returned local index is the same in all processes.

Arguments

- IG (input) const int
  On entry, IG specifies the global index of the matrix entry. IG must be at least zero.
- INB (input) const int
  On entry, INB specifies the size of the first block of the global matrix. INB must be at least one.
- NB (input) const int
  On entry, NB specifies the blocking factor used to partition and distribute the matrix. NB must be larger than one.
- SRCPROC (input) const int
  On entry, if SRCPROC = -1, the data is not distributed but replicated, in which case this routine returns IG in all processes. Otherwise, the value of SRCPROC is ignored.
- NPROCS (input) const int
  On entry, NPROCS specifies the total number of process rows or columns over which the matrix is distributed. NPROCS must be at least one.

See Also

HPL_indxg2lp (3), HPL_indxg2p (3), HPL_indxl2g (3), HPL_numroc (3), HPL_numrocI (3).
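As an illustration of the mapping this page describes, here is a small C re-implementation of the block-cyclic global-to-local calculation: the replicated case (SRCPROC = -1) returns IG, indices inside the first block of size INB stay as-is, and later blocks of size NB are dealt round-robin over NPROCS processes. This is a sketch written from the description above, not HPL's actual source, which may differ in detail:

```c
/* Illustrative re-implementation of the mapping HPL_indxg2l describes.
 * Blocks: block 0 has INB entries; every later block has NB entries,
 * dealt round-robin over NPROCS processes starting at the source. */
static int indxg2l_sketch(int ig, int inb, int nb, int srcproc, int nprocs)
{
    if (srcproc == -1 || nprocs == 1)
        return ig;            /* replicated (or trivially all-local) data */
    if (ig < inb)
        return ig;            /* inside the first block */

    int off = ig - inb;       /* offset past the first block */
    int b   = 1 + off / nb;   /* global block number (first block is 0) */
    int r   = b % nprocs;     /* position of block b within its cycle */

    /* Earlier NB-sized blocks held by the owner of block b, plus the
     * INB-sized first block if that owner also holds it (r == 0). */
    int nb_blocks = (b - r) / nprocs - (r == 0 ? 1 : 0);
    return nb_blocks * nb + (r == 0 ? inb : 0) + off % nb;
}
```

Hand-checking with INB = 3, NB = 2, NPROCS = 2: global indices 0..2 land at local 0..2 on the source; index 3 is local 0 on the next process; index 5 is local 3 back on the source (after its first block of 3). The result does not depend on which process is the source, matching the statement that the returned index is the same in all processes.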
https://www.mankier.com/3/HPL_indxg2l
I have about 12 GB of image tiles made up of about 2 million files. I'd like to zip these up to make transferring them to a server simpler. I just plan on storing the files in the zip files for transferring, no compression. Helm is present on the web server and can handle unzipping files. I'd like to point a program at all these files in one go and get it to zip them up into files of approx 1 GB each, but each zip file needs to be independent of the others. I have 7-zip installed, which supports splitting across volumes, but these volumes are dependent upon one another to be unzipped. Anyone have any suggestions? Thanks in advance!

The freeware on Windows called "Spinzip" should do the work for your purpose! ;) It is based on IZARCC (automatically included in Spinzip). You have to check, but the full original path may be kept in the zipped files! See ya

I'm not aware of a program that can do that, since if you are making one zip in multi-volumes they will all be related. Your best bet may be to make 12 folders and put a GB in each one, then zip the folders individually.

OK, here is a way out of it, but not all that good. You can try it if you really need to. Assumptions: you need to divide 12 GB of data onto 3 4GB DVDs. Solution: once you have your data divided, write the pieces to DVDs or whatever you want to write on. If you don't have 3 pen drives, you can write the first DVD there itself and then delete the whole of the data on the pen drive before you resume the copy process.

In the end I created a quick Python script to split the files into subdirectories for me before zipping each individually. In case it's useful to anyone else, here's my script:

import os
import shutil

def SplitFilesIntoGroups(dirsrc, dirdest, bytesperdir):
    dirno = 1
    isdircreated = False
    bytesprocessed = 0
    for file in os.listdir(dirsrc):
        filebytes = os.path.getsize(dirsrc+'\\'+file)
        #start new dir?
        if bytesprocessed+filebytes > bytesperdir:
            dirno += 1
            bytesprocessed = 0
            isdircreated = False
        #create dir?
        if isdircreated == False:
            os.makedirs(dirdest+'\\'+str(dirno))
            isdircreated = True
        #copy file
        shutil.copy2(dirsrc+'\\'+file, dirdest+'\\'+str(dirno)+'\\'+file)
        bytesprocessed += filebytes

def Main():
    dirsrc = 'C:\\Files'
    dirdest = 'C:\\Grouped Files'
    #1,024,000,000 = approx 1gb
    #512,000,000 = approx 500mb
    SplitFilesIntoGroups(dirsrc, dirdest, 512000000)

if __name__ == "__main__":
    Main()

We now use a free program called DirectorySlicer. It makes "copies" (uses "hardlinks" if the destination is the same drive, so it doesn't use up more drive space) of the files into folders of a specified size. This helps us create folders of files that will fit a 700MB CD. NOTED DOWNSIDE: the files aren't necessarily in the same order, meaning sequenced filenames (like photo images) might be spread across "chunks" to fit better. You can then create ZIP files of each folder.

Take a look at it; it is something we've used in the past to split a batch of files into smaller chunks to fit onto 4.7GB DVDs.

SpinZip is the right tool for no compression. I wanted to use compression, so the result was unsatisfactory. Zipsplit does not work for files above 2GB, so I ended up writing my own quick and dirty Perl script, which does its work. It adds files to the archive as long as the file plus the archive stays below the maximum specified size:

# Use strict Variable declaration
use strict;
use warnings;
use File::Find;

# use constant MAXSIZE => 4700372992; # DVD File size
use constant MAXSIZE => 1566790997;   # File size for DVD to keep below 2GB limit
# use constant MAXSIZE => 100000000;  # Test
use constant ROOTDIR => 'x:/dir_to_be_zipped'; # to be zipped directory

my $zipfilename = "backup"; # Zip file name
my $zipfileext = "zip";     # extension
my $counter = 0;
my $zipsize = undef;
my $flushed = 1;
my $arr = [];

find({wanted => \&wanted, no_chdir => 1}, ROOTDIR);
flush(@{$arr});

# Callback function of FIND
sub wanted {
    my $filesize = (-s $File::Find::name);
    LABEL: {
        if ($flushed) {
            $zipsize = (-s "$zipfilename$counter.$zipfileext");
            $zipsize = 0 unless defined $zipsize;
            printf("Filesize Zip-File %s: %d\n", "$zipfilename$counter.$zipfileext", $zipsize);
            $flushed = 0;
            if (($zipsize + $filesize) >= MAXSIZE) {
                $counter++;
                $flushed = 1;
                printf("Use next Zip File %d, Filesize old File: %d\n", $counter, ($zipsize + $filesize));
                goto LABEL;
            }
        }
    }
    if ($zipsize + $filesize < MAXSIZE) {
        printf("Adding %s (%d) to Buffer %d (%d)\n", $File::Find::name, $filesize, $counter, $zipsize);
        push @{$arr}, $File::Find::name;
        $zipsize += $filesize;
    } else {
        printf("Flushing File Buffer\n");
        flush(@{$arr});
        $flushed = 1;
        $arr = [];
        goto LABEL;
    }
}

# Flush File array to zip file
sub flush {
    # open handle to write to STDIN of zip call
    open(my $fh, "|zip -9 $zipfilename$counter.$zipfileext -@")
        or die "cannot open < $zipfilename$counter.$zipfileext: $!";
    printf("Adding %d files\n", scalar(@_));
    print $fh map {$_, "\n"} @_;
    close $fh;
}
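To finish the original task (one store-only zip per group), each group directory produced by the Python grouping script earlier in the thread can be archived with the standard-library zipfile module. A sketch; the path in the usage comment is illustrative:

```python
import os
import zipfile

def zip_group_dirs(dirdest):
    """Create one store-only (uncompressed) zip per group subdirectory,
    so each archive is independent of the others."""
    for group in sorted(os.listdir(dirdest)):
        group_path = os.path.join(dirdest, group)
        if not os.path.isdir(group_path):
            continue
        zip_path = group_path + '.zip'
        # ZIP_STORED = no compression, as the question asks for
        with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_STORED) as zf:
            for name in os.listdir(group_path):
                zf.write(os.path.join(group_path, name), arcname=name)

# Example: zip_group_dirs('C:\\Grouped Files')
```

Because each archive is a single, self-contained zip, any one of them can be unzipped on the server without the others being present.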
http://superuser.com/questions/168207/how-to-split-a-zip-file-across-independent-volumes/171902
Biweekly update 10th June — 25th June

Greetings to all, we are pleased to present a new company under review — Bluzelle. To cut a long story short, Bluzelle is a decentralized service provider that aims to allow users to create on-demand, scalable databases for blockchain applications. Its ICO ended on 20th January 2018. The company is not a large one and is not developing rapidly; however, a certain degree of progress has been observed lately. The main point is the new whitepaper. It wasn't as triumphant as we had hoped, but it might be helpful in dealing with some of the issues involved. For instance, it can help answer questions like 'What problems do games that are chasing a global audience face?', 'What is an edge cache and what benefits does it bring?', 'How does blockchain empower both high availability and security?', 'How can companies move from DevOps to NoOps?', and 'What are the specific use cases for an edge data cache?' Moreover, one member of the tech team explained the comparison between edge and cloud, and the development team has released some code updates. To conclude, we are counting on further development and will continue to monitor the progress.

Intro

Since we are looking at Bluzelle for the first time, we feel that we have to explain what Bluzelle is. Bluzelle is a decentralized service provider that aims to allow users to create on-demand, scalable databases for blockchain applications.

Development

- Bluzelle published a new white paper, Smart Data Caching at the Edge. Download now. This one includes answers to such questions as:
  - What problems games that are chasing a global audience are facing
  - What an edge cache is and the benefits gained from it
  - How blockchain empowers both high availability and security
  - How companies can move from DevOps to NoOps
  - What are the specific use cases for an edge data cache
- "bzapi" for internal testing and bug verification.
Bluzelle (by utilizing our new C++ common runtime) is being used to reproduce and validate database bugs. Multiple test-cases were implemented on the bzapi and will be added to future automated regression testing. - Multiswarm initialization via automation. With the addition of our multi-swarm functionality, topology complexity has dramatically increased. The operations team has made significant progress in ability to deploy and upgrade our various networks of swarm nodes. With a single command line, a swarm instance or entire swarm can be instantiated and registered to the ESR. The process of spinning up a new node to add capacity takes minutes. New tutorials: - Sprint Highlight: Adding Information onto Bluzelle Ethereum Smart Registry (ESR) In this video Software Developer Matthew Ilagan will show you a demo of adding information onto Bluzelle Ethereum Smart Registry (ESR). Social encounters Live Webinar: Edge vs Cloud Computing In this webinar you will learn about: - Current state of the cloud and proliferation of IoT devices, smart phones and so on - What is the edge and how it is different from the cloud - How the next generation of applications are different and will need edge computing — the cloud will not cut it - How Bluzelle is leading the charge with edge computing, with the Bluzelle edge cache - Today: The Cloud, Internet of Things, Smart Phones, and Status Quo We are having an explosion of internet of things. You can find small and big devices all over places like manufacturing industries and consumer businesses. Smartphones are becoming mainstream, especially in developing countries. We can almost assume that every average citizen has a smart phone. The Internet is becoming democratized. It is no longer limited to users in traditional western industrialized nations. - What is Edge? There are lots of definitions of what it is. Think of a single client computer (e.g. 
someone from the web, or someone that is playing the game) that is accessing data. Traditionally, it has to retrieve data from the cloud server, which may be set up on the other side of the world. This would take quite a while. However, an edge cache server (Bluzelle) will sit much nearer to the client, which greatly accelerates data access. The idea here is to bring the data as close to the customer as possible — that's what the edge is. Unlike traditional cloud servers, edge servers are not in a single location but distributed everywhere in the world. No matter where the clients are, they will be close to one of these edge servers. Edge computing is about bringing the edge as close as possible to the customer. The closer it is, the better the performance gets.

- Edge vs Cloud

Edge and cloud are generally opposites. Cloud concentrates power while the edge paradigm spreads it out. The cloud is about consolidating power into data centers. You have to use services like AWS, Azure or Google Cloud platforms to access computing and networking resources in centralized locations. Edge computing's goal is to move these resources away from data centers and bring them as close to the user as possible. However, edge and cloud are not mutually exclusive. Edge networks can interact with cloud networks or can exist purely on their own. This is where decentralization comes into play.

- Tomorrow: The Next Generation of Applications and Why They Need the Edge

The next generation of applications will not necessarily run close to these traditional data centers. They will run on smartphones and IoT devices, which are widely distributed and closer to the consumers, but much further away from the cloud data center than before. Remember that traditional cloud data centers have to be physically set up in fixed locations, which is expensive and time consuming. Data needs are therefore now far more spread out all over the globe.
There is a large opportunity for businesses to capture these new customers. Competitive advantage will strongly be defined by speed to access application data, and availability of this data. This requires bringing the data as close to the customer as possible, and ensuring it is always available. The users are no longer consolidated to large cities in traditionally “developed” nations. The users are now far more spread out and away from the confines of established cloud data centers. Global competitiveness means reaching these users too. The future is highly lucrative, with an explosion in IoT devices, smart phones, and applications that run on these devices. Developers must future-proof their applications, by having a dynamic architecture that adapts to the changing needs of the users, at the edge. - How the Blockchain Helps Bluzelle to Power its Edge Computing Network There are four areas that where Bluzelle is using blockchain and decentralization to power our edge cache network. However, please note that Bluzelle itself is not a blockchain but borrows the concept of decentralization (e.g. consensus etc) to make our edge product more competitive. Decentralization — There are no special servers or power authorities and where every node in the blockchain network is equally expendable. Bluzelle uses decentralization to achieve extreme edge computing abilities. Every server in Bluzelle is also equally expendable. Reliability — Like blockchain networks, data is replicated as needed across Bluzelle’s network. If a server or even a whole bunch of them go down, Bluzelle continues to run smoothly. Security — Attempts to attack a server or a group of them will fail as the rest of the network will reject such attacks. Availability — Without defined points of failure, Bluzelle is robust — this means high availability (99.999% or better). - Bluzelle: Leading the Charge with its Edge Cache Traditional caching layer was limited to Redis in singular cloud data centers. 
Bluzelle has a global network of edge servers, to cache data as close to the customer as possible, far beyond the reaches of the cloud. Bluzelle introduces an edge caching layer to replace or augment the traditional caching layer. Works with existing or new applications. Bluzelle will replace DevOps with NoOps, meaning companies can focus on software development without managing backups, scaling, and security. Bluzelle is always on and already deployed. - Other advantages of Bluzelle include: - Uses the best of blockchain tech to provide consistently high performance and availability. - Partnered with global Tier-1 network to ensure Bluzelle will always have the consistently fastest connections with its network. - Comparable in cost to existing caches. - Can deploy into new or existing applications in < 10 mins. - Support for all major programming languages. - How do you make sure all data is synced in real-time when the data is all processed at the edge? Will it be slow? It’s about 850ms delay from the time when data is written to the point it is propagated across the network. That is comparable to the speed of other key-value store services like Google, Cloudflare etc. One of our top areas in research is to have the fastest consensus algorithm to ensure data synchronization is fast. - How does Bluzelle handle content security and privacy issues? The main thing that Bluzelle is protecting is the access rights. If you have a namespace of a key-value pair stored on Bluzelle, Bluzelle will ensure nobody except authorized owners of that data can write to it, which is dictated by the private key. So if you own the private key to the namespace, you can write to that data. That is to ensure the authenticity of the data. Bluzelle currently does not encrypt data for you. The data owner can do this in the application layer. - How does token come into play? Bluzelle will release more details on token economy further down the road. 
It will see BLZ tokens being used in the upcoming release in November. BLZ tokens will be used as payment utility as well as compensation to farmers (network nodes). - Will all blockchains move towards edge computing in the future? The development team thinks the blockchain is using edge computing or is part of edge computing. Think of Bitcoin blockchain or Ethereum, their nodes are distributed everywhere. Every time you write data, it will propagate across the whole network. When you read data, it is reading from the nearest node. Therefore blockchain is a good example of an edge network. - Why are big cloud companies like Google & Amazon not moving into edge computing? Big cloud storage companies are adopting serverless computing. They are operating cloud data centers. They can’t technically adopt edge computing because edge is about moving beyond where the cloud data center is to much closer to where the customer is. Traditional cloud companies are limited to where the data center is. This is where Bluzelle’s opportunity lies. - Examples of real world adoption of edge computing? Edge computing applies to many existing applications, e.g. gaming. In a global multi-player game, you can have a client pulling the data from the edge rather than waiting 100ms for the data from a centralized server on the other side of the planet. Bluzelle is designed in a way that you don’t have to replace your backend. You don’t have to replace any of your current DBs including MySQL, NoSQL, or caching services like Redis, but use Bluzelle directly on top of these to add global availability instantly to your data. In this way we can drive adoption of edge computing faster. - Are there any regulatory hurdles? For the tech itself, Bluzelle is providing infrastructure-as-a-service, which is not subject to any regulatory hurdles as far as we know. Tokens, financials or other areas in the company are separate issues from the tech. 
Interviews:
- CEO Pavel Bains was interviewed by Jordon Heal from Coin Rivet on how the decentralised data cache network plans to enhance game performance and reduce latency.
- CTO Neeraj published an article explaining the differences between Bluzelle and BitTorrent File System (BTFS).
- CEO Pavel spoke about Bluzelle’s Data Delivery Network at the True Global Ventures Conference in Seoul, Korea. He also joined a panel on blockchain gaming.

Events: No updates.

Finance:
- Holders and Transfers
- Market

Roadmap:
- MEIER release: late April/early May, 2019.
- CURIE release: late October/early November, 2019.
- Pricing. BLZ tokens are required to pay for and use the.
- In-place mutability. Functions on native data types, performed directly on Bluzelle.

Partnerships and team members: No updates.

Social media metrics

Social media activity: The graph above shows the dynamics of changes in the number of Twitter followers. The information is taken from Coingecko.com.
https://medium.com/paradigm-fund/bluzelle-new-whitepaper-edge-vs-cloud-bd1c1b090b56?source=collection_home---4------2-----------------------
How to rebuild corrupt Postgres indexes

Rebuilding indexes

There are multiple databases to rebuild indexes in. Repeat the below process for:

- pgsql
- codeintel-db

We need to ensure there’s nothing writing or reading from/to the database before performing the next steps. In Kubernetes, you can accomplish this by deleting the database service to prevent new connections from being established, followed by a query to terminate existing connections.

export DB=pgsql # change this for other databases
kubectl delete "svc/$DB"
kubectl port-forward "deploy/$DB" 3333:5432 # doesn't use the service that we just deleted
psql -U sg -d sg -h localhost -p 3333

In Docker Compose, you will need to scale down all the other services to prevent new connections from being established. You must run these commands from the machine where Sourcegraph is running.

export DB=pgsql # change for other databases
docker-compose down # bring all containers down
docker start $DB # bring only the db container back up
docker exec -it $DB sh
psql -U sg -d sg -h localhost -p 3333

Terminate existing client connections first. This will also terminate your own connection to the database, which you’ll need to re-establish.

select pg_terminate_backend(pg_stat_activity.pid)
from pg_stat_activity
where datname = 'sg'

With a Postgres client connected to the database, we now start by re-indexing the system catalog indexes, which may have been affected.

reindex (verbose) system sg;

Then we rebuild the database indexes.

reindex (verbose) database sg;

If any duplicate errors are reported, we must delete some rows by adapting and running the duplicate deletion query for each of the errors found. After deleting duplicates, just re-run the above statement. Repeat the process until there are no errors.
At the end of the index rebuilding process, as a last sanity check, we use the amcheck extension to verify there are no corrupt indexes; an error is raised if there are (you should expect to see some output from this command).

Duplicate deletion query

Here’s an example for the repo table. The predicates that match the duplicate rows must be adjusted for your specific case, as well as the table name you want to remove duplicates from.

begin;

-- We must disable index scans before deleting so that we avoid
-- using the corrupt indexes to find the rows to delete. The database then
-- does a sequential scan, which is what we want in order to accomplish that.
set enable_indexscan = 'off';
set enable_bitmapscan = 'off';

delete from repo t1
using repo t2
where t1.ctid > t2.ctid and (
  t1.name = t2.name or (
    t1.external_service_type = t2.external_service_type and
    t1.external_service_id = t2.external_service_id and
    t1.external_id = t2.external_id
  )
);

commit;

Selective index rebuilding

In case your database is large and reindex (verbose) database sg takes too long to re-run multiple times as you remove duplicates, you can instead run individual index rebuilding statements, and resume where you left off. Here’s a query that produces a list of such statements for all indexes that contain collatable key columns (we had corruption in these indexes in the 3.30 upgrade). This is a subset of the indexes that get re-indexed by reindex database sg.

select distinct('reindex (verbose) index ' || i.relname || ';') as stmt
from pg_class t, pg_class i, pg_index ix, pg_attribute a, pg_namespace n
where t.oid = ix.indrelid
  and i.oid = ix.indexrelid
  and n.oid = i.relnamespace
  and a.attrelid = t.oid
  and a.attnum = ANY(ix.indkey)
  and t.relkind = 'r'
  and n.nspname = 'public'
  and ix.indcollation != oidvectorin(repeat('0 ', ix.indnkeyatts)::cstring)
order by stmt;

You’d take the output of that query and run each of the statements one by one.
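The guide does not reproduce the verification query itself, so here is a sketch adapted from the amcheck documentation (an assumption on my part, not taken from this guide) that checks every B-tree index in the connected database:

```sql
-- Requires the amcheck extension (shipped with Postgres contrib since v10).
create extension if not exists amcheck;

-- Run bt_index_check on every B-tree index; an error is raised for any
-- index that is still corrupt, otherwise the query just lists the indexes.
select bt_index_check(index => c.oid), c.relname, c.relpages
from pg_index i
join pg_opclass op on i.indclass[0] = op.oid
join pg_am am on op.opcmethod = am.oid
join pg_class c on i.indexrelid = c.oid
where am.amname = 'btree'
order by c.relpages desc;
```

Note that bt_index_check only verifies B-tree indexes, which is why the query filters on the access method.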
https://docs.sourcegraph.com/admin/how-to/rebuild-corrupt-postgres-indexes
A Pastebin API Wrapper for Python

Project description

Pastebin API wrapper for Python (pbwrap)

Python API wrapper for the Pastebin Public API. Only Python 3 is supported!

Documentation

This wrapper is based on the Pastebin API; read their documentation for extra information and a usage guide.

Usage

For a full list of the methods offered by the package, read the documentation.

Quickstart

Import and instantiate a Pastebin Object.

from pbwrap import Pastebin

pastebin = Pastebin(api_dev_key)

Examples

Get User Id

Returns a string with the user_id created after authentication.

user_id = pastebin.authenticate(username, password)

Get Trending Pastes details

Returns a list containing Paste objects of the top 18 trending Pastes.

trending_pastes = pastebin.get_trending()

Type models

Paste

Some API endpoints return paste data in XML format; the wrapper either converts it into a Python dictionary or returns Paste objects, which contain the following fields:
- key
- date in UNIXTIME
- title
- size
- expire_date
- private
- format_short
- format_long
- url
- hits

License

pbwrap is released under the MIT License.
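Since the wrapper's job, as described above, includes turning Pastebin's XML responses into Python dictionaries, here is a self-contained sketch of that conversion. The XML sample and the parse_pastes helper are my own illustrations (pbwrap's internal implementation may differ); the field names follow Pastebin's documented XML format:

```python
import xml.etree.ElementTree as ET

# Hand-written sample shaped like Pastebin's XML paste listing.
SAMPLE = """<pastes>
  <paste>
    <paste_key>abc123</paste_key>
    <paste_title>hello</paste_title>
    <paste_hits>42</paste_hits>
  </paste>
</pastes>"""

def parse_pastes(xml_text):
    """Turn a <paste> listing into a list of dicts, one dict per paste."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in paste}
            for paste in root.iter("paste")]

print(parse_pastes(SAMPLE)[0]["paste_key"])  # abc123
```

Each dict maps an XML tag (paste_key, paste_title, ...) to its text content, which is what makes the fields listed above accessible by name.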
https://pypi.org/project/pbwrap/
# Summary

There are some questions I see quite a lot. Let me try to answer them - both in the video and this article!

- Do you need a complex project setup? (0:27)
- { } vs {{ }}? (2:38)
- Functional (stateless) vs class-based (stateful) Components (3:59)
- Different setState Syntaxes? (6:10)
- What does this map() Thing Mean? (7:33)
- Styling React - So many Choices! (9:04)
- Integrating 3rd Party Libraries (12:15)
- What’s Up with this “Immutability” Thing? (14:17)
- Everyone can see my code! (18:10)
- React without Redux? (18:49)
- Can you use React with PHP/Node/…? (19:29)
- My State is Lost after a Page Refresh! (21:33)
- “Can I host my app on Heroku etc?” (23:55)
- My Routes are broken after Deployment! (25:29)

# Do I need a Complex Setup?

Or can you just import react.js and react-dom.js into your .html file and get started writing awesome React code? The answer is: Yes, you can do that, you don’t need a complex setup (i.e. one that uses Webpack, Babel and what have you). The easiest way to get started with React is indeed to simply add the following imports to your HTML file:

<body>
...
<script src="path/to/react.js"></script>
<script src="path/to/react-dom.js"></script>
</body>

Alternatively, you can of course use CDN links. A more complex setup gives you extras such as next-gen JavaScript features as well as JSX - a nice way of creating HTML elements from within JavaScript. But if you’re just trying to add some React magic to your app, definitely go with the simple drop-in approach. If you want a more complex workflow, I strongly recommend using the create-react-app though - it’s basically a CLI that makes the project setup a breeze!

# { } vs {{ }}

Here’s another thing that can be confusing: { } vs {{ }} - when do you use which for outputting dynamic content? The answer is simple: Unlike in Angular or Vue, the syntax for outputting dynamic content in your “templates” (you don’t really have templates in React - you’re just writing JavaScript code in the end) is { }, not {{ }}.
Since you’re just writing JavaScript code in React, you stick closer to the original JavaScript syntax if you get used to using { }. It is a syntax you use regularly, after all, when writing JavaScript code. You still sometimes see {{ }} in React apps, too, though. Let’s have a look at an example:

<p style={{ color: 'red' }}>I'm red!</p>

This can be confusing but in the end, you still only used { } to output dynamic content here. The dynamic content just happens to be a JavaScript object: {color: 'red'}. And that’s all there is to it!

# Functional (stateless) vs class-based (stateful) Components

In React Apps, you can create two types of components: Functional, stateless ones and class-based, stateful components. Here’s an example for a functional component:

import React from 'react'

const MyComponent = props => {
  return <p>I'm a functional component!</p>
}

And here’s the class-based alternative:

import React, { Component } from 'react'

class MyComponent extends Component {
  render() {
    return <p>I'm a class-based component!</p>
  }
}

Both components will render exactly the same result, so which approach should you use? What’s the advantage of each of the two approaches?

# The class-based - or stateful - Approach

Let’s start with the class-based approach. It’s also called stateful and this second name already shows what it’s all about: Components that are written by creating a class that extends Component can manage an internal state. Functional - stateless - components can’t do that! More on that in a second. Back to the class-based, stateful components for now.
Here’s an example where we do manage state:

import React, { Component } from 'react'

class NameInput extends Component {
  state = { name: '' } // Initialize state

  nameChangedHandler = event => {
    this.setState({ name: event.target.value }) // Update state with entered name
  }

  render() {
    return (
      <div>
        <p>Hello {this.state.name}</p>
        <input type="text" onChange={this.nameChangedHandler} value={this.state.name} />
      </div>
    )
  }
}

In this example, the component is able to output the user name. To allow the user to change that name, it’s stored in the state property of the class. Since the class extends Component, this is a special property. The key thing about state is that it can be changed, during runtime, from within the component. Changing the state will also trigger React to re-render the component. In addition to state, you can also implement lifecycle hooks in your stateful components.

# The functional - stateless - Approach

So we learned about the class-based approach. You can use state and lifecycle methods there. What about the functional - so-called stateless - approach? Well, as the name implies, you can’t use state there. The only data source you have in such components is props - i.e. data received from outside. When props change, this will also trigger React to re-render the component. Here’s an example for a component using props:

import React from 'react'

const GreetUser = props => {
  return <p>Hello {props.name}</p>
}

export default GreetUser

This component would be used like this:

import GreetUser from './path/to/greetUser' // omit the .js
...
<GreetUser name="Max" />
// OR something like this:
<GreetUser name={this.state.name} />

Why would you use this “limited” version of a component instead of the “more powerful” stateful alternative? Because it makes your apps easier to maintain and understand. You have clearly defined responsibilities and a predictable flow of data. Stateful and stateless components work well together!
# When to use which Type of Component

Let’s see an example for how stateful and stateless components may work well together:

import React, { Component } from 'react'
import GreetUser from 'path/to/greetUser'

class NameInput extends Component {
  state = { name: '' } // Initialize state

  nameChangedHandler = event => {
    this.setState({ name: event.target.value }) // Update state with entered name
  }

  render() {
    return (
      <div>
        <GreetUser name={this.state.name} />
        <input type="text" onChange={this.nameChangedHandler} value={this.state.name} />
      </div>
    )
  }
}

Here, we have one stateful component - NameInput - where we allow the user to change his or her name. And we use one stateless component - GreetUser - where we simply render a name. We receive it via props. The advantages here are:

- GreetUser is really re-usable. It makes no assumptions other than that it receives a name prop.
- We only change and manage our state in one component. This makes it easy for us to trace the flow of data.

# Different setState Syntaxes

When working with this.setState, you might’ve come across two different ways of using it:

this.setState({ counter: this.state.counter + 1 })

and

this.setState(prevState => { return { counter: prevState.counter + 1 } })

The first syntax is actually wrong/ suboptimal in this case. Why? It’s not that widely known but setState actually updates the state asynchronously (Source:). Due to this possible delay between your code execution and the actual state update, the first syntax could lead to a counter that’s not incrementing correctly. this.state.counter simply may not have been updated yet when you use it to update the state again.
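A toy, plain-JavaScript simulation of the batching behavior just described (my own sketch - React's real implementation differs, but the effect on the two setState forms is the same):

```javascript
// Toy "component": setState calls are queued and applied later,
// the way React batches state updates.
let state = { counter: 0 };
const queue = [];

function setState(update) {
  queue.push(update); // nothing is applied yet
}

function flush() {
  for (const update of queue) {
    const partial = typeof update === 'function' ? update(state) : update;
    state = { ...state, ...partial };
  }
  queue.length = 0;
}

// Object form: both calls read the SAME stale state.counter (still 0).
setState({ counter: state.counter + 1 });
setState({ counter: state.counter + 1 });
flush();
console.log(state.counter); // 1, not 2!

// Updater-function form: each update receives the latest state.
state = { counter: 0 };
setState(prev => ({ counter: prev.counter + 1 }));
setState(prev => ({ counter: prev.counter + 1 }));
flush();
console.log(state.counter); // 2
```

Because the object form captures the counter value at call time, two "increments" collapse into one; the updater function is evaluated at apply time and sees the intermediate result.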
The recommended way of updating the state - if you depend on the old state - is to use the function-form of setState:

this.setState(prevState => { return { counter: prevState.counter + 1 } })

With this syntax, you pass a function to setState that receives two arguments: prevState and prevProps - so the state and props AFTER the last successful state update. Inside of the function, you then return the object that should be merged with the old state. By using this approach, you’re guaranteed to correctly access the old state you’re depending on.

# What does this map() Thing Mean?

When writing React apps, you’ll use map() quite a bit. Typically to output an array of elements as JSX. Here’s an example:

const hobbies = ['Sports', 'Cooking', 'Reading']
return hobbies.map(hobby => <p key={hobby}>{hobby}</p>)

This will output three <p> elements that print the respective hobby. map() does one simple thing: It maps one array into a new one (you can dive deeper here). So in the above snippet, it transforms an array of strings into an array of JSX elements that print the text to the screen. This array can then be rendered to the DOM by React.

# Styling React - So many Choices!

When it comes to styling React apps - we have plenty of choices. You typically start with inline styles:

<p style={{ color: 'red' }}>Hello there!</p>

The advantage of this approach is that the style is scoped to this component and only applies to the element you added it to. You can also set the styles dynamically like this:

<p style={{ color: this.state.chosenColor }}>Hello there!</p>

The disadvantage is that you don’t use real CSS here, and you therefore can’t use pseudo selectors or media queries. Additionally, it quickly bloats your JSX code if you put more complex styling code in there. There are many packages and approaches that try to solve these disadvantages and make styling of React apps easier.
I can recommend looking into the following libraries and solutions (in no particular order).

- Radium
- Styled Components
- CSS Modules - this is my favorite!

# How to integrate 3rd Party CSS and JS Libraries

“How can I add library XY to my React project?” That’s a question I see quite a lot. The good thing is: It’s really easy to add libraries to a React project. I’m going to assume a project which uses a setup created via create-react-app. If you just dropped React into an HTML file, you can do the same for other packages. So how do you add a library then? The easiest solution is to use npm:

npm install --save my-library

Once that’s done, you can import it into your main.js or index.js (whatever you have) file. For a CSS file it would look like this:

import 'path/to/my/file.css' // DON'T omit .css - Webpack needs that

# What’s up with this “Immutable” Thing?

Once you dive a little bit deeper into React, you’ll read it all the time: “Update state immutably”. But what does this mean and why is this important? It means that you update arrays and objects by replacing the old value with a totally new one. You don’t mutate the old value. Why is this recommended? Because objects and arrays are reference types in JavaScript. This means, that a variable (or class property) storing an object (or array) doesn’t really store the value but only a pointer to some place in memory where the object/ array value lives. If you then assign the object to a new variable, that variable will only copy the pointer and therefore still point to the same, single object/ array in memory. Here’s an example of what this can lead to:

const person = { name: 'Max' }
const copiedPerson = person
console.log(copiedPerson.name) // prints 'Max'
copiedPerson.name = 'Sven'
console.log(person.name) // prints 'Sven'

In this example, console.log(person.name) also prints 'Sven' even though only copiedPerson.name was changed. This happens due to this reference-type behavior.
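Building on that example, here is a runnable sketch (my own, not from the article) that contrasts a shallow spread copy with a simple JSON-based deep copy - note that the JSON trick only suits plain data, since it drops functions and undefined values and mangles Dates:

```javascript
// Shallow copy: top-level properties are copied, nested objects are shared.
const person = { name: 'Max', address: { city: 'Munich' } };
const shallowCopy = { ...person };

shallowCopy.address.city = 'Berlin';
console.log(person.address.city); // 'Berlin' - the nested object is shared!

// Simple deep copy for plain data: serialize and parse again.
const original = { name: 'Max', address: { city: 'Munich' } };
const deepCopy = JSON.parse(JSON.stringify(original));

deepCopy.address.city = 'Berlin';
console.log(original.address.city); // 'Munich' - the nested object was copied
```

The shallow copy protects the top level only; any nested object still needs its own copy if you want true independence.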
You can learn more about it - and the difference to primitive types - in this video:

Now that we know why we should update objects and arrays immutably - how does that actually work? You can use different approaches - let’s have a look at immutably updating objects first:

const person = { name: 'Max' }
const copiedPerson = Object.assign({}, person)

Object.assign() assigns all properties of the object which is passed as the second argument to its first argument - typically an empty object if you want to copy an object. This will yield you a brand-new object which simply copies the values of all the properties of the old object. If you use next-gen JavaScript syntax, this becomes even easier:

const person = { name: 'Max' }
const copiedPerson = { ...person }

Here, we use the next-gen Spread Operator for Objects. This also creates a new object and copies all the old properties. We can also copy arrays immutably:

const hobbies = ['Sports', 'Cooking']
const copiedHobbies = hobbies.slice()
// OR: const copiedHobbies = [...hobbies]

Just as with objects, we can use the spread operator for arrays, too. Alternatively, you can use the slice method to create a brand-new copy of an array. Important: All these approaches still only create shallow copies of arrays and objects! If your copied object or array had some nested properties or elements (i.e. properties/ elements that held another object or array), these objects would not be copied. You’d still only copy a pointer to them! For deeply copying objects and arrays, have a look at the following article:

# React without Redux?

When diving into React, you sometimes get the impression you can’t use it without Redux. This is a wrong impression though! Redux is extremely helpful if you’re building bigger React apps as it makes state management a lot easier and leads to far more maintainable code. But especially in smaller projects, it might be overkill to include Redux and set everything up for it.
Additionally, there are alternatives - most importantly MobX. MobX also makes state management easier, it’s also suited for very big apps but it follows a different, observable-based approach. Definitely check it out and decide if you need Redux, no Redux or want to use MobX!

# “Can I use React with PHP/Node/…?”

I often see the question whether you can use React with PHP, Node.js or some other server-side language. And the answer is: Yes, absolutely! React doesn’t care about your backend and your server-side language, it’s a client-side framework, it runs in the browser! It also doesn’t matter if you drop your React imports into the (server-rendered) HTML files or if you’re building a SPA. In both cases, React only manages the frontend! If you’re building a SPA, you only communicate with servers through Ajax requests (via libraries like axios), hence your backend needs to provide a (RESTful) API to which React can send its requests. And that’s all! One exception is important though: If you’re using server-side pre-rendering of React apps (e.g. via Next.js), you’ll still not use React to write server-side code (i.e. to access databases, work with file storage or anything like that) but you pre-render your React components on the server. Apart from that case, a React app - for example one that fetches data in the componentDidMount lifecycle method and is bootstrapped in your main.js file - doesn’t use any server-side language! It’s - after you ran npm run build - just a bunch of JavaScript and CSS files as well as the index.html file. You don’t need a Node server for that! If you use React in server-side rendered apps, i.e. where some server-side templating engine generates the views, you will need a host that runs that server-side language of course.

# How to fix broken Routes after Deployment

After deploying a React app, you might find that routes which worked locally are suddenly broken. The reason is that your server doesn’t know about the routes you configured inside your React single page app! Therefore, your server can’t do anything with an incoming request pointing at a /products route - it’s totally unknown to it! Due to the way the web works, requests reach the server first though, before your client-side (i.e. React) app gets loaded and gets a chance to handle the request. The common fix is to configure the server to return the index.html file for all unknown routes, so that the React app loads and its router can take over. If you still want to render a 404 error page for routes that are really unknown, i.e. that are also not configured in your React app, you’ll need to add a catch-all route like this:

// Configure all other routes first or they'll not be considered!
<Route component={NotFoundPage} />
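As a concrete sketch of such a server-side fallback (my own example, assuming an Nginx server in front of the create-react-app build output; the article itself doesn't prescribe a particular server), the catch-all configuration can look like this:

```nginx
# Hypothetical Nginx config: serve the production build, and fall back to
# index.html for any path Nginx doesn't know, so the client-side router
# gets a chance to handle it.
server {
    listen 80;
    root /var/www/my-react-app/build;  # assumed build output location
    index index.html;

    location / {
        # try the exact file, then a directory, then hand off to the SPA
        try_files $uri $uri/ /index.html;
    }
}
```

Other servers have equivalents (e.g. an Apache mod_rewrite fallback); the idea is always to return index.html for unknown routes instead of a 404, so the SPA can render its own routes or error page.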
https://academind.com/learn/react/react-q-a/
Nice keyboard inputs in Elm. It is quite tedious to find out the currently pressed-down keys with just the Keyboard module, so this package aims to make it easier.

You can use Keyboard.Extra in two ways. There are examples for both:
- one where the North, NorthEast, etc. directions work
- one using updateWithKeyChange to show when a key is pressed down and when it is released

All of the examples are also in the example directory in the repository.

If you use the "Msg and Update" way, you will get the most help, such as:
- a Key type, such as ArrowUp, CharA and Enter
- whether e.g. Shift is pressed down when any kind of a Msg happens in your program
- arrows as { x : Int, y : Int } or as a union type (e.g. South, NorthEast)

When using Keyboard.Extra like this, it follows The Elm Architecture. Its model is a list of keys, and it has an update function and some subscriptions. Below are the necessary parts to wire things up. Once that is done, you can get useful information using the helper functions such as arrows and arrowsDirection.

Include the list of keys in your program's model:

import Keyboard.Extra exposing (Key)

type alias Model =
    { pressedKeys : List Key
    -- ...
    }

init : ( Model, Cmd Msg )
init =
    ( { pressedKeys = []
      -- ...
      }
    , Cmd.none
    )

Add the message type in your messages:

type Msg
    = KeyMsg Keyboard.Extra.Msg
    -- ...

Include the subscriptions for the events to come through (remember to add them in your main too):

subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.batch
        [ Sub.map KeyMsg Keyboard.Extra.subscriptions
        -- ...
        ]

And finally, you can use update to have the list of keys be up to date:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        KeyMsg keyMsg ->
            ( { model | pressedKeys = Keyboard.Extra.update keyMsg model.pressedKeys }
            , Cmd.none
            )

        -- ...
Now you can get all the information anywhere where you have access to the model, for example like so:

calculateSpeed : Model -> Float
calculateSpeed model =
    let
        arrows =
            Keyboard.Extra.arrows model.pressedKeys
    in
    model.currentSpeed + arrows.x

isShooting : Model -> Bool
isShooting model =
    List.member Space model.pressedKeys

Have fun! :)

PS. The Tracking Key Changes example shows how to use updateWithKeyChange to find out exactly which key was pressed down / released on that update cycle.

With the "plain subscriptions" way, you get the bare minimum:
- a Key type, such as ArrowUp, CharA and Enter

Setting up is very straight-forward:

type Msg
    = KeyDown Key
    | KeyUp Key
    -- ...

subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.batch
        [ Keyboard.Extra.downs KeyDown
        , Keyboard.Extra.ups KeyUp
        -- ...
        ]

There's an example for this, too: Plain Subscriptions
https://package.frelm.org/repo/733/3.0.4
#include <vtkStripper.h> Inheritance diagram for vtkStripper: vtkStripper is a filter that generates triangle strips and/or poly-lines from input polygons, triangle strips, and lines. Input polygons are assembled into triangle strips only if they are triangles; other types of polygons are passed through to the output and not stripped. (Use vtkTriangleFilter to triangulate non-triangular polygons prior to running this filter if you need to strip all the data.) The filter will pass through (to the output) vertices if they are present in the input polydata. The ivar MaximumLength can be used to control the maximum allowable triangle strip and poly-line length. By default, this filter discards any cell data associated with the input. This is because the cell structure changes and the old cell data is no longer valid. When the PassCellDataAsFieldData flag is set, the cell data is passed as FieldData to the output using the following rule: 1) for every cell in the output that is not a triangle strip, the cell data is inserted once per cell in the output field data. 2) for every triangle strip cell in the output: ii) 1 tuple is inserted for every point (j | j >= 2) in the strip. This is the cell data for the cell formed by (j-2, j-1, j) in the input. The field data order is the same as the cell data, i.e. (verts, lines, polys, tstrips). Definition at line 64 of file vtkStripper.h.
https://vtk.org/doc/release/5.0/html/a02053.html
On Fri, Sep 4, 2009 at 07:50, Niclas Hedhman <niclas@hedhman.org> wrote: > On Fri, Sep 4, 2009 at 1:23 PM, Kevan Miller<kevan.miller@gmail.com> > wrote: > > > So, let's assume that one or more OSGi spec implementations are a core > part > > of Aries -- with specific features/customization for Aries. Personally, > it > > seems reasonable that an Aries project would want these customized spec > > implementations close at hand -- within their project. > > I would like to draw attention to "Pax Web", which is both a OSGi Http > Service implementation as well as many extensions, such as war, jsp, > filter support. Yet it is done in a way of a custom extension > mechanism (Pax Web Extender), so that if you only use Pax Web, you > only get the basic spec implementation, nothing more, nothing less. I > am sure a similar approach could be made here, where the clean spec > part is at one place and the customizations are "close at hand". > My real problem is not the location, but really the fact that we'd split things. Do you really see how easy it would be if pax-web was developed at apache and pax-web-extender in ops4j ? The two are somewhat tied together, and the value for having pax-web somewhere else is much lessened by the problems it would bring (it would be much more difficult to keep both in sync, meaning the communities have to be roughly the same, and what about the release process which would be much more difficult too). For Aries, I would see the same problem. That might be even worse given some of the things that are supposed to be done don't exist yet. So when you have an existing code base, you can envision splitting it into two when it's mature enough. Another thing, is that some things are not yet OSGi spec, but might become so in the future (such as application metadata and such), so it might be difficult to choose a destination right now. 
For blueprint, some aspects are not yet standardized (such as custom namespaces), so they are tightly bound to the implementation. I don't think it would make much sense to split that either. Let's try to find a solution to this problem.

> [...]
>
> Richard is over-reacting, and statements like that should be left for what
> it is; an attempt to blow things out of proportion to gain sympathy,
> possibly out of frustration.
http://mail-archives.apache.org/mod_mbox/incubator-general/200909.mbox/%3Cb23ecedc0909040421x61e0c076pb447fd0df8786342@mail.gmail.com%3E
Chris Douglas commented on HADOOP-5307:
---------------------------------------

bq. Of course the patch is technically backwards-incompatible, what I say is that, the frequency is expected to be so small that it is negligible.

Neither of us has enough data to assert anything about the frequency of any case because it's a public class. While my intuition matches yours, "incompatible change" isn't a statistical definition, let alone one based on our expectations.

bq. passing an array, possibly containing null values seems to me generic enough to be introduced in StringUtils rather than a custom solution in the context(DBOutputFormat).

I disagree. By supporting an escape for null references, this is defining a serialization for arrays of String, rather than representing an array of String in a single String. For this, the Stringifier seems like a more appropriate choice than changing the semantics of a method on StringUtils.

bq. The API indicates to use the supplied column names, when in fact it did not, so 4955 is a bug fix, fixing the expected behavior from the API, rather than an improvement, introducing a new API.

I see. From the description, it sounded like this was adding functionality by using the fields provided. I don't think adding a new API is necessary to differentiate an improvement from a bug; it's sufficient to change functionality. If the intent was to use them, then OK.

bq. What would you suggest?

Not sure. Since this seems to be serializing an array in and out of configs, I'm leaning towards the Stringifier work as a solution local to DBConfiguration. Would that work?
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200903.mbox/%3C821121899.1236991130535.JavaMail.jira@brutus%3E
Agree with David, it's not there and thinking about how the data is laid out on disk, it can't be done without changing core code or harming something else.

> if this is a performance concern

It's not, it was to supply an administrative function on SuperColumns, but it would be good to not crush a node.

> a separate column family which just contains the column names and
> timestamps with empty values.

Eventually, you'd want to ask a client library to do this, at the cost of two writes every time you add a new column. But then you'd need read-ahead to check if the column is new, which would kill write performance.

Bill

Jonathan Shook wrote:
> I think you are correct, David. What Bill is asking for specifically
> is not in the API.
>
> Bill,
> if this is a performance concern (i.e., your column values are/could
> be vastly larger than your column names, and you need to query the
> namespace before loading the values), then you might consider keeping
> a separate column family which just contains the column names and
> timestamps with empty values.
>
> On Sun, May 16, 2010 at 4:37 AM, David Boxenhorn <david@lookin2.com> wrote:
>> Bill, I am a new user of Cassandra, so I've been following this discussion
>> with interest. I think the answer is "no", except for the brute force method
>> of looping through all your data. It's like asking for a list of all the
>> files on your C: drive. The term "column" is very misleading, since
>> "columns" are really leaves of a tree structure, not columns of a tabular
>> structure.
>>
>> Anybody want to tell me I'm wrong?
>>
>> BTW, Bill, I think we've corresponded before, here:
>>
>>
>> On Fri, May 14, 2010 at 2:23 AM, Bill de hOra <bill@dehora.net> wrote:
>>> A SlicePredicate/SliceRange can't exclude column values afaik.
>>> >>> Bill >>> >>> Jonathan Shook wrote: >>>> get_slice >>>> >>>> see: under get_slice and >>>> SlicePredicate >>>> >>>> On Thu, May 13, 2010 at 9:45 AM, Bill de hOra <bill@dehora.net> wrote: >>>>> get_count returns the number of columns, not the names of those columns? >>>>> I >>>>> should have been specific, by "list the columns", I meant "list the >>>>> column >>>>> names". >>>>> >>>>> Bill >>>>> >>>>> Gary Dusbabek wrote: >>>>>> We have get_count at the thrift level. You supply a predicate and it >>>>>> returns the number of columns that match. There is also >>>>>> multi_get_count, which is the same operation against multiple keys. >>>>>> >>>>>> Gary. >>>>>> >>>>>> >>>>>> On Thu, May 13, 2010 at 04:18, Bill de hOra <bill@dehora.net> wrote: >>>>>>> Admin question - is there a way to list the columns for a particular >>>>>>> key? >>>>>>> >>>>>>> Bill >>>>>>> >>
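The workaround suggested in the thread (a companion column family holding just the column names, at the cost of a second write per insert) can be modeled with plain dictionaries. This toy sketch is not a Cassandra client; it only illustrates the dual-write pattern:

```python
# Toy model (plain dicts, not real Cassandra) of the suggested pattern:
# keep a second "names-only" column family so column names can be listed
# without loading the (possibly much larger) values.
data = {}    # column family: key -> {column_name: big_value}
names = {}   # companion CF:  key -> {column_name: empty marker}

def insert(key, column, value):
    # two writes per insert: the real value, plus an empty marker column
    data.setdefault(key, {})[column] = value
    names.setdefault(key, {})[column] = b""

def list_column_names(key):
    # reads only the lightweight names CF, never touching the big values
    return sorted(names.get(key, {}))

insert("row1", "title", "x" * 1_000_000)
insert("row1", "body", "y" * 1_000_000)
print(list_column_names("row1"))  # ['body', 'title']
```

As Bill notes, the real cost is not only the extra write but the read-ahead needed to decide whether a column is new.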
http://mail-archives.apache.org/mod_mbox/incubator-cassandra-user/201005.mbox/%3C4BF10584.20003@dehora.net%3E
Creating the Watcher Builder

When I build this kind of library, I like to separate the configuration of the component from the actual creation of the component internally. I use a common module defining the configuration syntax, which I share between a builder and the actual implementation. When the user starts using the DSL, he's actually talking to the builder, and the builder object creates the implementation instance a little bit later. That way, the chance of errors is lower, and the syntax is explicitly defined separately so it's easier for somebody else to learn.

We'll start with the parts of the builder component that can exist on their own without any other classes. Listing 2 shows the output of the specs for that part.

loading the library
- should raise an exception when IRONRUBY_VERSION is undefined

WatcherSyntax
when initializing
- should raise an error when no path is given
- should have a path
- should have an empty filters collection when no filter is provided
- should register the filters
- should by default not recurse in subdirs
- should register and execute the block
when initialized
- should allow setting the path
- should allow setting extra filters
- should disable recursing into subdirs
- should enable recursing into subdirs
when registering handlers
- should register a handler without a filter
- should register a handler with a filter
- should register a handler by method name

The specs above are actually about the behavior that will be shared between the builder and the watcher implementation object. Implementing this class would bring us pretty close to where we need to be in terms of syntax. Afterwards, it's a matter of providing an entry point and actually building the implementation.

Playing Nice With Other Libraries

The library we are creating will only work on IronRuby. It might be good form to inform other Ruby implementations that this library needs to be loaded from IronRuby to work.
There are several ways to solve that, and most of them involve checking for a constant or the value of a constant. In this case, I opted to check for the constant IRONRUBY_VERSION, because I know that constant is unique to the IronRuby implementation. Shri Borde of the IronRuby/DLR developer team recommends the following statement to check if you're running in IronRuby: do_some_ironruby_stuff if defined?(RUBY_PLATFORM) and RUBY_PLATFORM == "ironruby". The latter is probably the proper way of checking for the Ruby platform you're running on.

Because I know where this is heading, I'm going to include this straight into a module. I started out with this as a part of the WatcherBuilder class and later extracted it into the WatcherSyntax module shown in Listing 3.

module WatcherSyntax

  attr_accessor :path, :filters, :subdirs, :handlers

  def path(val = nil)                                  #1
    @path = val if val
    @path
  end

  def filter(*val)                                     #2
    @filters = register_filters @filters, *val
  end

  def top_level_only                                   #3
    @subdirs = false
  end

  def recurse
    @subdirs = true
  end
  alias_method :include_subdirs, :recurse

  def on(action, *filters, &handler)                   #4
    @handlers ||= {}
    @handlers[action.to_sym] ||= {}
    hand = @handlers[action.to_sym]

    filters = [:default] if filters.empty?
    filters.each do |filt|
      filt = :default if filt.to_s.empty?
      hand[filt] ||= []
      hand[filt] << handler
    end
  end

  private

  def register_filters(coll, *val)
    val.inject(coll || []) { |memo, filt| memo << filt unless memo.include?(filt); memo }
  end

end

#1 Overloading the path attribute getter
#2 Registering filters
#3 Switching for subdirectories
#4 Registering event handlers

The code in Listing 3 first overloads the path getter (#1) that is created by the attr_accessor method. When the provided value is not nil, it will set the path value. Next, we define the filter method (#2). This can be used to register extra filters.
The actual registration of a filter is handled by a private method, register_filters, which ensures the new filters are added only if they don't already exist. The next bit is two methods, one the inverse of the other, that function as a switch to turn recursing into subdirectories on or off (#3). The last method is the on method (#4), which is probably the most complex method in this chunk of code. The reason this code is more complex than it could be is that I didn't create a proper class to encapsulate a handler registration; instead I am using a Hash as the data structure. As a consequence, it also needs to be initialized with default values. After the data structure initialization, it will add the handler to the registration's handler collection and register the filters in the filter collection for this handler. When the filters parameter of the on method is empty, it gets initialized with a :default symbol for the key in the Hash.

If we were to include the module presented here into a class at this point, we'd pass about 90% of the specs from Listing 2. We'll now write some specs for the methods that are specific to a builder:

WatcherBuilder
when building watches
- should create a watcher
- should register a new watcher in the watcher bucket
- should register a new watcher and start it

This output of the specs mentions a watcher bucket. This is a class I did create for encapsulating watcher registrations. I chose to create a class in this case because we need extra behavior over that of a standard Array: we want it to be able to stop and start the registered watchers, and the map method should return a new WatcherBucket instead of an Array. For brevity, this class and its specs aren't included in the listings in this chapter, but they are provided with the code samples for this book. Listing 4 shows the complete implementation of the builder class.
class WatcherBuilder
  include WatcherSyntax                                #1

  def initialize(path, *filters, &configure)           #2
    @path = path
    @filters = register_filters [], *filters
    @subdirs = false
    @handlers = {}
    instance_eval &configure if configure              #3
  end

  def build
    Watcher.new @path, @filters, @subdirs, @handlers   #4
  end

  def method_missing(name, *args, &b)                  #5
    if name.to_s =~ /^on_(.*)/
      self.on $1, *args, &b
    else
      super
    end
  end

  def self.watch(path, *filters, &b)
    @watchers ||= WatcherBucket.new                    #6
    @watchers << WatcherBuilder.new(path, *filters, &b).build
  end

  def self.build(&b)
    @watchers = WatcherBucket.new
    instance_eval(&b)
    @watchers.start_watching
    @watchers
  end
end

#1 Include the Syntax module
#2 Initialize defaults
#3 Call instance_eval with extra config
#4 Build an implementation object
#5 Method missing for named handlers
#6 Entry point for the DSL

The first thing we do is include the WatcherSyntax module in the WatcherBuilder class (#1). This gives us access to all the instance methods defined in that module. Next, we set up a constructor method (#2) that will set up some defaults. The last thing it does is call instance_eval and pass it the &configure block (#3). Using instance_eval in this case is what gives us clean syntax without defining a receiver for the methods. When you call instance_eval on an object instance, the contents of that block are evaluated with the receiver of self; that is, self becomes the context of the block.

After the initialization of the builder, we defined a build instance method that creates a new instance of a Watcher implementation (#4). The last instance method that is defined is method_missing (#5). You may have noticed that we've only defined the on method in the WatcherSyntax module, which takes an action as first argument. We're going to leverage method_missing as a method dispatcher for all methods that start with on_. The remainder of the method name is assumed to be an action name, and it gets dispatched to the on method.
If it doesn't start with on_, the call gets passed on to the old behavior. The last two methods are class methods, and both could serve as entry points into our DSL. The first one defines a single watch on a path with its handlers and adds those to the WatcherBucket (#6) contained in the @watchers singleton variable. The last method, build, is the actual entry point for our DSL and can handle many calls to watch, thereby allowing the creation of multiple watches on multiple paths.

All that is left to do now for us to get to the syntax shown in Listing 1 is define a global method that forwards its call to the build method of the WatcherBuilder class.

def filesystem(&b)
  FsWatcher::WatcherBuilder.build(&b)
end

If we were to run the specs at this point, they should all pass, allowing us to move on to the actual implementation of the Watcher class.

(master)» ibacon -q spec/**/*_spec.rb
.....................
21 tests, 29 assertions, 0 failures, 0 errors

All specs pass; the natural order of the universe is preserved. In short, all is well with the world. We now have a configuration DSL that can build watchers and start them, if only they had been implemented. The next and last part of this article talks about implementing the actual functionality.
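For readers without IronRuby at hand, the two mechanisms the builder relies on (instance_eval running the config block with the builder as self, and method_missing dispatching on_* calls) can be demonstrated in a minimal, self-contained plain-Ruby sketch. TinyBuilder and its methods are invented for illustration and are not part of the article's FsWatcher library:

```ruby
# Minimal sketch of the builder pattern used above, in plain Ruby.
class TinyBuilder
  attr_reader :handlers

  def initialize(&config)
    @handlers = {}
    instance_eval(&config) if config   # block body runs with self == builder
  end

  def on(action, &handler)
    (@handlers[action.to_sym] ||= []) << handler
  end

  def method_missing(name, *args, &b)
    if name.to_s =~ /^on_(.*)/
      on($1, &b)                       # on_changed {...} becomes on(:changed) {...}
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("on_") || super
  end
end

b = TinyBuilder.new do
  on_changed { |path| "changed: #{path}" }   # dispatched via method_missing
  on(:deleted) { |path| "deleted: #{path}" } # registered directly
end

puts b.handlers.keys.inspect                  # [:changed, :deleted]
puts b.handlers[:changed].first.call("a.txt") # changed: a.txt
```

Note the bare on_changed call works only because instance_eval makes the builder the receiver inside the block, which is exactly what gives the DSL its clean, receiver-less syntax.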
https://www.drdobbs.com/open-source/sweetening-the-plain-vanilla-net-filesys/225200551?pgno=2
I need to be able to time very short loops accurately (as in "see how long it takes to execute"; I DON'T want a short-interval timer, there's a big difference). This is important in my code, because I'm writing a DLL that's a plug-in to a real-time multitrack HD audio recorder. The plug-in provides audio processing functions to the main program. A single file sometimes gets to 50MB, and this program works with multiple such files. You can see how a slow loop that operates on each sample can severely bring down the system.

Anyway, I tried various ways to time the loops accurately, but haven't found anything accurate enough. The best I found is to use QueryPerformanceCounter(), which gives me a timing resolution of 838ns, quite small steps. The problem is not with the timer's resolution, but with the fact that every time I do the timing, I get different results. This varies from day to day, and appears to be connected with what mood the OS is in on that day, and also the position of the moon.

I try to make the timing intervals small, so that my code isn't pre-empted before the timing period is over. I also tried putting a Sleep(0), or Sleep(10), call just before I start timing, to make sure the likelihood of my code being pre-empted is small. I also boost the PriorityClass and ThreadPriority to their max levels just before the timing starts, and restore them just after. These techniques do help to some extent, but I still get results that vary as much as 35% from one run to the next, and from day to day. Obviously I can't trust this to see whether one version of my code is 10% faster than another or not.

It appears to me as if there are some low-level hardware interrupts happening that I have no control over. I also looked at the "Zen Timer", but it appears as if that's for DOS only. So, my simple question is: how can I actually know how many cycles MY code takes to complete?
If I knew that, it wouldn't matter if my code was interrupted, because the cycle count would be for my code only.

Imagine this feature in VC++: You compile your code with debug info on, but the release build. Then you start debugging, and place a breakpoint just before the code you want to time. When you start single-stepping, you open a debug window (just like the Watch, Memory, etc.), and there you have some options as to what type of CPU you want to simulate, and you can also reset the cycle counter. Then, as you single-step (or run up to the next breakpoint), the debugger adds cycles to the counter, depending on the selected CPU. I see no reason why it can't do this. It already knows the assembly instructions; it just needs to be taught how long each one takes, as well as about pairing, overlapped instructions, etc. This would make it SO easy to time critical code, because you could know EXACTLY how many cycles a piece of code will take to execute, as well as simulating running it on different CPUs.

When can we expect such a cool feature in VC++? Anyone have any idea? And why would it NOT be possible to do it? Would it be possible to add a third-party plug-in to VC++ to do this, and are there any available at this point? Any insight, advice, comments welcome.

Steven Schulze
Concord, CA

> Then, as you single-step [...], the debugger adds
> cycles to the counter

You have correctly summarized the problems inherent in timing small sections of code, but I don't think your fix will work. The time to access a *single* aligned DWORD may vary from an apparent zero cycles (if run in parallel to, say, an FP instruction, with a memory barrier following) to some 10 million (!) cycles (cache miss, TLB miss, PTE faulted in from disk, data page faulted in from disk). Unless you run this on a box with no VM, you are out of luck. Oh, and I hope you unplugged the network card.
Even if we ignore all this, the debugger won't be able to accurately keep track, as a single-step or other sort of interrupted flow of execution means that the instruction queue is empty when your code is resumed. On the other hand, in real life, your DLL will also run in a real, busy, system ... so maybe it would make more sense to run a test series under heavy load and _then_ measure the total execution time, to arrive at the slack time left on that config under such and such a load. In that case, just run the code long enough to average out the asynchronous nature of interrupts and the like. -- Cheers, Felix. If you post a reply, kindly refrain from emailing it, too. Note to spammers: fel...@mvps.org is my real email address. No anti-spam address here. Just one comment: IN YOUR FACE! >I need to be able to time very short loops accurately Did you try GetThreadTimes? This should give you the amount of time that the thread spent executing, no matter how many times it was preempted by other threads. Now, I'm not sure who gets charged by NT for time spent in hardware interrupts, or exactly how and when NT charges the time (it should be doing it either at the end of a quantum, or when the thread goes to sleep/wait state), but I think you ought to try this function out and see if the results are more stable. Hope that helps. --------------------------------------------------------------------------- Dimitris Staikos dsta...@unibrain.com (Business mail only please), dsta...@softlab.ece.ntua.gr (Personal mail), ICQ UIN 169876 Software Systems & Applications Chief Engineer, UniBrain, --------------------------------------------------------------------------- Any sufficiently advanced bug is indistinguishable from a feature. Some make it happen, some watch it happen, and some say, "what happened?" How can we remember our ignorance, which our progress requires, when we are using our knowledge all the time? 
How a man plays a game shows something of his character; how he loses shows all of it.
---------------------------------------------------------------------------

Yes, I understand all this, but the idea is to simulate a "perfect" machine so that what you see is the best possible performance of your code under the best possible situation. This way, you can concentrate on your code to get ITS cycle count as low as possible. There's nothing you can do in your code to guard against a noisy system, but if you can get YOUR OWN code optimized as well as possible, then you've done all you can.

BTW, I have a Beta copy of Intel's VTune 3.0, and it actually has a similar feature, where it will show you in detail a selected range of code's cycle time, pairing, penalties, etc. It makes some basic assumptions, such as that the data is already in the cache, and then does a simulation on the CPU of your choice. Unfortunately it's a little buggy, and it's REALLY slow. And it's a pain to switch between VC++ and VTune for every change you make in your code (about 10 minutes turnaround on my computer). The fact that it's able to give me this kind of info in the first place tells me it's definitely possible.

Also, you can give a listing of your disassembled code to an assembly programming guru, and in a short time he can tell you: "Under perfect conditions, this code will take xxx cycles to execute". Why can't the debugger do the same for me?

Steven Schulze
Concord, CA

Excellent suggestion! While this is definitely a good way to go, unfortunately, I'm using W98, and, of course, it's not available under W98 :( Oh well... But thanks for telling me about this function. I was unaware of it. Sometime in the near future, I should switch to NT, and then I can start playing with it. Until then, I guess I'm stuck in W98-land...

Steven Schulze
Concord, CA
> Why can't the debugger do the same for me?

The debugger can't do it because there are not enough people asking for just that. And while I do not claim any honorific (except "slob", maybe), I can tell you that I hate cycle-counting. Passionately. I'd rather have a real result, with imprecision, from a profiler than a manual count from me. :-\

> Now, I'm not sure who gets charged by NT for time spent in hardware
> interrupts

I fear this won't help, as the time spent in ISRs and such is charged to the thread currently running. (Not 100% sure, though.) For a definitive answer, we lean back and wait if Jamie H. notices us. :-)

If u r using a pentium or above chip, then you may want to use an opcode which just does what you need; it gets a cycle count from the chip. Look up RDTSC in a Pentium assembly coding guide. Here is some code which emits the correct assembly code. It compiles with VC 5.0, but I am sure you can modify it to work with any other compiler.

/* cl -W3 -O2 -Ox rdtsc1.c */

#include <stdio.h>

#define RDTSC __asm _emit 0x0f __asm _emit 0x31

static __inline unsigned __int64 get_clock ()
{
    unsigned long lo;
    unsigned long hi;

    _asm {
        RDTSC
        mov lo,eax
        mov hi,edx
    }

    return (((unsigned __int64) hi)<<32) + lo;
}

int main()
{
    unsigned __int64 t0, t1;

    t0 = get_clock ();
    t1 = get_clock ();

    printf ("Cycles elapsed = %I64d\n", t1 - t0);

    return (0);
}

-bobby

Steven Schulze wrote in message ...

So do I, which is why I'd love such a feature.

>Passionately. I'd
>rather have a real result, with imprecision, from a profiler than a
>manual count from me. :-\

Yes, but while it might be ok for some people, other people might need more precise info, since their projects might require it. BTW, I was thinking - the compiler DEFINITELY already knows this info, since it needs it to do the optimizations. Why can't we be privy to this info as well, if we need it?

Steven Schulze
Concord, CA

Thanks, I'll DEFINITELY look into it.
BTW, does the RDTSC use the same clock as QueryPerformanceCounter()?

Steven Schulze
Concord, CA

I believe you will still have some accuracy problems using this (RDTSC) instruction due to the context switches that can occur while you are timing code. I'm not sure how much this helps, but I recall reading in one of the programming journals (WDJ, MSJ or WinTech) about a VxD someone wrote to monitor context switches so that you could account for the number of CPU cycles executed out of context from your timed code. The VxD allowed you to register your RDTSC counter variable and thread ID with it. The VxD would then subtract from your counter variable the number of CPU cycles executed outside of your thread's context. Perhaps someone else remembers this article more specifically. You might also want to take a look at for other issues that can affect accuracy of this instruction.

Hope this helps,
-- Ian MacDonald

I have somewhat of an answer to this problem. What I did (using QueryPerformanceCounter()) was write a class that has the functions Reset(), Start(), Stop(), and Show(). First you call Reset(), which resets all members; then you run your code multiple times (I run it up to 1000 times), and every time you first call Start(), then do the code, then call Stop(). The class then adds this time to the total time, as well as keeping track of the fastest and the slowest run. When you then call Show(), it shows the fastest, slowest and average times. This gives pretty good results, because there's GOT to be at least one run where the code wasn't pre-empted. Also, I try to keep the segments of code I time pretty short, so that it's possible to get through them without pre-emption at least once.

The problem is simply that I think QueryPerformanceCounter is flaky. Sometimes I get a time of 0 (even WITH my code in-between Start() and Stop()), which should be impossible, given that the resolution is so small (838ns).
It's almost as if the counter itself is updated from software that needs to be pre-empted first, although the info I have on it suggests it's a hardware item. But by combining the RDTSC with the method I describe above, I might get satisfactory results.

Steven Schulze
Concord, CA

Note that perfect conditions are:
1. No other code running while the test code is running.
2. No caches enabled.
3. No other hardware process like DMA or refresh working.
4. No weird programming tricks being done in interrupt service routines.

In other words, it's not a real number, but just a guess. The only way to make these decisions is to start with the cycle counts, program up a test case, and TEST IT! This will still be only an approximation of what happens when it gets out in the field.

RDTSC uses the clock speed of the processor, so if you are using a 400MHz processor each cycle would be 2.5 nanosecs. What I use this most for is measuring cycles of assembly code. Also, you may want to subtract the overhead of the RDTSC call itself so that it does not affect your measurements; to do this, call RDTSC twice in a row and make a note of the cycles elapsed - this would be the overhead.

> I believe you will still have some accuracy problems using this (RDTSC)
> instruction due to the context switches that can occur while you are
> timing code.

To avoid context switches as much as possible, do a yield before you start timing an assembly code section. This ensures that you have a new timeslice from the OS when you wake up.

-bobby
See, I don't need to worry about DMA interrupts, etc, etc, because if I can get my code to perform as fast as possible under "perfect" conditions, I know it'll also perform better than my original code under a noisy system. The problem with "TEST IT!" as you put it, is that I can't get an accurate reading of how long MY code takes to execute, because the results vary too much. I can't make informed decisions as to whether version A or version B of my code is faster for that very reason. BTW, Intel has an "Optimizing Tutor" that shows you how to optimize code, and shows how many cycles a certain version of code will take to execute vs another version. Obviously this is a legitimate way to analyze code, but you insist that it is not. I wish I had a way to see my code in the same way as Intel shows in it's examples. That's what I'm saying. Also, tell me, how does the people that hand-optimize code figure out which version of code will run faster? Steven Schulze Concord, CA <snip> > Also, tell me, how does the people that hand-optimize code figure out which > version of code will run faster? Steve, there is an ancient rule of thumb that suggests that 10% of an application's code will take 90% of the CPU time. My rule for optimization is to wait until the end of the development, then if the code executes too slowly for comfortable use, instrument the code with some performance tool; find the 10%; hand optimize it; repeat the process until the code executes "fast enough". If there is no 10% (or 20%) then there may be some basic problem in the design. I'm afraid that the thread is simply saying you can't get exactly what you want in a absolute sense. You can get what you want in a relative sense. IMO Gus Gustafson gus...@gte.net I know EXACTLY where time is spent in my code. It's the loop that executes 50 million times for a 50 million byte file. I can pinpoint it down to 7 lines in my code. 
Also, FOR THIS SPECIFIC application, there's no "fast enough", there's only "faster is better". It's an application that processes multiple audio files (up to 44), each with a size of to about 50MB for 5 minutes of audio (that's the extreme case, though). It's a program that tries to do stuff in real-time. Anytime you have the slightest performance bottleneck, you could lose the ability to process one or two more of those 44 files in real-time. So, this isn't a simple case of having the spell-checker take 5 instead of 6 seconds to finish - how nice, this is a REAL case where speed makes a difference in how the program can be applied. >I'm afraid that the thread is simply saying you can't get exactly what you want >in a absolute sense. You can get what you want in a relative sense. BTW, as I asked before, how does the compiler make it's decisions as to which version of your code will execute faster when it does it's optimizations, since it's not really executing your code to make a "relative" decision, as you say? How on earth does it do it, since any cycle counting on assembly code is being shot down as being unrealistic? Why should I not be able to look at my code and make conclusions based on information about cycle times, just as the compiler does (yes, I know about pairing, penalties, etc, etc)? Or are we as programmers simply not able to work down to this level anymore? I guess so. Steven Schulze Concord, CA Steven, Why don't you share those 7 lines of code with us? We can trade messages forever, but your code won't get any faster this way! If you are shy, or those seven lines are very dear, try the book "Zen of Code Optimization" by Michael Abrash. Consider this an FYI which I offer in case any of this is new to you. If not, please ignore it, I'm not trying to heighten your exasperation. You can get a listing of the machine instructions generated by the compiler with the /FA switch. 
There used to be a time when this listing didn't take into account optimizations, I don't know if that is still true. If so, you should be able to look at the machine instructions with a debugger. Intel's processor manual list the number of clock cycles an instruction takes. MS reprinted that info with the copy of MASM I bought several years ago. As it is now fashionable to exclude printed docs under the guise of saving forests I'm not sure that the assembler includes the info any more. The trouble with the info it is that is not all simple. The number of clock cycles an instruction takes depends on the processor, the addressing mode and perhaps even the value of the arguments. For example, if I am reading the table correctly, the integer multiply instruction (IMUL) takes from 13 to 42 clocks on a 486 using double word operands. Regards, Will Also, I'm sure later versions of MASM can generate the processor timing information on a listing, so I guess that by assembling the compiler generated assembler output with MASM, you could get the timing information on a listing. It's a bit long winded, so it's not something you'd want to do very often. Dave ---- Address is altered to discourage junk mail. Please post responses to the newsgroup thread, there's no need for follow up email copies. IMHO it is not true. If you get your code run 100 cycles less it will not be a point in the system there all other threads may take a several millions cycles to execute. It is just a drop in the ocean. I think you better develop your code in a way then no other applications will be permitted to run (in case you do it under win95). This will save you some valuable time. Yan Actually, that was a generalization. I have about 40 - 50 such small functions, varying from about 4 to 100 lines of code. BTW, I've been able to write a class now that uses the RDTSC to measure elapsed cycles. While it's not perfect, it's pretty good. 
I can get repeatable results (which is what I need) down to less than 0.1% (after doing multiple runs and taking the minimum result). Not bad. Steven Schulze Concord, CA No, it's not. If the LOOP takes 700 cycles for one version of the code, and the loop takes 600 cycles for a different version of code (but doing the same thing), then that's a 17% improvement for that specific loop. Now, if my program spends an awful lot of time in that loop (as my code does, processing a large file of audio data), then it's a SIGNIFICANT improvement. I'm pretty aware of the fact that trying to optimize the WHOLE program is fruitless, but since my program does what it does, it can benefit a lot from optimizing the small sections of code that the program probably spends 95% or more of it's time in (while processing). Boaz Tamir. Ian MacDonald wrote: > > In article <#U7t9OGt...@uppssnewspub05.moswest.msn.net>, > > I believe you will still have some accuracy problems using this (RDTSC) > instruction due to the context switches that can occur while you are > timing code. > May '95 !? Holy smokes. It seems like just yesterday. I guess it just goes to show that you should never throw away any of your old magazines. Thanks for remembering. -- Ian
https://groups.google.com/g/microsoft.public.win32.programmer.kernel/c/Qh3k8bxath8
On Fri, 9 Mar 2001, Andreas Dilger wrote:
> > Please run "vgscan -v -d" and post the output (compressed if very long).

OK. The gzipped output from vgscan -v -d is attached. I'm posting the patched pv_read_all_pv_of_vg.c too, so you can verify it.

--
*[ Łukasz Trąbiński ]*
SysAdmin @polvoice.com

/*
 * tools/lib/pv_read_all_pv_of_vg.c
 *
 * Copyright (C) 1997 - 2000  Heinz Mauelshagen, Sistina Software
 *
 * March-May,October-November 1997
 * May,August,November 1998
 * January,March,April,September,October 2000
 *
 *
 * This LVM library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Library General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * You should have received a copy of the GNU Library General Public
 * License along with this LVM library; if not, write to the Free
 * Software Foundation, Inc., 59 Temple Place - Suite 330, Boston,
 * MA 02111-1307, USA
 *
 */

/*
 * Changelog
 *
 *    03/02/2000 - use debug_enter()/debug_leave()
 *    04/04/2000 - enhanced to find physical volumes on UUID base
 *                 rather than on device special name
 *    20/09/2000 - WORKAROUND: avoid dual access paths for now (2.4.0-test8)
 *    30/10/2000 - reworked to fix UUID related bug
 *
 */

#include <liblvm.h>

int pv_read_all_pv_of_vg ( char *vg_name, pv_t ***pv, int reread) {
   int i = 0;
   int id = 0;
   int p = 0;
   int pp = 0;
   int np = 0;
   int pv_number = 0;
   int ret = 0;
   int uuids = 0;
   static int first = 0;
   char *pv_uuid_list = NULL;
   static char vg_name_sav[NAME_LEN] = { 0, };
   pv_t **pv_tmp = NULL;
   static pv_t **pv_this = NULL;
   pv_t **pv_this_sav = NULL;

#ifdef DEBUG
   debug_enter ( "pv_read_all_pv_of_vg -- CALLED with vg_name: \"%s\"\n",
                 vg_name);
#endif

   if ( pv == NULL ||
        vg_name == NULL ||
        ( reread != TRUE && reread != FALSE) ||
        vg_check_name ( vg_name) < 0) {
      ret = -LVM_EPARAM;
      goto pv_read_all_pv_of_vg_end;
   }

   *pv = NULL;

   if ( strcmp ( vg_name_sav, vg_name) != 0) {
      strcpy ( vg_name_sav, vg_name);
      reread = TRUE;
   }

   if ( reread == TRUE) {
      if ( pv_this != NULL) {
         free ( pv_this);
         pv_this = NULL;
      }
      first = 0;
   }

   if ( first == 0) {
      if ( ( ret = pv_read_all_pv ( &pv_tmp, FALSE)) < 0)
         goto pv_read_all_pv_of_vg_end;

      /* first physical volume whose volume group name fits
         starts work on the PV UUID list */
      for ( p = 0; pv_tmp[p] != NULL; p++) {
         if ( strcmp ( pv_tmp[p]->vg_name, vg_name) == 0 &&
              pv_check_consistency ( pv_tmp[p]) == 0) {
            uuids = pv_read_uuidlist ( pv_tmp[p]->pv_name, &pv_uuid_list);
            break;
         }
      }

      /* pass to find the number of PVs in this group
         and to prefill the pointer array */
      for ( p = 0; pv_tmp[p] != NULL; p++) {
         if ( strncmp ( pv_tmp[p]->vg_name, vg_name, NAME_LEN) == 0) {
            pv_this_sav = pv_this;
            if ( ( pv_this = realloc ( pv_this,
                                       ( np + 2) * sizeof ( pv_t*))) == NULL) {
               fprintf ( stderr, "realloc error in %s [line %d]\n",
                         __FILE__, __LINE__);
               ret = -LVM_EPV_READ_ALL_PV_OF_VG_MALLOC;
               if ( pv_this_sav != NULL) free ( pv_this_sav);
               goto pv_read_all_pv_of_vg_end;
            }
            pv_this[np]   = pv_tmp[p];
            pv_this[np+1] = NULL;
            np++;
         }
      }

#if 0
      /* in case this PV already holds a uuid list: check against this list */
      if ( uuids > 0) {
         for ( p = 0; pv_this[p] != NULL; p++) {
            for ( id = 0; id < uuids; id++) {
               if ( memcmp ( pv_this[p]->pv_uuid,
                             &pv_uuid_list[id*NAME_LEN], UUID_LEN) == 0)
                  goto uuid_check_end;
            }
            pv_this[p] = NULL;
uuid_check_end:
            ;
         }
         for ( pp = 0; pp < p - 2; pp++) {
            if ( pv_this[pp] == NULL) {
               pv_this[pp]   = pv_this[pp+1];
               pv_this[pp+1] = NULL;
            }
         }
         np = 0;
         while ( pv_this[np] != NULL) np++;
      }
#endif

      /* avoid multiple access paths */
      for ( p = 0; pv_this[p] != NULL; p++) {
         /* avoid multiple access paths for now (2.4.0-test8)
            and MD covered paths as well */
         for ( i = 0; i < np; i++) {
            if ( p != i &&
                 strncmp ( pv_this[p]->vg_name, vg_name, NAME_LEN) == 0) {
               if ( pv_this[i]->pv_number == pv_this[p]->pv_number &&
                    memcmp ( pv_this[i]->pv_uuid,
                             pv_this[p]->pv_uuid, UUID_LEN) == 0) {
                  if ( MAJOR ( pv_this[p]->pv_dev) == MD_MAJOR) pp = i;
                  pv_this[pp] = NULL;
               }
            }
         }
         for ( pp = 0; pp < p - 2; pp++) {
            if ( pv_this[pp] == NULL) {
               pv_this[pp]   = pv_this[pp+1];
               pv_this[pp+1] = NULL;
            }
         }
         np = 0;
         while ( pv_this[np] != NULL) np++;
      }

      /* now we only have pointers to single access path PVs
         in pv_this belonging to this VG */
      if ( np == 0) {
         ret = -LVM_EPV_READ_ALL_PV_OF_VG_NP;
         goto pv_read_all_pv_of_vg_end;
      }

      /* pass to find highest pv_number */
      for ( p = 0; pv_this[p] != NULL; p++) {
         if ( pv_number < pv_this[p]->pv_number) pv_number = pv_this[p]->pv_number;
      }

      if ( pv_number != np) {
         ret = -LVM_EPV_READ_ALL_PV_OF_VG_PV_NUMBER;
         goto pv_read_all_pv_of_vg_end;
      }

      /* Check for contiguous PV array */
      for ( p = 0; pv_this[p] != NULL; p++)
         if ( pv_this[p] == NULL && p < np)
            ret = -LVM_EPV_READ_ALL_PV_OF_VG_NP_SORT;

      first = 1;
   }

   if ( ret == 0) *pv = pv_this;

pv_read_all_pv_of_vg_end:

#ifdef DEBUG
   debug_leave ( "pv_read_all_pv_of_vg -- LEAVING with ret: %d\n", ret);
#endif

   return ret;
} /* pv_read_all_pv_of_vg() */

Attachment: vgscan.log.gz
Description: GNU Zip compressed data
http://www.redhat.com/archives/linux-lvm/2001-March/msg00035.html
Hey, I am relatively new to C#, and realize that with the advent of namespaces, .h files and the like are no longer needed. But how can I code in multiple files? (Is this even in the framework?) Since everything is now in classes, I don't like having one huge file with miles and miles of code in it. So I am writing most of my classes in a namespace in a separate file, and then would like to import them into my file containing "main". How do I do this? I figure if I just typed using MyNamespace; VS.NET would not know where to find it, so I guess where do I go after I have coded my own namespace? Also, if anyone can point me in the right direction of how to go about using multiple threads of execution inside C#, I would appreciate it. I have previously only done execution with a single thread, but my current project needs multiple threads to accommodate different sensors and waiting devices, and I don't even know where to begin with it. Thanks jbj
https://cboard.cprogramming.com/csharp-programming/50681-custom-namespaces.html
Hey, so I've been just practicing writing some simple programs, trying to use loops and calling on functions. So I wrote this thing up real quick and tried to get it to run, and I don't understand what's wrong. It's compiling with no errors or warnings, but every time I run it, the program just adds the two numbers no matter what I do. So I feel really stupid, but here it is. I've been staring at it for about 20 minutes making sure my IF/ELSE is good and I've not forgotten anything obvious (I hope!) I wrote an add function and a subtract function. I know you guys can probably figure out what I'm trying to do, but this is an explanation of how I see it running.
1. The user selects whether he wants to add or subtract.
2. Based on the selection, an if/else calls on the correct function.
3. The function imports the stored numbers, does the math and posts it.
4. The program finishes.

Code:
#include <iostream>
using namespace std;

// subtract function
int Subtract(int alpha, int omega)
{
    cout << "The total of " << alpha << " minus " << omega << " is...\n";
    cout << alpha - omega << endl;
    return 0;
}

// addition function
int Additup(int apple, int orange)
{
    cout << "The total of " << apple << " plus " << orange << " is...\n";
    cout << apple + orange << endl;
    return 0;
}

// main function time!
int main()
{
    int choice, num1, num2, num3;
    cout << "1 for addition, 2 for subtraction: ";
    cin >> choice;
    cin.ignore();
    if (choice = 1) {
        cout << "Enter the two numbers with a space in between then press enter: ";
        cin >> num1;
        cin >> num2;
        num3 = Additup(num1, num2);
    }
    else {
        cout << "Enter the two numbers then press enter: ";
        cin >> num1;
        cin >> num2;
        num3 = Subtract(num1, num2);
    }
    return 0;
}

Also, one more thing. I understand that the code is both A) probably very inefficient for its purpose, and B) might be a little sloppy. I know this, and I'm just working on the foundations right now. Thanks again for the help!
http://cboard.cprogramming.com/cplusplus-programming/127696-function-not-calling.html
hi! May someone help me with this? Israel.

Martin Thomas wrote:
>.

----------------------------------------------------------------------

Hi Martin, thanks for the reply. Here goes my makefile, main.c and the output.

Current makefile:

NAME = demo2106_blink_flash
CC = arm-elf-gcc
LD = arm-elf-ld -v
AR = arm-elf-ar
AS = arm-elf-as
CP = arm-elf-objcopy
OD = arm-elf-objdump

CFLAGS = -I./ -c -fno-common -O0 -g
AFLAGS = -ahls -mapcs-32 -o crt.o
LFLAGS += -L'C:\Program Files\yagarto\lib\gcc\arm-elf\4.1.1\' -lgcc -lc
LFLAGS += -Map main.map -T demo2106_blink_flash.cmd
CPFLAGS = -O binary
ODFLAGS = -x --syms

all: test

clean:
	-rm crt.lst main.lst crt.o main.o main.out main.bin main.hex main.map main.dmp

test: main.out
	@ echo "...copying"
	$(CP) $(CPFLAGS) main.out main.hex
	$(CP) $(CPFLAGS) main.out main.bin
	$(OD) $(ODFLAGS) main.out > main.dmp

main.out: crt.o main.o demo2106_blink_flash.cmd
	@ echo "..linking"
	$(LD) $(LFLAGS) -o main.out crt.o main.o

crt.o: crt.s
	@ echo ".assembling"
	$(AS) $(AFLAGS) crt.s > crt.lst

main.o: main.c
	@ echo ".compiling"
	$(CC) $(CFLAGS) main.c

---------------------------------------------
Console output
---------------------------------------------
make -k all
.compiling
arm-elf-gcc -I./ -c -fno-common -O0 -g main.c
..linking
arm-elf-ld -v -L'C:\Program Files\yagarto\lib\gcc\arm-elf\4.1.1\' -lgcc -lc -Map main.map -T demo2106_blink_flash.cmd -o main.out crt.o main.o
GNU ld version 2.17
main.o: In function `main':
C:\Documents and Settings\Israel\workspace\demo2106_blink_flash/main.c:64: undefined reference to `memcpy'
C:\Documents and Settings\Israel\workspace\demo2106_blink_flash/main.c:65: undefined reference to `strncpy'
C:\Documents and Settings\Israel\workspace\demo2106_blink_flash/main.c:66: undefined reference to `puts'
make: *** [main.out] Error 1
make: Target `all' not remade because of errors.
---------------------------------------------
Partial code (main.c):
---------------------------------------------

/**********************************************************
  Header files
**********************************************************/
#include "LPC210x.h"
#include <stdio.h>
#include "string.h"

/**********************************************************
  Global Variables
**********************************************************/
int q;              // global uninitialized variable
int r;              // global uninitialized variable
int s;              // global uninitialized variable
short h = 2;        // global initialized variable
short i = 3;        // global initialized variable
char j = 6;         // global initialized variable

/**********************************************************
  MAIN
**********************************************************/
int main (void)
{
    int j;                   // loop counter (stack variable)
    static int a, b, c;      // static uninitialized variables
    static char d;           // static uninitialized variable
    static int w = 1;        // static initialized variable
    static long x = 5;       // static initialized variable
    static char y = 0x04;    // static initialized variable
    static int z = 7;        // static initialized variable
    const char *pText = "The Rain in Spain";
    char str[24];
    char str1[24];

    // Initialize the system
    Initialize();

    // set io pins for led P0.7
    IODIR |= 0x00000080;   // pin P0.7 is an output, everything else is input after reset
    IOSET = 0x00000080;    // led off
    IOCLR = 0x00000080;    // led on

    sprintf(str, "Ola! \n");
    strncpy(str1, str, 20);
    printf("Olá\n");

    // endless loop to toggle the red LED P0.7
    while (1)
    {
        for (j = 0; j < 5000000; j++ );  // wait 500 msec
        IOSET = 0x00000080;              // red led off
        for (j = 0; j < 5000000; j++ );  // wait 500 msec
        IOCLR = 0x00000080;              // red led on
    }
}

Thanks for your attention.

For what it's worth, I had the same issue the other day. I was working on an ARM project with the STM32 Cortex-M3 MCU. The quick fix I tried was to use gcc.exe instead of ld.exe for linking.
It removed the problem with undefined references, but after debugging the project I discovered that the toolchain inserted illegal instructions for the Cortex-M3. To be specific, every time e.g. memcpy or memset would be called, the assembler instruction generated was BLX address, which generates a HardFault on Cortex M3 (or so I think). On this type of processor, it should only generate BLX register. So I switched back to ld.exe for linking, but now had to specify the -lgcc and -lc switches manually. I also had to specify where ld should look for libraries. After trying a few times, I discovered that the YAGARTO toolchain includes libc.a and libgcc.a libraries specifically for Thumb and ARMV7-M cores. Including these in LFLAGS did help ld.exe find the libraries, but the undefined references remained. So I had a read on the -l option for GCC linker. The example with 'foo.o -lz bar.o' inspired me to move the -L and -l options to the end of the link command. I.e. $(LD) $(FLAGS) -o main.out <objfiles> -L"E:/yagarto/lib/gcc/arm-none-eabi/4.5.1/thumb/v7m" -lgcc -L"E:/yagarto/arm-none-eabi/lib/thumb/v7m" -lc This removed the problem with undefined references, and now the build generates correct BLX instructions (for my project). So it seems to be that the -L and -l options should be placed after the list of object files which are linked. As the GCC documentation states, if an object file listed after the libraries uses functions from those libraries, they may not be loaded (order matters). I am aware that this thread is over two years old, but I thought that I'd provide my two cents in solving a problem which I have myself spent some hours battling. At the same time, present a solution to a STM32-specific problem with invalid BLX instruction being generated. // MS Good day, Thank you for your post - I've had problems for months referencing c library calls. 
I basically gave up just to finish my project, coding my own versions of memcpy() and memset(). The problem was that I was referencing -L/opt/arm_codesourcery/lib/gcc/arm-none-eabi/4.5.1 and -L/opt/arm_codesourcery/lib/gcc/arm-none-eabi/4.5.1/thumb2 in my LDFLAGS. Both directories contain libgcc.a. It seems the linker picked up the wrong library due to the library order. I've removed the reference to "-L/opt/arm_codesourcery/lib/gcc/arm-none-eabi/4.5.1" and now everything works great!
http://embdev.net/topic/129793
IMAP is a complex protocol, more so than NNTP or POP3. Implementation bugs are not unlikely, and we do our best to fix them right away. If you encounter odd behavior, chances are that either the server or Gnus is buggy.

If you are familiar with network protocols in general, you will probably be able to extract some clues from the protocol dump of the exchanges between Gnus and the server. Even if you are not familiar with network protocols, when you include the protocol dump in IMAP-related bug reports you are helping us with data critical to solving the problem. Therefore, we strongly encourage you to include the protocol dump when reporting IMAP bugs in Gnus.

Because the protocol dump, when enabled, generates lots of data, it is disabled by default. You can enable it by setting imap-log as follows:

(setq imap-log t)

This instructs the imap.el package to log any exchanges with the server. The log is stored in the buffer ‘*imap-log*’. Look for error messages, which sometimes are tagged with the keyword BAD—but when submitting a bug, make sure to include all the data.
http://www.gnu.org/software/emacs/manual/html_node/gnus/Debugging-IMAP.html#Debugging-IMAP
Re: unsafe?????????
- From: "Peter Duniho" <NpOeStPeAdM@xxxxxxxxxxxxxxxx>
- Date: Sun, 17 Jun 2007 11:27:33 -0700

You might try spending a little more thought on your Subject: field for your posts, so that it better describes the question you have, as well as on the actual phrasing of your question in the post (for example, putting more than just the code and "error, why"). Also, the Typographical Conservancy phoned, and they want to know why the question mark population had a sudden decrease. I'll suggest that you can help them avoid becoming endangered by not using so many, especially since putting extra question marks in your post doesn't make the post any more likely to be answered.

Now, all that said: On Sun, 17 Jun 2007 09:47:39 -0700, wmhnq <wmhnq@xxxxxxx> wrote:

public class A
{
    unsafe char* pStr = (char*)0;

    public unsafe void change()
    {
        *pStr = "fghijkl"; // error, why??????
    }
}

You are assigning a string reference to a character. You didn't post the compiler error, but I suspect it says exactly that: something about cannot convert a string to a character. You might prefer this line of code:

pStr = "fghijkl";

That should work better.

Pete
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2007-06/msg02884.html
After a lot of studying, I am starting to code my first app. In my code I have a class called "Measurement". I want to implement an array of Measurements. This array of Measurements needs to be accessible across multiple view controllers, so I have created a custom class called "MeasurementsArray" for the array, and made it into a singleton. I have done this, and the code works as expected. But now that I have it working, I want to make sure that I have easy-to-understand code, and that I am following conventional Objective-C design patterns. If it weren't for the fact that I need the array of Measurements to be a singleton, it would seem that this array belongs inside the "Measurement" class as a class method. But my understanding is that there can be only one instance of a class when it is a singleton. But somehow, having a separate class named "MeasurementsArray" seems a little hacky to me. My question: Am I approaching this the right way, or am I missing something? If I do need to split off the array of Measurements to a separate class in order to have it be a singleton, does "MeasurementsArray" seem like an appropriate class name? If not, please provide a naming convention you would use for this type of situation. Edit: After some initial answers, some clarification regarding the function of the app might help. It is a fitness application that records, saves and tracks body fat percentage and body weight. Every time the user records his body fat and weight, it becomes an instance of the class "Measurement". The array of Measurements is needed in order to track changes in weight and body fat over time. A singleton is needed because multiple view controllers need access to the array. A singleton is needed because multiple view controllers need access to the array. There are other, better ways to share data between objects than to rely on a singleton. View controllers are usually created either by the application delegate or by another view controller.
The application's data model (in this case, that's your measurements array) is often also created by the application delegate. So when the app delegate creates a view controller, it can also give that controller a pointer to the data model. If that controller creates any view controllers, it can likewise share its pointer to the data model. Passing the data model along from app delegate to view controller and from one view controller to the next makes your code easier to maintain, test, reconfigure, and reuse because it avoids depending on some predetermined, globally accessible object. Having a singleton at all may be a wrong move. There are times that a singleton is appropriate, but there's usually a better choice. It sounds like you may already be implementing something resembling the Model-View-Controller pattern, which would be appropriate. In this context, this array of measurements is part of your model, and it may make sense for it to be a separate class, but there's likely no need for it to be a singleton. The name MeasurementsArray is implementation-specific. I would be more inclined to call it just Measurements or to give it a name reflecting what the measurements are measuring. In fact I wonder about the name of your Measurement class. What is it measuring? What does it actually represent? If you post some code, we might be able to provide more specific ideas. Based on your update and a bit of thinking, you might want to think about The Repository Pattern. Rather than having your controllers hold the array, they have access to the repository from which they can get it. My thinking here is that your array of measurements might be supplied by a MeasurementRepository, and that while now the data might be a single simple array that the repository just holds, it might evolve to something that is stored in a database per user and with variation over time, so that your repository supplies more complex access.
Rather than having this repository be a singleton (though that is certainly sometimes done), it might better be just created once and then injected into everything that needs it. See Uncle Bob's blog. I feel that a separate class is probably overkill unless it has several methods of its own, in which case it might be justified. This problem may just be an artefact of your app's structure not (yet) being well defined. Is data going to be persistent across sessions? If so, will there be a "manager" class on which you could put a property to retrieve the array; something like allMeasurements on the MeasurementStore class? Another option would be to store the array in your app delegate. I find that if I continue working on an app, it becomes obvious how I should structure it. Edit: To elaborate, there's nothing "wrong" with your approach; there are probably just nicer ways to do it. If I understand you correctly, the measurements array represents past measurements of a specific user. If that is the case, you're not looking for a singleton at all. Remember that a singleton is a single value PER APPLICATION, and what you're looking for here is a single value PER USER. For example (I'm using C# notation, but you get the hang of it...):

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }
    public Measurement[] Measurements { get; set; } // one array per-user...
}
http://m.dlxedu.com/m/askdetail/3/a58f05ceb747b33b98e98deed4928cd0.html
What's with the error: must use C++ for header iostream.h? What do I do?

Try to put this in your header:

#include <iostream>
using namespace std;

instead of iostream.h. If that doesn't help then post your entire source code so I can look at it.

matheo917

Also, does your source file have a ".C" or a ".CPP" extension? If it's ".C", then change it.

The second one helped... for some reason my compiler (seen below) doesn't accept using namespace std;. Well, thanks again. :D I can't wait for that C++ book to come, it should come today or tomorrow.
https://cboard.cprogramming.com/cplusplus-programming/24603-error-must-use-cplusplus-printable-thread.html
MWE and Check language

Hi, I have 2 ecores in my project having the same contexts in them, with the same namespace. I want to validate based on one of the ecores. I use a check language and write workflow files to validate. My workflow file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<workflow>
  <component class="org.eclipse.xtend.check.CheckComponent">
    <metaModel class="org.eclipse.xtend.typesystem.emf.EmfMetaModel">
      <modelFile value="actual path of the ecore"/>
    </metaModel>
    <emfAllChildrenSlot value="model"/>
    <checkFile value="CheckFile"/>
  </component>
</workflow>

When I gave a relative path to my ecore, it was not able to find it. When I gave the actual path of the ecore, the workflow file executed. However, it was not reporting any issues as it used to when there was only one ecore. How can I be sure that it picks the correct ecore to validate against? I suppose it has something to do with the same context names and the same namespace. How can I resolve this?
Dhanya M, 2013-04-30

Re: MWE and Check language

First, there is no setter "modelFile" in type EmfMetaModel; it must be "metaModelFile". The relative path should work when it is relative to the classpath root. Otherwise "metaModelFile" must be a URI string. You can be sure that the right Ecore metamodel is used just by the fact that it can be located through the URI you are passing. If the file does not exist, you will notice an error. Since your metaModel is local to the CheckComponent defined, you can be sure that this EmfMetaModel instance will be another one than if you define a second CheckComponent. Each CheckComponent will use a different execution context.
Regards, ~Karsten
Karsten Thoms, 2013-05-06
http://www.eclipse.org/forums/feed.php?mode=m&th=485781&basic=1
We are hugely excited to announce the release of Iglu, our first new product since launching our Snowplow prototype two and a half years ago. Iglu is a machine-readable schema repository initially supporting JSON Schemas. It is a key building block of the next Snowplow release, 0.9.5, which will validate incoming unstructured events and custom contexts using JSON Schema. As far as we know, Iglu is the first machine-readable schema repository for JSON Schema, and the first technology which supports schema resolution from multiple public and (soon) private schema repositories. In the rest of this post we will cover: - On the importance of schemas - The origins of Iglu - Iglu architecture - Using Iglu - Limitations and roadmap - Learning more and getting help Snowplow is evolving from a web analytics platform into a general event analytics platform, supporting events coming from mobile apps, the internet of things, games, cars, connected TVs and so forth. This means an explosion in the variety of events that Snowplow needs to support: games saved, clusters started, tills opened, cars started – in fact the potential variety of events is almost infinite. Historically, there have been two approaches to dealing with the explosion of possible event types: Custom variables as used by Google Analytics, SiteCatalyst, Piwik and other web analytics packages are extremely limited – we plan to explore these limitations in a future blog post. Schema-less JSONs as offered by Mixpanel, KISSmetrics and others are much more powerful, but they have a different set of problems: The issues illustrated above primarily relate to the lack of a defined schema for these events as they flow into and then thru the analytics system. More generally, we could say that the problem is that the original schemas have been lost. 
The entities snapshotted in an event typically started life as Active Record models, Protocol Buffers, Backbone.js models or (N)Hibernate objects or similar (and before that, often as RDBMS or NoSQL records). In other words, they started life with a schema, but that schema has been discarded on ingest into the analytics system. As a result, the business analyst or data scientist typically has to maintain a mental model of the source data schemas when using the analytics system:

This is a hugely error-prone and wasteful exercise:

- Each analyst has to maintain their own mental model of the source data schemas
- Source data schemas evolve over time. Analysts have to factor this evolution into their analyses (e.g. "what is the proportion of new users providing their age since the optional age field was introduced on 1st May?")
- There are no safeguards that the events have not been corrupted or diverged from schema in some way

The obvious answer was to introduce JSON Schema support for all JSONs sent in to Snowplow – i.e. unstructured events and custom contexts. JSON Schema is a standard for describing a JSON data format; it supports validating that a given JSON conforms to a given JSON Schema. But as we started to experiment with JSON Schema, it became obvious that JSON Schema was just one building block: there were several other pieces we needed, none of which seemed to exist already. In defining and building these missing pieces, Iglu was born.
Here is an example JSON Schema for a video_played event based on the Mixpanel example above:

{
  "$schema": "",
  "description": "Schema for a video_played event",
  "self": {
    "vendor": "com.channel2.vod",
    "name": "video_played",
    "format": "jsonschema",
    "version": "1-0-0"
  },
  "type": "object",
  "properties": {
    "length": { "type": "number" },
    "id": { "type": "string" },
    "tags": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "required": ["length", "id"],
  "additionalProperties": false
}

(Note that this is actually a self-describing JSON Schema.) We made a further design decision that the JSON sent in to Snowplow should report the exact JSON Schema that could be used to validate it. Rather than embed the JSON Schema inside the JSON, which would be extremely wasteful of space, we came up with a convenient short-hand that looked like this:

{
  "schema": "iglu:com.channel2.vod/video_played/jsonschema/1-0-0",
  "data": {
    "length": 213,
    "id": "hY7gQrO"
  }
}

We called this format a self-describing JSON. The iglu: entry is what we are calling an Iglu "schema key", consisting of the following parts:

- vendor – the author of the schema, in reverse-DNS form (com.channel2.vod above)
- name – the name of the event or entity being described (video_played)
- format – the schema technology used (jsonschema)
- version – the SchemaVer version of the schema (1-0-0)
Next, we needed somewhere to store JSON Schemas like video_played above – a home for schemas where: - Developers and analysts could refer back to the schema to help with analytics instrumenting and data analysis - Snowplow could retrieve the schema in order to validate that the incoming events indeed conformed to the schemas that they claimed to represent - Snowplow could retrieve the schema to support converting the incoming events into other formats, such as Redshift database tables - Developers could upload new versions of the schema (with analysts’ oversight), to allow the data model to evolve over time It became obvious that we needed some kind of “registry” or “repository” of schemas: As we worked on Snowplow 0.9.5, we were able to firm up a set of core requirements for our schema repository: - It must store schemas in semantic namespaces – the storage taxonomy should support the four elements (vendor, name, format, version) we had identified for self-describing JSONs - It must allow schemas to be stored privately – because many companies regard their data models as commercially sensitive IP - It must support a central repository of publically available schemas – similar to how RubyGems.org or Maven Central host publically available Ruby gems or Java libraries - It must be de-centralized – meaning that a user can self-host this schema repository system end-to-end, even replacing or ignoring the central repository if they want - It must be storage-agnostic – so that repositories can be embedded inside software, hosted over HTTP(S), stored in private Amazon S3 buckets and so on - It must support efficient schema resolution – in other words, a client library should be able to track down a schema from all the available repositories with a minimum of lookups, and then cache that schema for subsequent use With this laundry list of requirements, we started to look at what open-source software was already available. 
We looked to see if there were any existing solutions around schema registries or repositories for JSON Schema or other schema systems. We found very little in the way of schema systems for JSON Schema or XML: for JSON Schema we only found this static repository sample-json-schemas by Francis Galiegue, one of the JSON Schema authors. Googling for “XML schema repository” turned up very little: only xml.org, but this seemed to be article-oriented rather than machine-readable. By contrast, the Apache Avro community seemed ahead of the pack. We found two projects to develop machine-readable schema repositories for Avro: - A community-wide effort to build a RESTful schema repository for Avro - An Avro schema registry from LinkedIn to support their Kafka->HDFS pipeline (which is called Camus) The main differences we could ascertain between our requirements and the Avro efforts were as follows: - The focus on Apache Avro versus our priorities around JSON Schema support - The Avro efforts seemed to assume that each user would install one, private schema repository. We envisage a de-centralized set of private schema repositories plus a central public repository – so fundamentally resolving schemas from N repositories, not 1 - The Avro community does not seem to have a semantic versioning approach to schemas like SchemaVer. This would make reasoning about schema-data compatibility difficult Given these differences, we decided to take the learnings from the Avro community and start work on our own repository technology designed to meet Snowplow’s specific requirements around schemas: Iglu. Iglu is a machine-readable, open-source (Apache License 2.0) schema repository, initially for JSON Schema only. A schema repository (sometimes called a schema registry) is like npm or Maven or git but holds data schemas instead of software or code. 
Iglu consists of three key technical aspects: - Iglu clients that can resolve schemas from one or more Iglu repositories (and eventually publish new schemas to repositories etc) - Iglu repositories that can host a set of JSON Schemas - A common architecture that informs all aspects of Iglu – for example, Iglu schema keys, SchemaVer for versioning schemas, Iglu’s rules for resolving schemas from multiple repositories These pieces fit together like this: Iglu Central is a public repository of JSON Schemas. Think of Iglu Central as like RubyGems.org or Maven Ce ntral but for storing publically-available JSON Schemas. We are using Iglu Central to host all of the JSON Schemas which are used in different parts of Snowplow; the schemas for Iglu Central are stored in GitHub, in snowplow/iglu-central. Here is an illustration of various Iglu clients talking to Iglu Central; we also show an Iglu Central mirror for a client working behind a firewall: As far as we know, Iglu Central is the first public machine-readable schema repository – all prior efforts we have seen are human-browsable directories of articles about schemas (e.g. schema.org). Iglu Central is hosted by Snowplow at. Although Iglu Central is primarily designed to be consumed by Iglu clients, the root index page for Iglu Central links to all schemas currently hosted on Iglu Central. While we have deliberately engineered Iglu as a standalone product, we expect that most initial usage of Iglu will be in conjunction with Snowplow. 
Based on our early internal testing of Iglu, we envisage that a Snowplow user will want to:

- Create JSON Schemas for their unstructured events and custom contexts
- Define Iglu schema keys for these JSON Schemas
- Create their own Iglu schema repository and host their JSON Schemas in it
- Configure their Snowplow Enrichment process to fetch these JSON Schemas from their private repository

Separately, we hope that software vendors, analysts and data scientists will contribute their own schemas to Iglu Central; it would be awesome in particular if companies offering streaming APIs or webhooks would publish JSON Schemas for their event streams into Iglu Central. Let's schema everything!

We will discuss how to use Iglu with Snowplow in much more detail following the release of Snowplow 0.9.5. While heavily influenced by our requirements for Snowplow, we have deliberately created Iglu as a standalone product, one which we hope will be broadly useful as a schema repository technology. If you are interested in using Iglu without Snowplow, then we would recommend reading the Iglu wiki in detail. Wherever you find blocking gaps in the documentation, do please raise an issue in GitHub.

For an in-depth understanding of how Iglu works, we recommend browsing through the source for the Iglu Scala client. The next Snowplow release, 0.9.5, will make heavy use of our new Scala client for Iglu, so the client code is a good starting point for understanding the underlying design of Iglu. We have deliberately tried to keep the scope of Iglu 0.1.0 as minimal as possible.
The major known technical limitations at this time are:

- Iglu only supports self-describing JSON Schemas
- The only client library/resolver currently available is for Scala (although it may be possible to use this from Java code)
- Iglu only supports clients reading schemas from repositories at this time; there is no functionality for clients to publish new schemas to repositories (instead new schemas must be manually added)
- There is as yet no way of making a truly private (versus privacy through obscurity) remote repository

Our first development priority for Iglu is creating a RESTful schema repository server which allows users to publish new schemas to the repository, and has some basic authentication to keep schemas private. For more details on what is coming next in Iglu, check out the Product roadmap on the wiki.

When we created Snowplow at the beginning of 2012, it didn't need a lot of explanation – as an open source web analytics system, it fitted into a well-understood software category. As a schema repository, Iglu is a much more unusual beast – so do please get in touch and tell us your feedback, ask any questions or contribute!

The key resource for learning more about Iglu is the Iglu wiki on GitHub – do check it out. Wherever you find blocking gaps in the documentation, please raise an issue in GitHub. We are hugely excited about the release of Iglu – we hope that the Snowplow community shares our excitement. Let's work together to make end-to-end schemas a reality for web and event analytics. And stay tuned for the Snowplow 0.9.5 release (coming soon) for some more guidance on using Iglu with Snowplow!
https://snowplowanalytics.com/blog/2014/07/01/iglu-schema-repository-released/
float numbers

I want to find a float number between 2 and 1000 which has the longest set of different digits after the dot. The set must not repeat. What do I do so that the set will be printed, given the display limits of the numeric types?

10/27/2021 2:36:27 PM TeaserCode

8 Answers

Here you can increase the amount of digits displayed by cout...

    #include <iostream>
    #include <iomanip>   // needed for setprecision
    using namespace std;

    int main() {
        cout << setprecision(15);   // needed: default cout precision is only 6 digits
        double myVariable = 0;
        int n = 2;
        while (n <= 1000) {
            cout << 1.0 / n++ << endl;
        }
        return 0;
    }

You can read about it here and in the C++ documentation.

And which number can be an example for having such a longest set of different digits?

What did you mean by "The set has not to repeat" when you already said "longest set of *different* digits"? Also, what did you mean by "the limits according types"?

Mr Paul, very good and useful answer. Thank you very much.

In the case of 1/7, it has the longest decimal part with different digits. I would like to declare such a float variable, which will display as long as possible. I tried with a string but it goes only to ten digits.

Can you use a double? Or a long double?

I did that but it displays only six digits. I need more of them, at least fifteen.
https://www.sololearn.com/Discuss/2912578/float-numbers
note: similar problem as

Hi there, ReSharper was highly recommended to me so I got it (v1.5) to try it out, but so far it has been a HUGE disappointment and a LOT of wasted time. Here is what happens: I open a new solution in VS.NET, add a class library project and an ASP.NET project to it, add a few references to some controls, compile, save and close the solution. I reload the solution and ReSharper just refuses to recognise some of the symbols, sometimes all of the symbols like System etc. Sometimes the ones I added, and sometimes just symbols from some assembly like "System.Web.UI" and "System.Drawing". I tried deleting the symbol caches too as recommended in here: (does the resharper folder created in the solution contain anything? On my system it's always empty; the system-wide cache folder, however, does get some files.) Sometimes I deleted the solution files, created a new solution and added the projects to the new solution and it would work, but it would again complain about symbols when reloading the solution. Then again, if I add any new controls the same thing would happen. And sometimes the same old projects would work fine if added to another new solution but would not work if opened in the old solution. It has taken all the self control that I have to remain polite in this posting, as I have wasted two nights fiddling with this thing just to get it to recognise the System.Web.UI symbols, and still sometimes it does and sometimes it doesn't!!! Is there some way to tell ReSharper to just go ahead and refresh all the symbols from all the references in a project? If there is not, please add it to ReSharper as it would save a lot of headache. And that's not the end of it... forget any system symbols or other complicated stuff such as references added by a developer to a third party control etc. I opened up a new solution with a VS.NET-generated empty class library project with the standard Class1 file etc., and I changed the namespace from "Classlibrary1" to "Classlibrary.ABC", and ReSharper colored Classlibrary1 red saying "unable to resolve symbol"!!! Does it support nested namespaces? Because the C# compiler and VS.NET didn't complain at all and compiled it fine.

Hello, Ahmed, Thank you for evaluating ReSharper. I'm not quite sure about the possible reason, but it looks like ReSharper does not receive Visual Studio project model events on your machine. Does ReSharper display any messages like 'Failed to connect to C# project events'? Are the new classes that you add to a project suggested in the completion list? Are the newly added files highlighted? Thanks, Andrey Simanovsky

Well, I had Rational XDE installed on my system. The problem was solved after I:

1. uninstalled ReSharper
2. uninstalled Rational XDE
3. repaired/reinstalled Visual Studio .NET
4. reinstalled ReSharper

However, I have not yet reinstalled Rational XDE. Do you know about any known issues ReSharper might have with Rational XDE? And yes, the files were getting highlighted, but the caches for the solution were not getting saved, and any changes to the namespace names were generating "unable to resolve symbol" errors. By the way, I was very wrong, your product rocks, sorry for any harsh words earlier. -Ahmed

Hello Ahmed, while we're not aware of specific problems with Rational XDE, many applications that integrate with VS.NET may corrupt it by unregistering its COM DLLs during uninstallation. I don't claim that XDE causes this problem, but the issue you've faced was almost definitely of such a kind. Regards, Dmitry Shaporenkov JetBrains, Inc "Develop with pleasure!"

Hello Ahmed, There were some chance issues reported with Rational XDE. For example, problems with Find Usages. However, we could not reproduce these occasional problems. I think that after Visual Studio is repaired the two add-ins will not interfere with each other's work, but I cannot give a one hundred percent guarantee of that. Thanks, Andrey Simanovsky

This still happens to me in Visual Studio 2012 and R# 7.1.3. Everything is fine until I update from source control and VS asks to reload projects; they reload and R# analysis goes berserk, and in some files it shows tons of errors, in others not. But it still compiles.

Hi Nick, Please try to clear the caches as described here: Hope it helps. Thanks.
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206659315-Cannot-Resolve-Symbol-and-Problems-with-basic-features-?page=1#community_comment_206431079
I am just starting to write C# and I would use Javascript if I could, because my knowledge about C# specifically is "small". So, I have a UI Text, and I am trying to make it add 1 to the score each time the player clicks a button. For example, the score is 0. The player clicks the button, and it is changed to one. If he clicks again, it changes to two, and it continues. I have tried A LOT OF THINGS but I always got myself into the same two problems: I can't reference something, or I can't understand how it works so I can make it. I think I am on the right path, but I couldn't really find anything related to how to reference texts and how to make a variable for it, and I am also struggling with making a trigger for it. Here is my code for now:

    using UnityEngine;
    using System.Collections;

    public class ObjClick : MonoBehaviour
    {
        bool click = false;
        public GameObject fall;
        public GameObject txt; // I wanted to use this variable to hold the text, but as I said I couldn't reference it. There isn't exactly something like "public Text txt".
        public Animator anim;
        public int score;

        void Start()
        {
            fall.SetActive(false);
        }

        void Update()
        {
            if (Input.GetButtonDown("Fire1"))
            {
                anim.SetTrigger("Action");
                fall.SetActive(true); // I plan to make something like, on left mouse click, add one to the score.
            }
        }

        void UpdateScore()
        {
            txt.text = score; // <- This is my main problem. I don't know how to reference the text. I always get the error "Type 'UnityEngine.GameObject' does not contain a definition for text" etc...
        }
    }

Could someone help me to finish this? It's very important. Any help would be appreciated. Btw this is my last question.

Which do you mean by 'UI Text'? Is this part of an nGUI canvas, a legacy GUI Text component, or a custom script? Each is accessed differently. Side note: text isn't a member of the GameObject class; you would access it like most components in Unity 5, i.e. txt.GetComponent<Text>(); You would also want to add ".ToString()" after the 'score' assignment, otherwise you'll get an error there.

It's a text, and it is a child of the canvas. A normal text.

Answer by Cresspresso · Nov 18, 2016 at 06:20 AM

    txt.GetComponent<UnityEngine.UI.Text>().text = score.ToString();

Answer by Prezzo · Nov 18, 2016 at 02:38 PM

To start, include UnityEngine.UI, so write using UnityEngine.UI at the top of your script. Next, change public GameObject txt; to public Text txt;. In the editor, drag your text object into the text slot of your script. Access the text of your txt object by typing txt.text = "blah blah text". For displaying numbers, you need to convert them to a string first, so write txt.text = yourNumberVariable.ToString().

Answer by giantkilleroverunity3d · Jan 18 at 06:43 PM

@Zitoox The text box is in the inspector panel of Unity. Thought I would state this, as the OP comment posted that there isn't exactly something like "public Text txt".
https://answers.unity.com/questions/1273054/change-text-value.html?sort=oldest
The upcoming AngularJS 1.3 release arrives with a heavy focus on improved form data manipulation. While this version solves some real-life pain points, for some developers it may not be an automatic upgrade. The AngularJS team has started rolling out Release Candidates of version 1.3. In a Google+ post the Angular team wrote: most of the API for 1.3 is decided upon and the next releases up until 1.3.0 stable will be fixing the remaining bugs.

Among the new features for version 1.3 are:

- new validators pipeline
- asynchronous custom validators
- model data binding options
- ngMessages module for error message reuse
- one-time data binding support

The latest version provides developers with a new way to create custom validators, removing the need to use parsers and formatters. To create a custom validator in 1.3 the developer must register it on the new $validators pipeline and return true or false:

    ngModule.directive('validateLessthanfive', function() {
      return {
        require: 'ngModel',
        link: function($scope, element, attrs, ngModel) {
          ngModel.$validators.lessthanFive = function(value) {
            return (value < 5);
          };
        }
      };
    });

Matias Niemela, an Angular contributor, wrote a thorough write-up on the new forms features, including the new ability to create asynchronous validators for providing server-based validation. Matias also noted the improved parity with HTML5 validators: Now all of the HTML5 validation attributes are properly bound to ngModel and the errors are registered on ngModel.$error whenever the validation fails.

The Angular team introduced breaking changes in version 1.3, which some developers have complained should come with a major version update (e.g. version 2.0). In a recent GitHub commit Chad Moran, Software Development Manager for Woot, warns: Making a breaking change and not bumping the major version is most likely going to create a lot of pain for users. One change with the potential to impact enterprise developers is that 1.3 no longer supports IE8. Developers had plenty of warning, since the Angular team announced this on their blog in December of 2013. Part of the reasoning behind this change is that 1.3 only supports jQuery 2.1 or above, and jQuery dropped IE8 support for version 2.x.

In earlier versions of Angular, showing form validation error messages was an exercise in combining ng-if directives and plenty of boolean logic to display the right error message at the right time. Version 1.3 introduces the ngMessages module as an improved way to deal with complex validation error messages. An example of this new syntax from the yearofmoo.com blog post:

    <form name="myForm">
      <input type="text" name="colorCode" ng-...>
      <div ng-...>
        <div ng-...>...</div>
        <div ng-...>...</div>
        <div ng-...>...</div>
      </div>
    </form>

According to Niemela, beyond reducing the lines of code, the new ng-messages module will "solve the complexity of one error showing up before the other." It's not clear when 1.3.0 will reach a stable release, but for version 1.2 there were three release candidates spread over three months. So far there have been three release candidates for 1.3 in three weeks. Beyond version 1.3 is version 2.0 which, according to a blog post by the Angular team, will focus on making Angular a "framework for mobile apps". AngularJS is a JavaScript framework created by Google.
https://www.infoq.com/news/2014/09/angular-13-html-forms/
Back when UNIX was born the <tab> character was a first class citizen in every computer on the planet, and many languages used it as part of their syntax. Static binaries were invented when Sun and Berkeley co-developed shared libraries and there needed to be binaries that you knew would work before shared libraries were available (during boot before things were all mounted, during recovery, etc) It always amazed me when someone looks at computer systems of the 70's through the lens of "today's" technology and then projects a failure of imagination on the part of those engineers back in the 70's. I pointed out to such a person that the font file for Courier (60 - 75K depending) was more than the entire system memory (32KW or 64KB) that you could boot 2.1BSD in. Such sillyness. reply Writing a mere string out to a file on a non-UNIX was nowhere near as easy as ‘fd = open (“a file”, O_WRONLY); write (fd, p_aString, strlen (p_aString)); close (fd);’ on UNIX. Many systems required either a block-oriented or a fixed-record (with the record structure to be defined first) to be opened, the block or the record to be written out and then the file to be closed. Your record-oriented file has grown very large? Brace yourself for a coffee break after you invoke the “file close” system call on it. Did you process get killed off or just died mid-way through? Well, your file might have been left open and would have to be forcefully closed by your system administrator, if you could find one. Your string was shorter than the block size, and now you want to append another string? Read the entire block in, locate the end of the string, append a new one and write the entire block back. Wash, rinse and repeat. Oh, quite a few systems out there wouldn’t allow one to open a file for reading and writing simultaneously. Flawed make? Try to compile a 100 file project using JCL or DEC’s IND using a few lines of compilation instructions. 
Good luck if you want to have expandable variables, chances are there wouldn’t be any supported. You want to re-compile a kernel? Forget about it, you have to “generate it” from the vendor supplied object files after answering 500+ configuration related questions and then waiting for a day or two for a new “system image” (no, there were no kernels back then outside UNIX) to get linked. Awkward UNIX shell? Compared to crimes a numbers of vendors out there committed, even JCL was the pinnacle of “CLI” design. No matter how perfect or imperfect some things were in UNIX back then, hordes of people ended up running away screaming from their proprietary systems to flock to UNIX because suddenly they could exchange the source code with their friends and colleagues who could compile it and run within minutes, even if some changes were required. Oh, they could also exchange and run shell scripts someone else wrote etc. In the meantime, life on other planets was difficult. But, having said that, I really hate how TAB is used in Makefiles. The first screen editor I used was the Rand Editor and it treated TAB as a cursor motion character. It was a handy way to move horizontally on a line by a fixed amount, something not easy to do in, say, vi. This editor converted all tabs to spaces on input and then re-inserted them on output, which mostly worked, but it could mess up some files that were fussy about tabs. The challenge came when there were devices which operated incorrectly when presented with a TAB character or confused command data with actual data. That became the "killer" issue when computers started substituting for typewriters. Because a typist knows that if you hit the tab key the carriage would slide over to the next tab stop that was set on the platen bar, but the paper was "all spaces". 
When you try to emulate that behavior in a computer now the TAB becomes a semantic input into the editor "move to the next tab stop" rather than "stick a tab in" and "move to the next tab stop" could be implemented by inserting a variable number of spaces. Computer engineers knew it was "stupid" to try an guess what you wanted with respect to tabs so they wrote a program for formatting text called 'troff' (which had similarities to the program on RSX11, TENEX, and TOPS called RUNOFF. It is always interesting to look back though, if you had told a system designer in the 70's that they would have gigabytes of RAM and HD displays in their pocket when their grandchildren came to visit they would have told you that people always over estimate how fast change will happen. TAB was a stupid choice even in the 1970s. I still prefer to indent C with tabs, and I'm not the only one. It's really not hard to handle tabs well, as a code editor or as a programmer. You can view them as whatever width you like. You can have marked to distinguish from spaces (e.g. vim's "set list"). They've been around for longer than you have so there's no good excuse for forgetting about them. — Stuart Feldman The author even states that UNIX was amazing when it came out, but that doesn't mean all its ideas make sense today. It's not hard to see how this happened: since pretty much all computers that people normally interact with are either running Windows or a Unix-like system, it has set up a dichotomy in people's minds. When the Unix-Haters Handbook was released, there were still other operating systems which could have a plausible claim to being better, but they have all faded away, leaving only these two. And since the "Real Hackers" prefer Unix, Unix and all its decisions must be the right ones. Unix is great, but people need to be more realistic about its shortcomings instead of mindlessly repeating mantras about "The Unix Way" without critical examination. 
So all that new and advanced technology doesn't really interest me anymore. I'm looking for a 1969 Honda CL350 right now. They're still around and running fine. They're much simpler and much more maintainable. No Engine Computer. No sensors. Everything really easy to understand. I kinda want my OS like that too. With all its warts I can keep it running. Not true. You will be able to get an aftermarket ECU that can just ignore the sensors and run in open-loop mode. That will be exactly the same as running with carburetors: fixed fuel/air ratio that is almost always wrong. This is also the failure mode for OBDII cars - sensor failures lead to the ECU running in open-loop mode, which lowers MPG and increases emissions, which will eventually foul the catalytic converters. > I'm looking for a 1969 Honda CL350 right now. They're still around and running fine. They're much simpler and much more maintainable. My wife has owned a CB175 and a CB550. Both required tons of work and were maintenance nightmares. They really are piece of shit bikes when it comes to reliability when compared to most Japanese bikes from 1990 onward. The prices old Honda bikes command on the market are completely out of whack with what you get because of strong demand from both the vintage bike enthusiast and hipster demographics. I would not ride one if it was given to me for free. Compared to other bikes of their day these are very simple to maintain and they were designed from the start to be kept running by the average person. It really depends on what you are looking for in a bike. If enjoying turning a wrench on a Saturday makes me a hipster then pass the beard wax. * fast cylinder wear (poor materials/manufacturing, engine rebuilds all around) * unreliable electric system ("mostly" fixed on CB550 with Charlie's solid state ignition and rectifier) * Leaking gaskets (design flaw) I know a lot of vintage Honda collectors and a few racers, and also a lot of vintage BMW collectors. 
BMW motorcycles from the same era do not have these problems. There is a special nut on the oil spinner but that's the only specialty tool I can think of on the bike until you start actually disassembling the whole thing and you don't even have to remove it to do an oil service. I guess the shock/steering head adjuster is a specialty tool? But that was included with the bike so not hard to find either. Parts can be a bit harder but since these things were so popular it's a lot easier than any other bike from 1969. Also the aftermarket is huge if you don't care about staying totally stock. Do you know about the Royal Enfield Bullet [1] from India? It is not at all as technologically sophisticated as the bikes you mention and others, but it is a fantastic bike to ride. They are selling it in the West, too, from some years. Originally from the Enfield company, UK, then was manufactured in India for many decades (maybe starting around WWII), as the same standard model. Then a decade or more back, the new managing director invigorated the company with better quality, newer models, higher engine capacity (cc) models (like 500 cc), etc. - though I would not be surprised that some fans prefer the old one still - maybe me too, except not ridden it enough, I rode a 250 cc Yezdi much more - also a great bike, almost maintenance free, a successor to the classic Ideal Jawa bike from Czechoslovakia, and also made in India for many years. Yezdi was stopped some years ago, last I read, but the Bullet is still going strong and even being exported, a good amount, to the West. [1] A Swiss guy, Fritz Egli (IIRC), was/is a fan and modified some of them (Bullets) over there. It was the subject of a magazine article. I first rode a Bullet in my teens. A real thumper. The number one flaw on these bikes in my opinion is the stock carburetors. Honda used a constant velocity type carburetor which in theory provides very smooth throttle action and is easier to ride. 
In reality the vacuum diaphragm is a very delicate part that frequently fails with tiny air holes that leak vacuum, causing a mismatch in the throttle input between the cylinders (twin carb two cylinder, one per cylinder). This is a similar failure mode to your modern Suzuki air pressure sensor failing. The other pain point is the mechanical points in the ignition system. This is an area of constant fiddling with adjustment and new condensers. It's much preferred to simply replace the points system with a "modern" (1980s technology) electronic ignition system. This removes the moving parts and greatly extends the life of a tune-up. Old bikes are super cool and the 350 platform is a fantastic one but even back then bikes had "high tech" parts that did not age well and gaps where better tech had not been invented. The great thing about the 350 platform is that due to its popularity people are still coming up with solutions. In this way a Honda 350 is similar to Unix. This was not a fancy engine. It was not a particularly powerful engine. Not a single thing on it or about it was exceptional. Because it was easy to work on and with add-ons and modifications it could be coaxed into doing things it wasn't really meant to do. It was just solid and got out of your way. So engineers and hobbyists tinkered and tinkered and got something over 2.5x the original horsepower out of the thing. Unix commands might be solid, but the 'get out of your way' bit is what troubles me. People will lament otherwise good engines that are hard to work on because of a design flaw or the way they're laid out. Unix is falling down here. It's just that everyone else is at least as bad. But a guy can dream. It's nonsense. Programming as it's done today - which is strongly influenced by UNIX ideas - is the last thing most users want to do. They have absolutely no interest in the concepts, the ideas, the assumptions, the mindset, the technology, or the practical difficulties of writing software. 
UNIX set human-accessible computing back by decades. It eventually settled into a kind of compromise at Apple, where BSD supplied the plumbing and infrastructure and NeXT/Apple's built a simplified app creation system on top of it, which could then be used to build relatively friendly applications. But it's still not right, because the two things that make UNIX powerful - interoperability and composability - didn't survive. They were implemented in a way that's absolutely incomprehensible to non-programmers. Opaque documentation, zero standardisation for command options, and ridiculous command name choices all make UNIX incredibly user hostile. Meanwhile the underlying OS details, including the file systems and process model - never mind security - fall somewhat short of OS perfection. The real effect of UNIX has been to keep programming professional and to keep user-friendly concepts well away from commercial and academic development. Programming could have been made much more accessible, and there's consistent evidence from BASIC, Hypercard, VBA, the HTML web, and even Delphi (at a push) that the more accessible a development environment is made, the more non-developers will use it to get fun and/or useful things done. UNIX has always worked hard to be the opposite - an impenetrably hostile wall of developer exceptionalism that makes the learning curve for development so brutally steep it might as well be vertical. UNIX people like to talk about commercial walled gardens as if they're the worst possible thing. But UNIX is a walled garden itself, designed - whether consciously or not - to lock out non-professional non-developer users and make sure they don't go poking at things they shouldn't try to understand. Still I'm not so sure that UNIX (or programmers) is the source of it. I started with DOS (later Windows) and TP (later Delphi) so please don't think I'm biased here. 
I have recently bought an Acer convertible for my mother with Windows 10 and so I'm getting a reality check on the sad state of computer usability in 2017. Teaching her to use an Android phone was difficult, but this is not better. IMHO the reason of user-hostility is not some guild mindset, it's just that computer adoption is needed much faster than the time it would take to develop decent GUIs. The RTFM knee-jerk reaction comes later from people with some deep insecurities and not much imagination. Totally agree on that UNIX is also a sort of walled garden. I'd say the same thing about GPL'd ecosystem, in this case for license issues. Not Linux, surely. That would be more like the Suzuki, but it used to be an older bike so they left the carburetor in there and next year someone will add an electric motor too. IMO you can take any modern distro and strip it down to something understandable. It just takes some time to learn how to strip it down and how what's left works (and this is ongoing, as things are always changing). I'm not saying it's trivial, but neither is learning how to rebuild a motorcycle. On Linux or OpenBSD? Also, Erlang and Elixir run out of the box on it, so it's suitable for nearly all of my personal projects. Spare parts are easy to find and there are a lot of aftermarket parts. The downside is that the bike is basically a late '80's bike. 40ish horsepower, poor fuel economy, weak brakes, poor handling, etc... But, like lots of people say, it's more fun to drive a slow bike fast than a fast bike slowly. I love it. When I thought I wanted a bike, I looked around for a Condor A350. That's why we have both "ls" and "find", even though they do the same thing conceptually. "ps" has column output, but the way it formats, sorts, selects etc. columns is reinvented and not transferable to other tools such as "lsof" and "netstat", not to mention "top", which of course is just a self-updating "ps". Every tool invents filtering, sorting, formatting etc. 
in their own unique way. The various flags are not translatable to other tools. I've never used Powershell, but it seems to get one thing right that Unix never bothered to. Tools should emit data, or consume data, or display data, but tools should never do all three, because they will invariably reimplement something poorly that has been done elsewhere. The right way to do this is to pick some reasonable text-based structured interchange format - s-exprs, JSON, whatever. Actually, it wouldn't be a bad thing to have a binary protocol as well, so long as all that complexity is negotiable, and is implemented by the standard library. It even has an anti-foreword by Dennis Ritchie which kinda reminds me of the Metropolitan Police spokesman's blurb on the back of Banksy's book. BTW, it starts off with an anonymous quote that I've never heard, Two of the most famous products of Berkeley are LSD and Unix. Unix is of course from Bell Labs. And anyone who knew anything would have said instead, Two of the most famous products of Berkeley are LSD and BSD which at least would have been funny if still inaccurate. Anyways, it seems like a fun rant of a book which I'd never heard of. The above point about not getting it can be applied to Linux as well. Lessons learned elsewhere are stupid until Linus finally understands them and then they're obvious., collectives like the FSF vindicate their jailers by building cells almost com- patible. You claim to seek progress, but you succeed mainly in whining.! Oh for the days when 750kB was considered "massive" for a binary. [1]: >! Pretty good. I think you're suffering from selection bias.... 
I really wish undergraduate Software Engineering programs included a course that was a survey of operating systems, where you'd write the same program (that did a lot of IPC) on, say, base WinNT (kernel objects with ACLs!); base Darwin (Mach ports!); a unikernel framework like MirageOS; something realtime like QNX; the Java Card platform for smart-cards; and so forth. Maybe even include something novel, like. Hey, that sounds like Go's line. Oh wait .. I use Rust. There's a bunch of things I do not like about Rust. Its macro language is nigh on unusable. A macro language should be an expedient; it should allow you to get textual things done, committing necessary atrocities along the way knowing that the result must still pass muster with the compiler. It isn't supposed to remind you of a 61A midterm. I like the safety and that is why I use Rust. I don't get adding functional programming to a systems programming language. Alas, while it's a mandatory POSIX utility (AFAIU), some Linux distributions have seen fit to remove it from base installs. yeah, totally have never seen that... As a developer, it was a bit painful, but the whole file versioning was handy sometimes. Probably the biggest one is that UNIX is, at bottom, a terminal-oriented multi-user time sharing system. This maps badly to desktop, mobile, and server systems. The protection model is a mismatch for all those purposes. (Programs have the authority of the user. Not so good today as in the 1970s.) The administration model also matches badly. Vast amounts of superstructure have been built to get around that mismatch. (Hello, containers, virtualization, etc.) Interprocess communication came late to UNIX/Linux, and it's still not a core component. (The one-way pipe mindset is too deeply ingrained in the UNIX world.) Yes, you have to use their APIs in order to write graphical programs, but the same is true on any OS with a GUI system. It's possible to write iOS apps in pure C if you want to.
Sure, that'd be a pain, but it's possible. Less painful and actually decently reasonable would be to write all the GUI-specific stuff in Objective-C and any other logic in pure POSIX-conforming C or C++, since you can mix all those languages freely in a project. POSIX is more like the extended runtime that C needs to be portable outside UNIX walls, which isn't being fully implemented on iOS and Android.... I disagree, I think this is still the sweet spot between security and utility. Users have been trained to just click approve on any privilege escalation dialog. From the description of Combex (): > Suppose you were running a capability-secure operating system, or that your mail system was written in a capability-secure programming language. In either case, each time an executable program in your email executed, each time it needed a capability, you the user would be asked whether to grant that capability or not. In reality, users will get sick of being prompted every 30 seconds and learn to automatically approve every request. Capability security works well in theory, but I've never seen an implementation that works well in practice. That's the keyword there. They don't actually demonstrate a lot of common apps and how the user is prompted. It sounds a lot like Windows UAC with a default lockdown. They don't even mention whether permissions are permanently granted or not. I think the next step is a capability runtime OS with a kernel personality for Linux for backwards compatibility. Sort of the converse of what we're doing right now. I feel so privileged to read this random guy's blog, and it's terrific that he eschews inflating his ego so well. The author is taking the time to post his thoughts on a private blog. If you're not happy with the level of rigor, then don't read it, don't share it, and don't believe it. But no, the author has no responsibility to provide you with a comprehensive list of citations and references.
What I don't like about that attitude is that it suggests that the blogpost should be excused from criticism of its rigor. But any assertion and opinion by anyone can and should be considered open to criticism. > "But no, the author has no responsibility to provide you with a comprehensive list of citations and references." Of course he is not responsible. But if someone expects or wants people to be convinced of their argument, then prefacing it with this rude dismissal does not help. A better way than "This article was written hastily, and I don’t want to further improve it. You’re lucky I wrote it." without the unnecessary rudeness and ego boosting could be more along your lines: "This article was written hastily and shouldn't be subject to PhD-thesis-level scrutiny, so I've posted without any expectation that I will improve it or respond to criticism." Sounds to me like you agree with the substance of what the author said, and just happen to disagree with the delivery. Keep in mind that different people have different writing styles, and people often post things like what you quoted in a tongue-in-cheek manner. Some people prefer a formal, humble style of writing. Others like a more humorous style, and others yet enjoy a Stephen Colbert faux-braggadocious style. It would be unfortunate if we browbeat anyone who dares to inject some quirky humor into their writing. The author makes a bunch of assertions that range from misinformed to straight-up wrong. There is 0% Stephen Colbert-style self awareness. No one is lucky that it exists. When you engage in personal endeavors, do them for their own merits. And especially if you're doing something that no one and nothing is compelling you to, please at least try. I was envisioning people nit-picking the lack of formality, when it was merely an enthusiastic, from-the-hip post. A day later I realized that wasn't the best way to start things off with, and toned it down. But I get where he's coming from.
And I'm grateful he took the time to write it. It was an interesting read. I'm not saying it has to be PhD-thesis-level, but if proven wrong you are morally obligated to update the information or remove it. Otherwise you are spreading false knowledge. This doesn't apply to opinions ofc (the bulk of private blogging). Screw entitled users though, they should fix it themselves :) After 20+ years of working in the business and reading ridiculous amounts of articles, and some books, I can quite easily find clear errors or omissions in most everything written about computers and programming, Knuth excluded. If everyone were obliged to update to fix any errors, even only factual, it's likely that much - or even most - of all I've read would never have been posted for me to read in the first place, and I think both the world and I would be poorer for it. Correctness is important, but so is the ability to make judgments of validity, to critique, disseminate, inspire and express one's thoughts, even when they turn out to be wrong. You can just start the countdown until all the UNIX greybeards come out and slam them with why it's so wrong and how to do it with some obscure bash feature.

>How to touch all files in foo (and its subfolders)?

find foo -print0 | xargs -0 touch

find foo -exec touch {} \;

You wouldn't consider it to be immature to not do the right thing just because it was communicated in an unpleasant way that a 'self-respecting person' should have disdain for? I'm not talking about this particular article or defending immaturity - I just don't see how your position is any better; more so it's just as unhumble as the 'rockstar position' since you would apparently be willing to write worse code just to avoid learning from someone that's an asshole. A lot of people find that cockiness funny and find the informality welcoming.
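The two find one-liners quoted above differ mainly in how many processes they spawn: `-exec touch {} \;` forks one `touch` per file, while `xargs -0` batches arguments into a few invocations. A third option is to skip process spawning entirely and set the timestamps in-process. A minimal sketch in Python (the helper name `touch_tree` is my own, not from the thread):

```python
import os
import time

def touch_tree(root):
    """Update the mtime/atime of every file and directory under root
    in a single process, instead of forking one `touch` per entry."""
    now = time.time()
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        os.utime(dirpath, (now, now))  # the directory itself
        count += 1
        for name in filenames:
            os.utime(os.path.join(dirpath, name), (now, now))
            count += 1
    return count
```

Whether the fork/exec overhead actually matters depends on the size of the tree; for a handful of files the shell one-liners are perfectly fine.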
And many of those tend to find cold formality, fake humility or lack of self-confidence as signs of something being boring, uninteresting or outright creepy. EDIT: Comment was edited, original: It's hard to believe this came from a place of humor after reading the otherwise dry article. The article was otherwise fairly interesting, but that certainly left a sour taste in my mouth. He's lucky I read past the first paragraph, if it wasn't so prominent on HN I probably wouldn't have bothered listening to anything he had to say. The edit was because I only ended up scanning what I intended on reading and felt the second part of my criticism was overly harsh. > The provocative tone has been used just to attract your attention. You could also have guessed it when you noticed it was a kid student showing up (and off) with a lengthy critique of the historical evolution of UNIX tools and other OSes and tools that he discovered a couple of years ago, while you possibly were using them and watching them evolve while he was still in his father's bollocks (and some of us (not me) were working on UNIXes while his father was a kid). You know what to expect in this case, especially if you remember having produced the same kind of over-confident and yet uninformed rant in your days :-) (I do :-D ) There's irony in people being insulted by an arrogant tone and loudly proclaiming they stopped reading something. It may be that the author is not actually a self-important asshole, but starting an article like that makes it look like he likely is. Rob Pike 2004,... In view of that, it's only natural that big segments of software beyond the OS like word processors are also stagnant. We don't actually need a diverse marketplace of competing word processing ideas anymore... the problem is fundamentally solved as far as the public is concerned. It's not as exciting for software developers, but it's totally natural for it to happen. The problem here is that there aren't a lot of alternatives.
You could use Windows, which is like listening only to music by MC Hammer, or you could use a Mac, which is like listening only to music by Duran Duran. Because of software/backwards compatibility concerns, and how dependent everything is on the underlying OS, it's really hard to change anything in the OS, especially the fundamental design. It'd be nice to make a clean-sheet new OS, but good luck getting anyone to adopt it: look at how well Plan9 and BeOS fared. You say that like it were a bad thing. Whereas Mac OS, Windows, iOS, Android, ChromeOS have moved into more productive language runtimes, with rich frameworks, improving safety across OS layers, even if they have a few bumps along the way. There's nothing preventing you from running different language runtimes and such on Linux/Unix systems; people do it all the time. Have you not noticed Mono? It's been around for many years. Plus frameworks like Qt; that certainly wasn't around before the late 90s. A few more releases and in around 10 years, Win32 will be drinking beers with Carbon. Mono and Qt don't change the architecture of UNIX and their adoption across UNIX variants isn't a game changer. No, it hasn't. Filenames are still case-insensitive (and in a terrible way, where it seems to remember how they were first typed but that can never be changed), backslashes are still used for path separators instead of escaping characters, and the worst of all is that drive letters are still in use, which is an utterly archaic concept from the days of systems with dual floppy drives. Also, try making a file with a double-quote character in it, or a question mark. I've run into trouble before copying files from a Linux system to a Windows system because of the reserved characters on Windows. >Mono and Qt don't change the architecture of UNIX and their adoption across UNIX variants isn't a game changer. Nothing you've mentioned has changed the architecture of Windows. 
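On the point above about copying files from Linux to Windows and hitting reserved characters: the Win32 layer rejects a handful of characters and legacy DOS device names that are perfectly legal in Unix filenames. A hypothetical pre-flight check, sketched in Python (the function name and the exact rule set are my own summary of the documented Win32 naming conventions, not exhaustive):

```python
# Characters and legacy DOS device names that Win32 reserves; all of
# these are legal in a Unix filename but break a copy to Windows.
WIN_RESERVED_CHARS = set('<>:"/\\|?*')
WIN_RESERVED_NAMES = {"CON", "PRN", "AUX", "NUL"} | {
    f"{dev}{n}" for dev in ("COM", "LPT") for n in range(1, 10)
}

def invalid_on_windows(name):
    """Return True if a single path component would be rejected by Win32."""
    stem = name.split(".", 1)[0].upper()
    return (
        stem in WIN_RESERVED_NAMES          # e.g. "con.txt" is still CON
        or any(c in WIN_RESERVED_CHARS for c in name)
        or any(ord(c) < 32 for c in name)   # control characters
        or name != name.rstrip(" .")        # trailing spaces/dots are dropped
    )
```

Running such a check before copying a tree over SMB is one way to avoid discovering the mismatch halfway through the transfer.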
The fundamental architecture of Windows hasn't changed at all since WinNT 4.0 (or maybe 3.5); it just has an ugly new UI slapped on top and some slightly different administration tools. The Windows (Win32) environment suffers those limitations. It also suffers 20+ years of strong binary compatibility, broad hardware support, and consistent reliability that systems of similar class (e.g. Linux desktops) can't match. If it makes you feel any better, drive letters are a convenient illusion made possible by the Win32 subsystem; NT has no such concept and mounts file systems into the object hierarchy (NT is fundamentally object oriented - a more modern and flexible design than is provided by UNIX). The fundamental architecture of Windows, the kernel, hasn't changed in ages because it doesn't need to; it is far more sophisticated than UNIX will ever be and far more sophisticated than you will ever need. The fundamental architecture of Win32 hasn't changed since 32-bits was an exciting concept and it won't change because the market has said loud and clear that they want Windows-level compatibility. See Windows RT and The Year of the Linux Desktop for evidence that users aren't clamoring to ditch Win32 in favor of something more pure. For us, Windows pathnames are just fine. As for Windows architecture, maybe you should spend some hours reading the Windows Internals book series, BUILD and Channel 9 sessions about MinWin, Drawbridge, Picoprocesses, UWP, User Space Drivers, Secure Kernel,.... I'd love to read the source code myself to see how it works instead. Not exactly "you can get it if you want it", but not "you can't even get it over our dead body", either. (Three versions of NT have been leaked: NT 4.0, Windows 2000, and the Windows Research Kit (which is Win2k3) -- they are all trivial to find online (first page of Google results).) Only for new applications.
Those written for the old ABIs still trip up after the 240th character, even though the filesystem supports much more. To put it in other words: I believe the inherent UNIX limitations (process model for terminals, process signalling, lack of structure) are still less limiting than DOS assumptions about the consumer hardware and applications of the 80's. Unix applications written using old assumptions (e.g. ?14? characters max for symbol names in libraries, ?16? bit address space, assuming the C library knows of gets) can have problems on modern systems, too. Windows' roots are in VMS, not UNIX. There is hardly anything UNIX-related in its architecture, regarding kernel design. What about something like this: It's written in Rust, not C. In fact, according to the GitHub stats, there is no C.

>Rust 72.4%
>Shell 13.2%
>Makefile 12.5%
>TeX 1.9%

There are two types of languages, the ones everyone complains about, and the ones nobody uses. All of UNIX makes perfect sense if you are using UNIX for UNIX. If you're doing other things, like abstracting to "folders" and so on ... I am open minded and can see where it starts to fall apart a bit. But I use UNIX for the sake of UNIX ... I am interested specifically in doing UNIX things. It works great for that. This could be either a no-true-Scotsman, or a tautology. To solve this, you'd need to specify what UNIX is good at. It's even worse! I am saying that working in terminals, with strings of text and non-binary-format config files ... and all of the tools built around that ... is an end in itself. Every single "broken" example in the OP is something that I find non-remarkable and, in fact, makes perfect sense to me. Further, there is an immense value in GUI based systems: discoverability. On a GUI, you can learn how to use a program without ever consulting a manual just by inspecting your screen. This addition is what brought the computer to the masses. Finally, the terminal model of UNIX is just horrible.
The hacks-on-top-of-hacks that are needed to turn the equivalent of a line-printer into something like ncurses or tmux are horrible. The current terminal is like this purely because of legacy. If you'd design a system for "working in terminals, with strings of text and non-binary-format config files" from the bottom up, it would look totally different. Sadly, getting it to work with existing software would be a total nightmare. All that being said, UNIX still has the better terminal (though I hear good things about PowerShell). Certainly, it is the best system for "working in terminals, with strings of text and non-binary-format config files". Though competition is sparse (Windows, and maybe Mac, depending on whether you consider it to still be UNIX or not). Man and apropos get you a long way. The near ubiquity of --help as a flag helps too. Well-managed program distributions will even tell your shell how to tab-complete what it wants. MOST CRUCIALLY, though, the text of a command line or config snippet can be pasted, emailed, im'ed, blogged, and found with a search engine! Try describing how to navigate through a moderately complex GUI in text or by phone... it's a disaster. passwd is not read on every system call, and anything that is read frequently is almost certainly in the fs cache. I got about 3 assertions into the article before I decided I had enough of that bullshit. Think of it this way: if, per Unix Philosophy (points (1) and (2) of your summary), programs are kind of like function calls, and your OS is kind of like the running image, then (3) has you programming in a dynamic, completely untyped language which forces each function to accept and return a single parameter that's just a string blob. No other data structures allowed. I kind of understand how it is people got used to it and don't see a problem anymore (Stockholm syndrome). What shocked me was learning that back before UNIX they already knew how to do it better, but UNIX just ignored it.
Right. The title should have been reflective of that "Various idiocies Unix has accumulated to this day" but since the article mentions Unix Philosophy, my point is that the article should have criticised the philosophy and not the practice. > ." But this has actually proved to be very useful as it provided a standard medium of communication between programs that is both human readable and computer understandable. And ahead of its time since it automatically takes advantage of multiprocessor systems, without having to rewrite the individual components to be multi-threaded. > "(3) makes you programming with a dynamic, completely untyped language which forces each function to accept and return a single parameter that's just a string blob. No other data structures allowed." That may be a performance downside in some cases, but the benefit of having a standard universally-agreeable input and output format is the time it saves Unix operators who can quickly pipe programs together. That saves more total human time than gained from potential performance benefits. It wasn't ahead of its time. By the time Unix was created, people were already aware of the benefits of structured data. > it automatically takes advantage of multiprocessor systems, without having to rewrite the individual components to be multi-threaded. That's orthogonal to the issue. The simple solution to Unix problems would be to put a standard parser for JSON/SEXP/whatever into libc or OS libraries and have people use it for stdin/stdout communication. This can still take advantage of multiprocessor systems and whatnot, with an added benefit of program authors not having to each write their own buggy parser anymore. > but the benefit of having a standard universally-agreeable input and output format is the time it saves Unix operators who can quickly pipe programs together. That saves more total human time than gained from potential performance benefits. I'd say it's exactly the opposite.
Unstructured text is not a universally agreeable format. In fact, it's non-agreeable, since anyone can output anything however they like (and they do), and as a user you're forced to transform data from one program into another via more ad-hoc parsers, usually written in the form of sed, awk or Perl invocations. You lose time doing that, each of those parsing steps introduces vulnerabilities, and the whole thing will eventually fall apart anyway because of a million reasons that can fuck up the output of Unix commands, including things like your system distribution and your locale settings. As an example of what I'm talking about, imagine that your "ls" invocation would return a list of named rows in some structured format, instead of an ASCII table. E.g.

((:columns :type :permissions :no-links :owner :group :size :modification-time :name)
 (:data
  (:directory 775 8 temporal temporal 4096 1488506415 ".git")
  (:file 664 1 temporal temporal 4 1488506415 ".gitignore")
  ...
  (:file 755 1 temporal temporal 69337136 1488506415 "hju")))

ls | filter ':modification-time < 1 month ago' | cp --to '/home/otheruser/oldfiles/'

find :name LIKE ".git%" | select (:name :permissions) | format-list > git_perms_audit.log

BTW. This is exactly what PowerShell does (except it sends .NET objects), which is why it's awesome. Except it is completely unusable for network applications because the error handling model is broken (exit status? stderr? signals? good luck figuring out which process errored out in a long pipe chain) and it is almost impossible to get the parsing, escaping, interpolation, and command line arguments right. People very quickly discovered that CGI Perl with system/backticks was a very insecure and fragile way to write web applications and moved to the AOLServer model of a single process that loads libraries. I think this is great. There are slightly more principled ways to do it, but having to convert everything to one single format at the end of the day keeps you humble.
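The structured-ls idea above doesn't need a new OS to try out: any pair of programs can agree to pass one JSON object per line instead of an ASCII table, and downstream stages then filter on fields rather than column offsets. A toy sketch (the stage names `ls_records` and `older_than` are invented for illustration, not existing tools):

```python
import json
import os
import stat
import sys
import time

def ls_records(path="."):
    """Emit one record (dict) per directory entry instead of an ASCII table."""
    for name in sorted(os.listdir(path)):
        st = os.lstat(os.path.join(path, name))
        yield {
            "name": name,
            "type": "directory" if stat.S_ISDIR(st.st_mode) else "file",
            "permissions": oct(stat.S_IMODE(st.st_mode)),
            "size": st.st_size,
            "modification-time": st.st_mtime,
        }

def older_than(records, seconds):
    """A 'filter' stage: structured fields need no ad-hoc sed/awk parsing."""
    cutoff = time.time() - seconds
    return (r for r in records if r["modification-time"] < cutoff)

if __name__ == "__main__":
    # Serialize as JSON lines so the next process in the pipe can parse it.
    for rec in older_than(ls_records("."), 30 * 24 * 3600):
        sys.stdout.write(json.dumps(rec) + "\n")
```

The point is not this particular schema but that the filter never has to guess where a column starts, which is exactly what breaks when locales or distributions reformat textual `ls` output.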
Let's go back to the previous decade's Hacker News:

1) Text is for humans, and is generally incomprehensible to machines. Encodings, arbitrary config file formats, terminals, etc, are all piles of thoughtless one-off hacks. It's a horrible substrate for compositing software functionality, through either pipes or files.

2) Hierarchical directory structures quickly become insufficient.

3) The filesystem permission model is way too coarse grained.

4) C is a systems/driver/low-level language, not a reasonable user-level application or utility language.

Actually, I'd take plain text conf over the Win10 registry any day also. How do you transfer your (arbitrary) program settings from one computer to another? I can tell you how on Unix. How would you on Windows?

2. /-based filesystems are head and shoulders better than Windows'. Why should moving a directory from my internal hard drive to an SSD over USB change my location? On Unix I can keep my home directory (or Program Files) on an NFS share, SMB share or SSD hard drive. Can I do the same on Windows?

3. It is, which is why SELinux was invented. But that's too hard, so no one uses it.

4. All major OSs (both Unix/Linux and Windows) are C or C++ based.

But here's something interesting: in the 90s, it was considered "advanced" to have the GUI be an inherent part of the OS, rather than being just another program. Windows and Plan9 did that. Yet, it turned out that admining a system without a first-class command line is a pain, so Windows is rolling back with PowerShell. Maybe Windows will one day make their Server headless, where you can do full sysadmining through SSH. Your personal registry settings are stored in %USERPROFILE%\ntuser.dat, and global registry trees are various files under %SystemRoot%\System32\Config. Actually, in many ways, Windows' organization makes it much easier to find profile data to copy than on Unix-based systems.
Most programs install configurations either in their install folder (C:\Program Files\<Product Name>) or in per-user data (C:\Users\<user>\Application Data\<Product Name>). In Unix, you might have a mix between dot folders (with some things choosing to use .<name> while others might settle for .config/<name>), as well as having to guess if various config files are under /etc/<product>, /usr/lib/<product>, /usr/share/<product>, or maybe even weird places like somewhere under /var or /opt.

> On Unix I can keep my home directory (or Program Files) on an NFS share, SMB share or SSD hard drive. Can I do the same on Windows?

Yep, you can remap where your user's home profile is. You can even distinguish between files that should be replicated across network shares and files that should not be replicated in per-user storage (AppData\Roaming versus AppData\Local). Computers are for humans. I like my configurations to be accessible to me, as well as my logs. It's a fundamental, unstandardized mess. From the perspective of just a single program, then fine, it does what it needs to and its individual text formats are human-understandable, and its code is quickly banged together to do what it needs with it. From an ecosystem perspective, it's a complete failure of usability and _interoperability_, which is the entire point of Unix philosophy in the first place! Humans aren't the only ones reading & writing config files: Software needs to do it, too, be it applications or automation scripts. Again, text is a horrible substrate for compositing software functionality. (More people would probably have read the Unix Haters Handbook if it was as pithy as this.) The Windows registry/PowerShell approach has the niceness of passing pieces of data instead of one big blob of text that has to be re-parsed at every step, but with the drawback of verbosity and fussy static typing.
Being able to directly pass s-expressions between programs without the format/parse/typing hokey-pokey of Unix and Windows would be nice. Genera did not have programs; it was all just function calls. There was a multi-user LMI Lisp machine that ran Unix concurrently, that used streams to communicate between the two Lisp and one Unix processors: Formats for inter-process communication are hard and far from a solved problem. Just look at what has come out in the past 10 years: Thrift, Protocol Buffers, Cap'n Proto, FlatBuffers, SBE, probably twice as many others I haven't heard of. Also, ZetaLisp and Common Lisp have way more types than those supported by S-expressions. For example, they have real vectors/arrays, structures, and full objects. Don't assume all of the Lisp world uses just the subset of Scheme used in the first couple chapters of SICP. If I had to guess, I'd say the issue with JSON and the like is how to deduce types (and the limited types available) combined with the issue of special characters in names and strings. XML goes a long way towards fixing that, but at the cost of a lot of extra bloat. A binary format with a nicely defined header might work, but those formats tend to not be so good about inserting stuff. There is something to be said in favor of plain text. If all solutions suck, go for the simplest and most flexible solution. Next thing - bitch about how Earth works, with every region, country and even, gasp, city being different because a few people a long time ago decided "this is the way to go". Natural selection ought to quickly do away with an organ that served no purpose but to occasionally kill one of the luckless organisms that possessed it. It should have been obvious that it was doing something else, or more specifically, that the gene(s) for "having an appendix" were. Note that this philosophy covers many concepts.
These discussions often mention the modularity and composition rules, but the other parts of the philosophy are also important. See "The Art Of Unix Programming" for a full explanation of the philosophy. What is missing in many cases is a concepts guide, explaining the key ideas, how to combine things, and what's possible in various subject areas. For GUI programs, menus / toolbars used to be the concept guide: what they show is what's possible, and they offer context help. This is why a GUI feels friendly. It sucks at composability, though. Current mobile interfaces, unfortunately, tend to lack this. If tiny GUI-oriented programs were easy to compose, had an easy way to save the composed state, and a number of daily-use programs bundled with an OS came in this form, providing example and reference, many people would consider following suit, I suppose. [1]:... This simple fact seems like the key to getting the masses into computing. For something like 6 years (say 12-18) GUIs were the way I interacted with computers. Need to do something and learn about it? Go and explore the UI until you find the option. If the option has a shortcut printed, you will remember it eventually. Sadly, GUI design is quite a separate discipline from software design. This means much open source software is missing GUIs. Those who write the software aren't always GUI designers. This also creates the mismatch between composing software and composing GUIs. As they are different disciplines, combining them means different things. A decent stopgap is massive frameworking and standardization on GUI to make it easier for devs to get a GUI. To get the really good stuff, commercial entities have the best position. They need their stuff to be usable by everyone, and this finances the hiring of GUI people. There is the rare gem of a developer who can also do GUI right, but that only has value in the case of small projects.
When projects grow, unless all devs have the GUI knack, you're gonna need some dedicated GUI people. It would be great if we could get more GUI-oriented people into open-source stuff but it seems like they aren't as attracted to open-source as devs are. It might be because devs can be at the ground floor of a project, and GUI, almost by necessity, comes later. "The Art of Unix Programming" It's not perfect - I'd love to see a guide with more practical examples - but it does do a good job covering the basic philosophy and some of the history of why certain design decisions were made. What I mean to say is that while there has been a lot of initial inertia from the previous technologies that make it hard to change, it's also true that the new technologies have failed to make large enough gains to warrant the pains of changing - this is precisely the demise of Plan9: it's not that it wasn't better, it's just that it wasn't better enough to warrant the huge expense of replacing old, working systems. I guess I am lucky he wrote this for a pleb like me. Suckless and cat-v.org would disagree. I'd also disagree since I'm a huge fan of plan9port. This is the OS equivalent of a Call of Duty teenage online player on Xbox Live. Part of it is that it's way more annoying to maintain a little tool than to make it part of a bigger project with a much bigger community; it's also far harder to discover than just being in the docs of a bigger project - Dirty hacks in UNIX started to arise when UNIX was released, and it was long before Windows came to the scene, I guess there wasn’t even Microsoft DOS at the time (I guess and I don’t bother to check, so check it yourself). At least he acknowledges that he's being incredibly lazy, and he shows the glimmer of an understanding as to why some of the things mentioned later happened: because Unix is from the early 70s, which were a very different time in computing & culture. - Almost at the very beginning, there was no /usr folder in UNIX.
All binaries were located in /bin and /sbin. /usr was the place for user home directories (there was no /home). Putting /home on a separate partition remains a pretty common thing to this day because users will tend to have greater storage requirements than just the root. /usr/bin and the like are the result of people realizing that this secondary larger disk is an acceptable place to put binaries and other files that aren't needed at bootup. - In other words, if you’ve captured Ctrl+C from the user’s input, then the operating system, instead of just calling your handler, will interrupt the syscall that was running before and return EINTR error code from the kernel. That's not the kind of interrupt they're talking about. - I’ve read somewhere that the cp command is called cp not because of copy but because UNIX was developed with the use of terminals that output characters very slowly. Yep, terminals that print on paper are pretty slow, as are 300 baud modems. I'm absolutely crushed I had to learn that 'cp' means 'copy'--it took hours to beat that into my head, and the thousands of keystrokes I've saved over the years are a small comfort (except to my rsi-crippled hands) - The names of UNIX utilities is another story. For example, grep comes from command g/re/p in the ed text editor. Well, cat comes from concatenation. I hope you already knew it. To top it all up, vmlinuz — gZipped LINUx with Virtual Memory support. 'cat' comes from 'catenate', in fact. What would you name 'grep' instead? "searchregexandprint"? - at least the main website of C that would be the main entry point for all beginners and would contain not only documentation but also a brief manual on installing C tools on any platform, as well as a manual on creating a simple project in C, and would also contain a user-friendly list of C packages This is one of the most ridiculous ones. You're talking about a programming language defined in the 70s, for Christ's sake. Lot of websites created in the 70s? 
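The EINTR behavior quoted above is why portable C code wraps slow syscalls in a retry loop. The same pattern, sketched in Python (purely illustrative: since PEP 475, Python itself retries interrupted syscalls automatically, and the name `retry_on_eintr` is my own):

```python
import errno

def retry_on_eintr(call, *args):
    """Retry `call` until it completes without being interrupted by a
    signal, mirroring the classic C idiom:

        while ((n = read(fd, buf, len)) == -1 && errno == EINTR)
            ;
    """
    while True:
        try:
            return call(*args)
        except OSError as exc:
            if exc.errno != errno.EINTR:
                raise
            # A signal handler ran mid-syscall; the kernel bailed out
            # with EINTR, so just reissue the call.
```

Forgetting this loop is a classic source of bugs: the program works fine until the first SIGCHLD or window resize arrives during a blocking read.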
There is a document with a good introduction to C, project examples, etc. and it's called The C Programming Language, a book by K&R. When Kernighan made another language a few years ago, yeah, he made a website for it--golang.org, it's one of the best project sites I've seen. The article points out some legit problems in Unix, but even leaving aside the author's ESL challenges it's poorly-written, poorly thought-out, and poorly-defended. - make's TAB "problem" as the very first argument is not very convincing. - The citations to back up the claim that a (possibly binary) registry database was better than small text files just don't back it up. There is nothing to defend a registry there. The quote is about fsync semantics which has nothing to do with a registry. Btw. in my perception it's a widely accepted fact that a big-pile-of-crap database is a bad idea. And oh, I haven't ever heard of any problem with passwd/group/shadow/gshadow being text files. And if there were, the access method is actually abstracted away, it's easy to switch backends to something else (NSS). (there is a problem with these files though -- they are denormalized, and not all fields have clear meaning and some programs interpret some fields in weird ways.) - Zombie processes. What's the problem there? They are just like file handles. Handles have to be closed before they are garbage collected. The actual problem is that you can't really "open" and "close" processes, only spawn new children, and the resulting hierarchy is not typically desired. - "We call touch in the loop! This means there is a new process for each file. This is extremely inefficient." Yeah and why exactly is the shell to blame that you use touch in a loop? (apart from the fact that it's almost certainly not a problem). Could go on but have to leave... Yeah, I found that one an especially weird gripe. Grepping was a new thing, so we needed a word for it.
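The zombie/file-handle analogy above can be demonstrated directly. This sketch is Linux-specific because it peeks at /proc to read the child's process state:

```python
import os
import time

# A child that has exited but has not been wait()ed on stays in the
# process table as a zombie -- the kernel keeps its exit status around
# until the parent collects it, much like an unclosed file handle.
pid = os.fork()
if pid == 0:
    os._exit(7)                     # child dies immediately

time.sleep(0.2)                     # child is dead, but not yet reaped

# Field 3 of /proc/<pid>/stat is the state letter; 'Z' means zombie.
with open(f"/proc/{pid}/stat") as f:
    state = f.read().rsplit(")", 1)[1].split()[0]
print("state before wait:", state)

_, status = os.waitpid(pid, 0)      # reaping releases the table entry
exit_code = os.waitstatus_to_exitcode(status)
print("child exit code:", exit_code)
```

Once `waitpid` runs, the /proc entry disappears; until then the kernel must keep the status somewhere, which is all a zombie is.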
'Grep' is short, easy to say and type, and relatively hard to confuse with similar words in the domain. Works for me. I can unfortunately imagine a modern startup implementing it, and shudder at potential names my imagination is coming up with... Searchlr, the best way to search text! ReadMonkey, your personal pattern recognizer! I'll stop now. "No, I mean the kind of car that is really big, that you drive on ice rinks to smooth out the ice." There's always some default subject implied for every command name. For "find" it is files, for "search" it could have been text. Files, the base type that's consistent across all the basic commands (AFAIK). The author cites this post by Rob Landley –... While I can't independently confirm Rob's claims, and he doesn't provide any citations, I do find them very believable – /usr was invented at Bell Labs because they were running out of space on their puny 1970s hard disks. (And an RK05 was small even by 1970s standards – the IBM 2314 mainframe hard disk, released in 1965, had a 30MB capacity; the IBM 3330, released in 1970, stored 100MB – of course, these disks would have cost a heck of a lot more than an RK05, and were likely not feasible for the UNIX team given their budget.) If they had bigger disks (or the facility to make multiple smaller disks appear like one big disk) – it is less likely they would have split the operating system itself across two disks (/ and /usr). (Using separate disks for the OS vs user data was more likely even with bigger disks since that was common practice on systems at the time.) (Some other operating systems from the same time period already had some ability to make multiple disks appear like one big disk. For example, OS/360 has the concept of a "catalog", which is a directory mapping file names to disk volume names; this means you can move individual files between disks without changing the names by which users access them.
In their quest for simplicity, Thompson and Ritchie and co decided to omit such a feature from UNIX.) Lisp, Smalltalk, Mesa, Pilot OS and Xerox Development Environment are from the early 70s as well (Lisp even earlier). The whole series is worth a read, especially part 3 'unfixable designs' which talks about Signals and the Unix permission model. (Windows also has a similar problem.) 1969 < 1981. Heck, in 1969 there wasn't even RT-11 that CP/M was modelled after. There was a brand new OS/8 that RT-11 was modelled after. If the system as designed accepts almost literally any characters in pathnames, it doesn't make a lot of sense to complain about people using "naughty" (troublesome) characters in pathnames they create. It would have made a heck of a lot more sense to have disallowed stupid characters (all white space and control characters at a minimum) from the outset. However, changing it at this point is pretty impractical. Of all the criticisms of UNIX, the singular one that strikes me as valid is the way that it accepts unquestioningly insane characters in pathnames. I would also maintain that case sensitivity in pathnames and a lot of other places is also a human interface error. It is arguable, and a matter of taste, but I find it obtuse. You should be able to "say" pathnames. The necessity for circumlocutions like "Big-M makefile" is offensive. I'm one of the biggest fans there is of UNIX and C and the Bourne shell and derivatives, but recognition of weaknesses and flaws in those you love is not weakness; it is wisdom. In fairness, though, allowing non-ASCII characters is what enabled the (mostly seamless) transition to UTF-8 filenames. "So don't do it." The problem with filenames is just a symptom of the biggest problem of UNIX conventions - passing around unstructured text. Filenames should have one well-defined format (AFAIR the kernel allows pretty much anything but the NUL character). That's it.
For most applications, filenames should be opaque data blobs compared for binary equality. But because we're passing around unstructured text, each program has to parse, reparse, and concatenate strings via ad-hoc, half-assed shotgun parsers. Each program does it slightly differently, hence the mess. As long as everyone recognized that putting certain characters in pathnames was counter-productive, things worked fine. Nobody ever dreamed of putting a space character in a filename when they all came from a CLI background. When barbarians came from Mac/Windows to UNIX, they lacked this background, and there went the neighborhood. I remember being stunned when I first encountered two periods in one filename! But it only took me a few seconds to grasp the fact that period was just another character, "suffixes" were just conventions, and it all made perfect sense. OTOH, the first time somebody showed me a file named "-fr\ \*" and suggested I delete it, I got one of my first disillusionments. P.S. - "/" is a character which is also "special" in pathnames. Actually, the particular FS may make other exceptions which are not globally enforced by the kernel. ZFS has several (such as "@"). Shortly after timeshare systems were created, people were thinking about security. I could buy this if not for the fact that back when UNIX was created, there were already better operating systems and sane solutions to those issues existed. It's more like that those aspects simply weren't really thought through, but instead just hacked together. Contrary to what seems to be a popular opinion nowadays, UNIX wasn't the first real operating system, just like C wasn't the first high-level programming language. I know I actually believed the latter, due to the way many C/C++ books were written. But no, in both the worlds of programming and operating systems, there already were better thought-out solutions. It's a quirk of history that UNIX and C ended up winning.
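The "-fr\ \*" anecdote above has a standard defence: pass the name as a single opaque argument and explicitly end option parsing with "--". A quick sketch (Python driving POSIX rm; the temporary directory is just scaffolding for the demo):

```python
import os
import subprocess
import tempfile

# A filename like "-fr *" is perfectly legal, but looks like options to
# most tools. As long as it is passed as one opaque argv element -- never
# re-parsed by a shell -- it is harmless.
d = tempfile.mkdtemp()
open(os.path.join(d, "-fr *"), "w").close()
created = "-fr *" in os.listdir(d)

# "--" tells rm that everything after it is an operand, not an option.
# (Prefixing "./" to the name is the other classic escape.)
subprocess.run(["rm", "--", "-fr *"], cwd=d, check=True)
removed = "-fr *" not in os.listdir(d)

os.rmdir(d)
print(created, removed)
```

The danger in the original story comes entirely from letting a shell re-expand the name; treated as an opaque byte string, it is just another file.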
There were differently thought-out solutions, but not necessarily better or worse. Perhaps the issues they thought out didn't matter as much at the time and are very unlikely to matter even a little bit today, and who knows how much they got wrong and made worse. It's very hard to speculate. But one thing I'm sure of is that these things cannot be really well designed from scratch and all the problems manifest only once they are used by people. So widely used systems can only be compared to widely used systems, not something niche or unused. Our industry does seem to be stuck in circles, continuously forgetting the ideas of past cycles and reinventing them, only for them to be forgotten again. To see that phenomenon in action, one does not have to look much further than the last 10-15 years of history of JavaScript to see how the web ecosystem basically slowly reinvented already long established practices from desktop operating systems and GUI toolkits... For those who're interested, there's a pretty good overview of them in Practical Common Lisp[0]. I'll quote the first paragraph of the subchapter "How Pathnames Represent Filenames", which serves as a decent TL;DR: ." [0] - There is an excellent discussion of the topic[0]. I find it utterly definitive in the way it relentlessly shows how you can't "fix" this issue completely any other way than by ruling out the bad characters by making the kernel disallow them. [0].... IMO the best approach would be to separate between the file name and the file object. When I edit a file with vim, should vim really need to know the name of the file? No. Likewise for a lot of other utilities as well. If instead of being so focused on file names and paths everywhere we operated instead mainly on inodes, then I think much would have been won. Now in some instances the file name is of interest to the program itself, for example if you attach a file to an e-mail, upload it with a web browser, tar a directory, etc.
but in all of these instances I think that the file name should be more separate and even for most programs that want the file name they should just treat the file name as a collection of bytes that have close to no meaning. In other words, I would want to translate paths and file names into inodes in just a very select few places and then keep them separate. This is what I am going to do in my Unix-derived operating system. I will get around to implementing said system probably never but you know, one can dream. NAME dd - convert and copy a file I read this, and was like "screw that, I'm not going to read it," and then was like, well let's see, and after reading, wish I had stuck with my initial gut instinct to close that tab. smh. 1) Standardize on some structured text serialization format (I like YAML for this) 2) Write a new shell Both of these things are compatible with the Unix Philosophy™, and thus said Philosophy is nowhere near collapse. Rusty around the edges, sure, and maybe with some asbestos in the ceiling tiles, but certainly renovatable. The philosophy is already prevalent in the world of "microservices"; an application is split into a whole bunch of independent (usually containerized) programs communicating via something like JSON over HTTP. 1) This is an appealing idea, but my claim is that there's no single serialization format that will work. (Or if there is one, it has yet to be invented.) More detail here:... There's nothing stopping anyone from using structured data over pipes, but I think it's a mistake to assume there will be or needs to be a "standard". 3) I agree that JSON over HTTP is very much in the vein of Unix. The REST architecture has a very large overlap with the Unix philosophy -- in particular, everything is a hierarchical namespace, and you have a limited number of verbs (GET / POST vs. read() / write() ). Great idea! We should use XML for that... 
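As a sketch of the structured-records idea discussed above: one JSON object per line on the pipe, so the consumer parses with a real parser instead of splitting on whitespace. The record fields and the filter condition here are invented for illustration:

```python
import json
import subprocess
import sys

# Structured records instead of ad-hoc text: each line is one JSON
# object, so a filename containing spaces survives the pipeline intact.
records = [{"name": "a file with spaces.txt", "size": 70},
           {"name": "plain.txt", "size": 9}]
producer = "\n".join(json.dumps(r) for r in records)

# Simulate a downstream filter reading the pipe line by line.
filt = subprocess.run(
    [sys.executable, "-c",
     "import sys, json\n"
     "for line in sys.stdin:\n"
     "    r = json.loads(line)\n"
     "    if r['size'] > 10:\n"
     "        print(r['name'])"],
    input=producer, capture_output=True, text=True, check=True)

print(filt.stdout.strip())
```

Nothing in UNIX prevents this today; the point of contention in the thread is only whether one serialization should be *the* standard.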
So yeah, any of that would be a much better idea than unstructured text, and yes, you can serialize all those use cases into trees. I'd steer away from XML for sake of efficiency and human-readability though. The terminal is responsible for hacks upon hacks: Colors, ncurses, signals, and whatnot. "...great and perfect" is a strawman. Whether "some people" think that is irrelevant. Some of this article is interesting, but the fact of the matter is 40-year-old systems have signs of being 40 years old. If "fixing" everything were easy, it'd be done. Tabs in Makefiles throw off the uninitiated for 10 minutes, then they learn, shrug and move on. These scars and stories are part of the package. Reading further, some of this is just incorrect... >That’s not to mention the fact that critical UNIX files (such as /etc/passwd) that are read upon every (!) call, say, ls -l, are plain text files. The system reads and parses these files again and again, after every single call! Not on my system. > It would be much better to use a binary format. Or a database. On my system, it is (running "ls -ld ."):

kamloops$ uname -a
NetBSD kamloops 7.99.64 NetBSD 7.99.64 (GENERIC) #26: Thu Mar 2 07:15:26 PST 2017 root@kamloops:/usr/src/sys/arch/amd64/compile/obj/GENERIC amd64
kamloops# dtrace -x nolibs -n ':syscall::open:entry /execname == "ls" / { printf("%s -%s", execname, copyinstr(arg0));}'
dtrace: description ':syscall::open:entry ' matched 1 probe
CPU ID FUNCTION:NAME
0 14 open:entry ls -/etc/ld.so.conf
0 14 open:entry ls -/lib/libutil.so.7
0 14 open:entry ls -/lib/libc.so.12
0 14 open:entry ls -.
0 14 open:entry ls -/etc/nsswitch.conf
0 14 open:entry ls -/lib/nss_compat.so.0
0 14 open:entry ls -/usr/lib/nss_compat.so.0
0 14 open:entry ls -/lib/nss_nis.so.0
0 14 open:entry ls -/usr/lib/nss_nis.so.0
0 14 open:entry ls -/lib/nss_files.so.0
0 14 open:entry ls -/usr/lib/nss_files.so.0
0 14 open:entry ls -/lib/nss_dns.so.0
0 14 open:entry ls -/usr/lib/nss_dns.so.0
0 14 open:entry ls -/etc/pwd.db
0 14 open:entry ls -/etc/group
0 14 open:entry ls -/etc/localtime
0 14 open:entry ls -/usr/share/zoneinfo/posixrules
kamloops# file /etc/pwd.db
/etc/pwd.db: Berkeley DB 1.85 (Hash, version 2, native byte-order)

...All the bluster (some of which is interesting), then at the end walks it back: > So, I do not want to say that UNIX – is a bad system. I’m just drawing your attention to the fact that it has tons of drawbacks, just like other systems do. I also do not cancel the “UNIX philosophy”, just trying to say that it’s not an absolute. Shame about the title... But maybe that's what landed it here on HN (?) Edit: explain the "ls" command actually run. Exactly, and this is the sort of thing that can be done with open source software. It may not even be a lot of code depending on how it is approached. No Mr. Askar Safin. YOU are lucky that I am reading it. I suspect the author of this site has contributed to the Linux Haters Handbook. Some quotes: “Linux printing was designed and implemented by people working to preserve the rainforest by making it utterly impossible to consume paper.” – Athas “ALSA is like the emperor's new clothes. It never works, but people say it’s because you’re a noob.” “Object-oriented programming is an exceptionally bad idea which could only have originated in California.” – Edsger Dijkstra “[firefox] is always doing something, even if it’s just calculating the opportune moment to crash inexplicably” – kfx .... We're trying to use a server OS on a single-user machine, complete with all the management cruft that comes along with a server-based OS.
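Picking up the passwd point from the trace above: programs normally don't parse /etc/passwd by hand at all; they go through the C library's abstracted lookup interface (NSS on Linux, pwd.db on the BSDs), which Python exposes as the stdlib pwd module:

```python
import pwd

# getpwnam() goes through the platform's account-database backend
# (flat file, Berkeley DB, LDAP, ...) -- the caller never sees the
# on-disk format, which is exactly the NSS abstraction argued above.
root = pwd.getpwnam("root")
print(root.pw_uid, root.pw_dir)
```

Swapping the backend (the `passwd:` line in nsswitch.conf on Linux) changes nothing for callers of this interface.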
Consequently, we don't bother re-thinking what the user needs vs. a sysop. Who the hell 'develops' in shell? It's a glue language, not a development language. I've never heard anyone say "We're a shell shop". You must have mastered the bizarreness of bash arrays by the end of that :) Oh! The beauty of censorship by the masses. Now I can see why jwz would choose to take a stab at HN. He may be right, and even too kind. That flag is just sad... If the bottom-up compositional model for computing that largely originated with Unix is fading, this article doesn't go there, or suggest cause. Now, that's an article I'd like to read... Every time there's a post like this it's all about performance!, performance! I like UNIX not because of any technical reason, but because it's easy for a human to use. Linux and UNIX are not meant to be "perfect" from a CS standpoint. They're meant to be easy for people to consume. I can't read a binary format. I can read text. That is the real point of the UNIX philosophy, IMO. Do one thing, do it well. Trying to solve too many problems at once will bite you. Please note that the Google result for "UNIX philosophy" names a GOOGLE EMPLOYEE as the "originator of unix philosophy" without disclosing the conflict of interest. This is not only evil but very stupid. When multibillion dollar companies get involved in altering history, watch out. I'm still to see any complainer grow an adequate amount of facial hair, though. (Incompetently) reinventing some bad idea from before you were born does not cut it, you know? My GPU has 4 times as many transistors as my CPU, and for parallel tasks, it computes stuff 50 times faster. Just too much complexity for a file, even with ioctl. I think that ideology is the main reason for the current state of 3D graphics in nix and bsd based platforms. The Unix philosophy is essentially the opposite of the Apple philosophy.
It gives you flexibility and composability at the cost of simplicity and the overall experience. The optimal solution tends to be somewhere in-between. If you look at Linux, it's actually a monolithic system (which goes against the Unix philosophy); the popularity of Linux is in itself proof that people do want a single cohesive product - If the Unix philosophy was the best approach, we'd all be using Minix by now. I use Ubuntu (Gnome) these days. The only thing I miss from Windows is Windows Explorer. Nautilus just doesn't cut it in my opinion; I always end up browsing the file system with the command line. That said, I still prefer Nautilus over OSX's Finder. Of all the things to complain about in Linux, you choose the one thing that Mac OS and Windows still don't have right, and Linux had pretty good even back then? Compare to Linux, where a piece of software is scattered around /usr/bin, /usr/lib, /usr/share, /usr/doc. (Or /usr/local/*, you never know which) Oh, and those fun times where something depends on a libxxx.so.N but all that's on the system is libxxx.so.N.M.O and libxxx.so.N.M for some reason, so you have to make the symlink yourself. Or the distribution has a minimum of version N+1, so your option is to find the source for the library, figure out all the -devel packages it needs, and compile it up (hopefully), or just symlink libxxx.so.N to libxxx.so.N+1 and hope it works. And then the fun of figuring out what the package is named. pdftotext lives in poppler, who would have thought. Need gcc? That will be build-essential on Ubuntu last time I needed it. (Not build-essentials, either) And, there are some new package managers that isolate in this way (and go well beyond it by containerizing). Flatpak is probably the most promising, IMHO. And, it still provides all the benefits of a good package manager, like verification, authenticity, downloading automatically from a repo, dependency resolution for core libraries.
And, the way Flatpak handles the latter feature is really quite cool (and avoids having to distribute dozens of copies of the same libs). Your description of installing packages on Linux does not match my experience in the past decade. Dependency resolution is a solved problem on Linux, at least on the major distros with good package managers. The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands. I've rarely disagreed with something said on HN so strongly (at least among things that, in the grand scheme of things, really don't matter that much, but they matter a lot to my personal experience). "The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands." This has never been true in the past 12 years. You have to go back even further to find a time when there weren't multiple GUIs for the leading package managers. And, for at least the past decade, the core GUI experience on every major Linux distro has had some sort of "Install Software" user interface that was super easy and provided search and the like. There's lots of things Linux got wrong (and some that it still gets wrong) that Windows or macOS got right. Software installation really just isn't one of them, IMHO. It's the thing I miss most when I have to work on Windows or macOS, and I miss it constantly...like multiple times a day. A good package manager is among the greatest time savers and greatest sources of comfort (am I up to date? do I have this installed already? which version? where are the config files? where are the docs? etc.) when I use any system, particularly one I haven't seen in a while. I just really love a good package manager, and Linux has several. 
Windows and macOS have none (because if the entire OS didn't come from a package manager, it's useless...you can't know what's going on by querying the package manager, if the package manager only installed a tiny percentage of the code on the system). So, even though there's choco on Windows and Homebrew (shudder...) on macOS, they are broken from the get-go because they are, by necessity, their own tiny little part of the system with little awareness or control over the OS itself. Also, if your problem with non-Linux package managers is that they only know about and control their own packages, then you must have the same objection to Nix and Guix, right? What happened to wanting simple tools that do one thing and one thing right? Don't we want package managers to only manage packages, to decouple them as much as possible from the rest of the operating system, and leave system configuration management to other tools? I've blogged about some of my problems with Homebrew. Generally speaking, Homebrew is a triumph of marketing and beautiful web design over technical merits (there are better options for macOS, but none nearly as popular as brew). The blog post:... I get that it's easy and lots of people like it, so I mostly try to hold my tongue, but every once in a while I'll see someone suggest something crazy like using Homebrew on Linux (where there is an embarrassment of good and even great package management options) and it makes me shudder. I'm not saying don't use Homebrew on your macOS system if it makes your life easier. I just would never consider it for a production system of any sort. I'm even kinda mistrustful of it on developer workstations (though there are plenty of similarly scary practices in the node/npm, rubygems, etc. worlds, so that ship has kinda sailed and I am resolved to just watch it all unfold). "What happened to wanting simple tools that do one thing and one thing right?" I still want that. 
Doing one thing right in this case means doing more than what packages on macOS or Windows do. One can argue about the complexity of rpm+yum or dpkg+apt, and it's likely that one could come up with simpler and more reliable implementations today, but if you want them to be more focused, I have to ask which feature(s) you'd remove? Dependency resolution? That one's a really complicated feature; a lot of code, and it's been reimplemented multiple times for rpm (up2date, yum, and now dnf). Surely, we can just leave that out. Or, perhaps the notion of a software repository? Is it really necessary for the package manager to download the software for us? I mean, I have a web browser and wget or curl. Verification of packages and the files they install, do we really need it? Can't we just assume that our request to the website won't be tampered with, and that what we're downloading has been vouched for by a party we trust? I dunno...I'm not really seeing a thing we can get rid of without making Linux as dumb as macOS or Windows. "Don't we want package managers to only manage packages, to decouple them as much as possible from the rest of the operating system, and leave system configuration management to other tools?" This is the strangest question, to me. Why on earth would we want the OS outside of the package manager? Why would we want to only verify packages that aren't part of the core OS? This is why Linux is so vastly superior to Windows and macOS on this one front. I'm having a hard time thinking of why having the package manager completely ignorant of the core OS would be a good thing. What benefit do you believe that would provide? And, NixOS does not meet the description you've given. The OS is built with nix the package manager. Running nix as a standalone package manager on macOS does have the failing you've mentioned, but that's not the fault of nix. 
And, yes, nix is a better option for macOS than brew, but the package selection is much smaller and not as up to date in the general case...so maybe worse is better, in that case. I get a bit ranty about package management. I spend a lot of time working with them (as a packager, software builder, distributor, etc.) and have strong opinions. But, I believe those strong opinions are backed by at least better than average experience. Nowhere near as consistent as installing from a package manager. > The fact that Linux relied on people to install stuff with the command line was a massive oversight. UIs are just way more intuitive than shell commands. Every user-oriented distro has come with a GUI package manager for at least a decade, probably two. It's that there are so many different standards for installing software, overlapping in sometimes conflicting ways. It's when you go to uninstall or upgrade it that you realize what a mess it is. A good example is ArcGIS which on the surface is ridiculously monolithic. But within the toolbox function are several hundred programs that do only one thing and are composable. This approach is also seen in video or image editing workflows where a user works with a particular set of tools. The main difference is that the programs use a type system that is appropriate to the domain rather than just text. The OS only really exposes an interface for working with OS level objects. That sometimes aligns to a workflow but not always. And we should not expect disciplines to align their techniques to OS level objects if that is not a good fit for the actual domain. It's entirely possible to have a microkernel with a Unix-like system; HURD attempts this. Microkernel vs. monolithic is an entirely separate issue.
Mar 09, 2007 07:32 PM|smcoxon|LINK

Hi All, I'm developing a web site using VWD 2005 Express which allows users to upload images using the file upload control. Easy enough, but I want to resize the uploaded images to ensure large file sizes are not saved and to reduce the picture sizes to no more than 300x200 pixels before I save them to disk. I've seen some code examples, mainly C#, but nothing that clearly explains how to do this and nothing using VB code, which is my preference. I'm looking for some code/pointers and ideas on how to achieve this. Can anyone help me out please? Regards Smcoxon

ASP.NET Visual Web Express VWD

Participant 1200 Points Mar 09, 2007 07:52 PM|Ramzi.Aynati|LINK

check this

// Get the path of the original Image
string displayedImg = Server.MapPath("~") + "/Testing.jpg";
// Get the path of the Thumb folder
string displayedImgThumb = Server.MapPath("~") + "/Thumb/";
//);

Hope this helps

Mar 11, 2007 12:32 PM|smcoxon|LINK

Thanks Ramzi, This has pointed me in the right direction. I'm now looking at the Graphics class too, so that I can do more with the original image. Once I have the code working as I want, I'll post it here for anyone else to use. Regards Sean.[8-|]

Mar 11, 2007 11:23 PM|smcoxon|LINK

Here's my VB code as promised. I'm a newbie to VWD and ASP.NET and know how hard it can be to find useful code. My code might not be elegant but it works well for me! Hope it helps someone else.

Imports System.IO
Imports System.Drawing

Partial Class _Default
    Inherits System.Web.UI.Page

    Protected Sub btnUpload_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnUpload.Click
        Const bmpW = 300 'New image canvas width
        Const bmpH = 226 'New image canvas height
        If (FileUpload1.HasFile) Then
            lblError.Text = ""
            'Check to make sure the file to upload has a picture file format extension
            If (CheckFileType(FileUpload1.FileName)) Then
                Dim newWidth As Integer = bmpW
                Dim newHeight As Integer = bmpH
                'Use the uploaded filename without the '.' extension
                Dim upName As String = Mid(FileUpload1.FileName, 1, (InStr(FileUpload1.FileName, ".") - 1))
                Dim filePath As String = "~/Upload/" & upName & ".png"
                'Create a new Bitmap using the uploaded picture as a Stream
                'Set the new bitmap resolution to 72 pixels per inch
                Dim upBmp As Bitmap = Bitmap.FromStream(FileUpload1.PostedFile.InputStream)
                Dim newBmp As Bitmap = New Bitmap(newWidth, newHeight, Imaging.PixelFormat.Format24bppRgb)
                newBmp.SetResolution(72, 72)
                'Get the uploaded image width and height
                Dim upWidth As Integer = upBmp.Width
                Dim upHeight As Integer = upBmp.Height
                Dim newX As Integer = 0
                Dim newY As Integer = 0
                Dim reDuce As Decimal
                'Keep the aspect ratio of the image the same if not 4:3 and work out the newX and newY positions
                'to ensure the image is always in the centre of the canvas vertically and horizontally
                If upWidth > upHeight Then 'Landscape picture
                    reDuce = newWidth / upWidth 'calculate the width percentage reduction as decimal
                    newHeight = Int(upHeight * reDuce) 'reduce the uploaded image height by the reduce amount
                    newY = Int((bmpH - newHeight) / 2) 'Position the image centrally down the canvas
                    newX = 0 'Picture will be full width
                ElseIf upWidth < upHeight Then 'Portrait picture
                    reDuce = newHeight / upHeight 'calculate the height percentage reduction as decimal
                    newWidth = Int(upWidth * reDuce) 'reduce the uploaded image width by the reduce amount
                    newX = Int((bmpW - newWidth) / 2) 'Position the image centrally across the canvas
                    newY = 0 'Picture will be full height
                ElseIf upWidth = upHeight Then 'Square picture
                    reDuce = newHeight / upHeight 'calculate the height percentage reduction as decimal
                    newWidth = Int(upWidth * reDuce) 'reduce the uploaded image width by the reduce amount
                    newX = Int((bmpW - newWidth) / 2) 'Position the image centrally across the canvas
                    newY = Int((bmpH - newHeight) / 2) 'Position the image centrally down the canvas
                End If
                Dim newGraphic As Graphics = Graphics.FromImage(newBmp)
                Try
                    newGraphic.Clear(Color.White)
                    newGraphic.SmoothingMode = Drawing2D.SmoothingMode.AntiAlias
                    newGraphic.InterpolationMode = Drawing2D.InterpolationMode.HighQualityBicubic
                    newGraphic.DrawImage(upBmp, newX, newY, newWidth, newHeight)
                    newBmp.Save(MapPath(filePath), Imaging.ImageFormat.Png)
                    'Show the uploaded resized picture in the image control
                    Image1.ImageUrl = filePath
                    Image1.Visible = True
                Catch ex As Exception
                    lblError.Text = ex.ToString
                Finally
                    upBmp.Dispose()
                    newBmp.Dispose()
                    newGraphic.Dispose()
                End Try
            Else
                lblError.Text = "Please select a picture with a file format extension of either Bmp, Jpg, Jpeg, Gif or Png."
            End If
        End If
    End Sub

    Function CheckFileType(ByVal fileName As String) As Boolean
        Dim ext As String = Path.GetExtension(fileName)
        Select Case ext.ToLower()
            Case ".gif"
                Return True
            Case ".png"
                Return True
            Case ".jpg"
                Return True
            Case ".jpeg"
                Return True
            Case ".bmp"
                Return True
            Case Else
                Return False
        End Select
    End Function
End Class

Jan 07, 2008 02:17 PM|Know24|LINK

Hi smcoxon, Thanks for posting your code. I've developed/extended a C# method for handling aspect ratios, however I am interested in the interpolation and smoothing your code takes advantage of. After implementing the following lines from your code:

newGraphic.SmoothingMode = Drawing2D.SmoothingMode.AntiAlias
newGraphic.InterpolationMode = Drawing2D.InterpolationMode.HighQualityBicubic

I don't see a difference in image quality for JPG and PNG images. Though simply choosing PNG output type produces notably better image quality (and a larger file size). My question to you is if you are seeing a definite change in quality when resizing with image interpolation and smoothing?

ASP.NET Image Antialias

Jan 09, 2008 12:04 PM|smcoxon|LINK

Hi, I must confess I am not an expert on graphic images. I'm not surprised that you are not experiencing any difference with these two lines of code though.
AntiAlias is used in graphics applications for the smoothing of text and lines (diagonals and circles, for example) and to help reduce wavy lines, bands or moiré patterns in images. HighQualityBicubic is an algorithm used when scaling images (shrink or stretch) to maintain reasonable image quality with respect to the original image. So depending on your original images, these lines of code may or may not have an effect on the resized image quality. I leave them in just in case. Take a look at Hope this helps answer your questions.

Dec 26, 2008 12:37 PM|smcoxon|LINK
No problem, glad it helped you. Seems my piece of code has been extensively used by many people on this forum. Over 126,000 visits to my posts on this subject here:

Member 151 Points Feb 25, 2009 10:34 PM|smcoxon|LINK
[cool] Glad it helps and thanks for showing your appreciation. smcoxon

Jul 28, 2009 06:53 PM|smcoxon|LINK
You're welcome. I'd love to know how many people are using this piece of code — with over 150,000 views it must be quite a few!

None 0 Points Sep 20, 2009 04:27 PM|Premy|LINK
Boy, was I glad I came across this post while I was looking for a way to reduce images before upload, but... While the code works fine, I found out that the reduced image is actually far larger in size than the original. Files that were originally around 70 KB become around 170 KB after "reduction". I also tried saving as .jpg but to no avail. Kind of beats the whole purpose, I think. It's a pity, and I still have to find a way to reduce images.

Sep 21, 2009 12:14 PM|smcoxon|LINK?

Sep 23, 2009 03:02 PM|Premy|LINK
smcoxon? Well, like I said, I just reduced the files, maintained the 72 ppi resolution, everything.
Not being much experienced with graphics manipulation, I didn't bother too much finding out what's wrong, especially after I found this other routine elsewhere on this forum (sorry, lost the thread ref), which does exactly what I want:

Dim dblCoef As Double = MaxSideSize / CDbl(intMaxSide)
intNewWidth = Convert.ToInt32(dblCoef * intOldWidth)
intNewHeight = Convert.ToInt32(dblCoef * intOldHeight)
'...
bmpResized.Save(MapPath(ImageSavePath), fmtImageFormat)
'release used resources
imgInput.Dispose()
bmpResized.Dispose()
Buffer.Close()
End Sub

This one reduced the same 70 KB image to 9 KB. Thanks anyway for your attention. Regards, Premy

Oct 08, 2009 11:19 AM|si.sharma|LINK
Hi... You can use the following code:

try
{
    FileUpload1.SaveAs(Server.MapPath("~/CLogos/") + FileUpload1.FileName);
    // Get the path of the original image
    string displayedImg = Server.MapPath("~/CLogos/") + FileUpload1.FileName;
    // Get the path of the Thumb folder
    string displayedImgThumb = Server.MapPath("~/CLogos/Thumbnail/");
    //);
}
catch { }

Mark as answer if useful.. Thanks Kinjal Sharma Infoway

Oct 27, 2009 08:55 PM|smcoxon|LINK
Hi, Thumbnails are exactly what they say they are... thumbnails: smaller versions of the original with lower quality. With my example code you have the option of controlling the pixels per inch, colour depth etc. If all you want is small 'thumbnails' then using GetThumbnailImage is fine.
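The coefficient routine quoted above boils down to one division and two multiplications: divide the maximum allowed side by the image's longest side, then scale both dimensions by that coefficient. A minimal sketch of the same arithmetic, written in Java with made-up dimensions (the math is identical in the VB.NET and C# code in this thread):

```java
public class AspectRatioDemo {

    // Scale (width x height) so the longest side becomes maxSide while
    // preserving the aspect ratio - the same arithmetic as the dblCoef
    // snippet above, just written out on its own.
    static int[] scaleToFit(int width, int height, int maxSide) {
        double coef = (double) maxSide / Math.max(width, height);
        return new int[] { (int) Math.round(coef * width),
                           (int) Math.round(coef * height) };
    }

    public static void main(String[] args) {
        int[] landscape = scaleToFit(1600, 1200, 400);
        int[] portrait  = scaleToFit(600, 800, 400);
        System.out.println(landscape[0] + "x" + landscape[1]); // 400x300
        System.out.println(portrait[0] + "x" + portrait[1]);   // 300x400
    }
}
```

Note that the coefficient must be computed in floating point; the casting back to int happens only once, at the end.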
Member 151 Points Jan 09, 2010 09:42 AM|talktoanil|LINK

Int16 bmpW = 300;
Int16 bmpH = 226;
if (FileUpload1.HasFile)
{
    lblError.Text = "";
    Int16 newWidth = bmpW;
    Int16 newHeight = bmpH;
    string upName = FileUpload1.FileName;
    string filepath = "~/images/" + upName + ".png";
    Bitmap upBMP = (Bitmap)Bitmap.FromStream(FileUpload1.PostedFile.InputStream);
    Bitmap newBMP = new Bitmap(newWidth, newHeight, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    newBMP.SetResolution(72, 72);
    Int16 UpWidth = (Int16)upBMP.Width;
    Int16 UpHeight = (Int16)upBMP.Height;
    Int16 newX = 0;
    Int16 newY = 0;
    Decimal Reduce;
    if (UpWidth > UpHeight) // landscape
    {
        Reduce = newWidth / UpWidth;
        newHeight = (Int16)(UpHeight * Reduce);
        newY = (Int16)((bmpH - newHeight) / 2);
        newX = 0;
    }
    else if (UpWidth < UpHeight) // portrait
    {
        Reduce = newWidth / UpHeight;
        newWidth = (Int16)(UpWidth * Reduce);
        newX = (Int16)((bmpW - newWidth) / 2);
        newY = 0;
    }
    else if (UpWidth == UpHeight) // square pic
    {
        Reduce = newHeight / UpHeight;
        newWidth = (Int16)(UpWidth * Reduce);
        newX = (Int16)((bmpW - newWidth) / 2);
        newY = (Int16)((bmpH - newHeight) / 2);
    }
    Graphics newGraphics = Graphics.FromImage(newBMP);
    newGraphics.Clear(Color.White);
    newGraphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    newGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
    newGraphics.DrawImage(upBMP, (float)newX, (float)newY, newWidth, newHeight);
    newBMP.Save(MapPath(filepath), System.Drawing.Imaging.ImageFormat.Png);
    Image1.ImageUrl = filepath;
    Image1.Visible = true;
}

I have copied your entire code and it's working in my sample web page using VB.NET, but when I converted it into C# all I get is an image with white colour in it, not the result the VB.NET code produced. I spent nearly 2 hours looking for the issue but can't figure it out — the image generated is just a white block. Check the code I used of yours in C# above. Please help me.
Create a new ASP.NET project with C# as the language and use the code below.

Jan 09, 2010 01:30 PM|smcoxon|LINK
Hi, Yes, that does happen when it's converted to C#. Basically the problem in the C# code is with the data types used for the Reduce, UpWidth and UpHeight variables, resulting in a Reduce variable value of 0 most of the time. Changing the data type to Double for these variables solves the problem. There is another post with lots of threads and comments on my code here too:

Here's the working C# code:

using System;
using System.IO;
using System.Drawing;

public partial class _Default : System.Web.UI.Page
{
    public bool CheckFileType(string fileName)
    {
        string ext = Path.GetExtension(fileName);
        switch (ext.ToLower())
        {
            case ".gif": return true;
            case ".png": return true;
            case ".jpg": return true;
            case ".jpeg": return true;
            case ".bmp": return true;
            default: return false;
        }
    }

    protected void btnUpload_Click(object sender, EventArgs e)
    {
        const int bmpW = 300; //New image target width
        const int bmpH = 225; //New image target height
        if ((FileUpload1.HasFile))
        {
            //Clear the error label text
            lblError.Text = "";
            //Check to make sure the file to upload has a picture file format extension and set the target width and height
            if ((CheckFileType(FileUpload1.FileName)))
            {
                Int32 newWidth = bmpW;
                Int32 newHeight = bmpH;
                //Use the uploaded filename for saving without the '.' extension
                String upName = FileUpload1.FileName.Substring(0, FileUpload1.FileName.IndexOf("."));
                //Set the save path of the resized image, you will need this directory already created in your web site
                string filePath = "~/Upload/" + upName + ".jpg";
                //Create a new Bitmap using the uploaded picture as a Stream
                //Set the new bitmap resolution to 72 pixels per inch
                Bitmap upBmp = (Bitmap)Bitmap.FromStream(FileUpload1.PostedFile.InputStream);
                Bitmap newBmp = new Bitmap(newWidth, newHeight, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
                newBmp.SetResolution(72, 72);
                //Get the uploaded image width and height
                Double upWidth = upBmp.Width;
                Double upHeight = upBmp.Height;
                //Set the new top left drawing position on the image canvas
                int newX = 0;
                int newY = 0;
                Double reDuce;
                //Keep the aspect ratio of the image the same if not 4:3 and work out the newX and newY positions
                //to ensure the image is always in the centre of the canvas vertically and horizontally
                if (upWidth > upHeight)
                {
                    //Landscape picture
                    reDuce = newWidth / upWidth; //calculate the width percentage reduction as decimal
                    newHeight = ((Int32)(upHeight * reDuce)); //reduce the uploaded image height by the reduce amount
                    newY = ((Int32)((bmpH - newHeight) / 2)); //Position the image centrally down the canvas
                    newX = 0; //Picture will be full width
                }
                else if (upWidth < upHeight)
                {
                    //Portrait picture
                    reDuce = newHeight / upHeight; //calculate the height percentage reduction as decimal
                    newWidth = ((Int32)(upWidth * reDuce)); //reduce the uploaded image width by the reduce amount
                    newX = ((Int32)((bmpW - newWidth) / 2)); //Position the image centrally across the canvas
                    newY = 0; //Picture will be full height
                }
                else if (upWidth == upHeight)
                {
                    //Square picture
                    reDuce = newHeight / upHeight; //calculate the height percentage reduction as decimal
                    newWidth = ((Int32)(upWidth * reDuce)); //reduce the uploaded image width by the reduce amount
                    newX = ((Int32)((bmpW - newWidth) / 2)); //Position the image centrally across the canvas
                    newY = ((Int32)((bmpH - newHeight) / 2)); //Position the image centrally down the canvas
                }
                Graphics newGraphic = Graphics.FromImage(newBmp);
                try
                {
                    newGraphic.Clear(Color.White);
                    newGraphic.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
                    newGraphic.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                    newGraphic.DrawImage(upBmp, newX, newY, newWidth, newHeight);
                    newBmp.Save(MapPath(filePath), System.Drawing.Imaging.ImageFormat.Jpeg);
                    //Show the uploaded resized picture in the image control
                    Image1.ImageUrl = filePath;
                    Image1.Visible = true;
                }
                catch (Exception ex)
                {
                    string newError = ex.Message;
                    lblError.Text = newError;
                }
                finally
                {
                    upBmp.Dispose();
                    newBmp.Dispose();
                    newGraphic.Dispose();
                }
            }
            else
            {
                lblError.Text = "Please select a picture with a file format extension of either Bmp, Jpg, Jpeg, Gif or Png.";
            }
        }
    }
}

Member 151 Points Jan 14, 2010 05:31 AM|Vipindas|LINK
public static(toStream, image.RawFormat);
thumbnailGraph.Dispose();
thumbnailBitmap.Dispose();
image.Dispose();
}

None 0 Points Jun 11, 2010 08:15 PM|smcoxon|LINK
Hi, I'm not sure I understand your problem, but as in the C# version, try changing the data types for the Reduce, UpWidth and UpHeight variables to Double.

Jun 15, 2010 12:39 AM|GDB|LINK
Thanks smcoxon, very nice example. I would like to clarify one point.
Your comment on line 169: //Save the new bitmap image using 'Png' picture format and the calculated canvas positioning
Your code on line 184: newBmp.Save(MapPath(filePath), System.Drawing.Imaging.ImageFormat.Jpeg);
Should it not be "System.Drawing.Imaging.ImageFormat.Png" or am I missing something?

Jun 15, 2010 01:07 AM|smcoxon|LINK
Hi, You can save the image in .Png, .Jpeg and other formats. What you have noticed is that my comment in the code doesn't match up with the image file type being used in the code at line 184. I probably changed the code to save the image as a Jpeg at some time in the past but didn't change the comment to match.
As comments are just comments, they don't affect the code at run time. Use and change the code and comments to suit your needs.

Jun 15, 2010 01:17 AM|luappy13|LINK
I think his 'compression problem' comes from not applying compression. I'm no expert but I have had this issue, and I think it's because the GDI object you are using is a Bitmap (uncompressed) and no JPEG encoding compression is applied after the GDI creation of the thumb (I just scanned your code, so please forgive me if I missed it)... anyway, I use a couple of methods for this and it applies JPEG compression etc.

private static Bitmap CreateThumbnail(string lcFilename, int lnWidth, int lnHeight)
{; }

and the compression/codec grabbing methods:

public static string CreateImageThumbnail(string filename, long compression)
{
    Bitmap bmp = CreateThumbnail(HttpContext.Current.Server.MapPath(Constants.UploadPath + filename), Constants.thumbDim, Constants.thumbDim);
    try
    {
        //Parameter object to compress the image
        //this automatically converts to jpeg of n compression
        ImageCodecInfo codec = GetEncoder(ImageFormat.Jpeg);
        Encoder enc = Encoder.Quality;
        EncoderParameters prms = new EncoderParameters(1);
        prms.Param[0] = new EncoderParameter(enc, compression);
        bmp.Save(HttpContext.Current.Server.MapPath(Constants.UploadPath + Constants.thumbPrefix + filename), codec, prms);
        return Constants.thumbPrefix + filename;
    }
    catch (Exception ex)
    {
        bmp.Dispose();
        return filename;
    }
}

private static ImageCodecInfo GetEncoder(ImageFormat format)
{
    ImageCodecInfo[] codecs = ImageCodecInfo.GetImageDecoders();
    foreach (ImageCodecInfo codec in codecs)
    {
        if (codec.FormatID == format.Guid)
        {
            return codec;
        }
    }
    return null;
}

Jun 15, 2010 03:50 AM|GDB|LINK
I wasn't picking nits with your code (or comments), smcoxon, just wanted to confirm that

newBmp.Save(MapPath(filePath), System.Drawing.Imaging.ImageFormat.<Png | Jpeg | Gif>);

would all work; i.e. if I start with an uploaded (larger) jpg and I resize it, I can then save it as a Png or vice versa.
Makes sense given that we have a bitmap as an intermediate format, but I have always done a "same format" resize - force of habit. Thanks again for your well documented example.

Jun 15, 2010 09:34 AM|smcoxon|LINK
Hi, I didn't think you were nit-picking, sorry if my response gave you that impression. Yes, you can save in Png | Jpeg | Gif image formats. Just test it out first as you may get some unexpected results, e.g. GIF is not very good for photographic images. Cheers!

Jun 18, 2010 02:44 PM|mildition|LINK
GDB: "I can provide the ADO.NET code but there are simpler ways of doing it. I'll wait to see if someone (smarter than me no doubt) answers your question and if not I'll post my code."
Could you provide your solution until someone finds an easier way? I'm trying to save the resized image to the database using the filepath but it doesn't work :(

FileStream fs = new FileStream(filepath, FileMode.Open, FileAccess.Read);
byte[] data = new byte[fs.Length];
fs.Read(data, 0, System.Convert.ToInt32(fs.Length));
fs.Close();

Jun 18, 2010 02:49 PM|GDB|LINK
This may raise more questions than it answers, but… I can't provide all the code you need to run this, so hopefully you will be able to extrapolate from what I can provide.
There is, to the greatest degree possible, a separation of concerns. The structure is as follows:

The presentation layer passes the uploaded image to a class that handles the resizing:

byte[] profileImage =
public static Byte[] ScaledImageByteSmall(HttpPostedFile uploadedImage, int maxHeight, int maxWidth, string fileExt);

The byte array is then passed to the ADO.NET data access layer:

bool profileUpdated = UserProfile.UpdateUserPhoto(fileType, profileImage);

/// <summary>
/// Updates a user profile photo
/// This is probably specific to SQL Server - haven't researched it
/// </summary>
public static bool UpdateUserPhoto(string photoFileType, Byte[] photo)
{
    // get a configured DbCommand object
    DbCommand comm = GenericDataAccess.CreateCommand();
    // set the stored procedure name
    comm.CommandText = "user_Profile_Photo_Update";
    // create a new parameter
    DbParameter param = comm.CreateParameter();
    param.ParameterName = "@UserId";
    param.Value = UserId;
    param.DbType = DbType.String;
    param.Size = 36;
    comm.Parameters.Add(param);
    // create a new parameter
    param = comm.CreateParameter();
    param.ParameterName = "@PhotoFileType";
    param.Value = photoFileType;
    param.DbType = DbType.String;
    param.Size = 10;
    comm.Parameters.Add(param);
    // create a new parameter
    param = comm.CreateParameter();
    param.ParameterName = "@Photo";
    param.Value = photo;
    // We don't set a DbType for SQL Server
    comm.Parameters.Add(param);
    // Execute the sproc
    int result = -1;
    try
    {
        // execute the stored procedure
        result = GenericDataAccess.ExecuteNonQuery(comm);
    }
    catch
    {
        // The GenericDataAccess class handles exceptions and closes the database connection
    }
    // result will be 1 in case of success
    return (result != -1);
}

On the database side the sproc handles the update:

ALTER PROCEDURE [dbo].[user_Profile_Photo_Update]
(
    @UserId UNIQUEIDENTIFIER,
    @PhotoFileType VARCHAR(10),
    @Photo IMAGE
)
AS
DECLARE @LastUpdatedDate DATETIME
SELECT @LastUpdatedDate = GetDate()
BEGIN
    UPDATE dbo.user_Profile
    SET PhotoFileType = @PhotoFileType,
        Photo = @Photo,
        LastUpdatedDate = @LastUpdatedDate
    WHERE UserId = @UserId
END

Jul 02, 2010 04:19 PM|smcoxon|LINK
Hi, The first thing you need to decide is whether to store the images in a SQL database or simply save them to a folder on your web server. Both have their own merits. Saving to a folder is simple to do, and if you are using a hosted service it doesn't use up lots of your valuable (costly) SQL Server storage space. All you need to store in the database table is the URL of the saved image (something like "~/uploads/images/myimage.jpg") in a varchar field. It may also be quicker to retrieve and display your image from the web server folder rather than having to make a read request to the SQL Server. On the other hand, saving your image in the database ensures all your website data is held in one place - on the SQL Server. Also, when you save/back up your database you will also save/back up all the associated images. There's less risk of someone overwriting or deleting your saved images as they are in the database, not in a disk folder. And it may be easier to maintain, depending on your website and SQL data structure. In the end the choice is yours. There are lots of tutorials, blogs etc. on how to save images in a SQL database. Take a look at this post to find a number of suggested links:

Mar 15, 2011 04:54 PM|mmkarim|LINK
Thanks for the code. I converted the code to C# and wanted to post it, but I noticed someone already did. I was getting the same issue with the white background picture, until I realized that integer division of height and width does not convert to decimal by default. In C#, you can either change upHeight, upWidth and reDuce to the double type, or you can just use the "Decimal.Divide(int, int)" method to divide integers and get a decimal output. TIL .NET has a Graphics library. Thanks, again.

Mar 16, 2011 11:55 AM|smcoxon|LINK
Great, glad you found the code useful.
smcoxon

Member 2 Points Aug 19, 2011 01:57 AM|TonyLoco23|LINK
smcoxon: "Here's my VB code as promised"
Smcoxon, your code worked great, but I had to change one thing: I had to replace "newX" and "newY" with the hard-coded value 0. Otherwise I would get a thin white line along the bottom and left of my resized images. I spent ages puzzling over the purpose of newX and newY; I never worked it out, and when I replaced them with 0s the white lines disappeared. Also, I am using your code to resize all kinds of image types (jpg, gif, etc.), and if the user uploads a jpg I leave it with a jpg extension, even though it uses "Imaging.ImageFormat.Png". Is that OK or is that wrong?

Aug 23, 2011 11:00 PM|smcoxon|LINK
Hi bozzo, You can see a C# version of my code further back in this post on page 3.

Aug 24, 2011 12:23 PM|smcoxon|LINK
Hi TonyLoco23, If you look at my code, the first thing set up is the canvas size - the size you want the resized image to be. This is usually a landscape orientation with a 4:3 (width:height) aspect ratio. When the original image is reduced, depending on its original dimensions, it may or may not be an exact 4:3 ratio. Therefore, to position the resized image centrally (horizontally and vertically) on the canvas, the newX and newY values are used as coordinates for the new top-left position of the resized image on the canvas. This is done because you never know what size and aspect ratio the original image will be. The white lines you see appear because the canvas colour is set to white and the resized image is not always exactly the base canvas size, especially if the original image's aspect ratio is to be maintained, as it is in my code to eliminate image distortion, or if a portrait image (3:4) is uploaded, resized and drawn on to the 4:3 canvas. An alternative approach would be to resize the base canvas to match the exact dimensions of the resized image for every resized image.
The issue with this is that you will end up with images of different dimensions, which makes your aspx page layout more of an issue when displaying the resized images. You can also change the canvas colour if you want by changing the following line of code to the colour you want:

newGraphic.Clear(Color.White)

For the image types, I'm no graphics expert, but I'd convert and save with the correct image type and file extension. Since posting my original code I convert and save all my images as Jpg. To do this, change the line of code for the save filePath and the code that saves the image as follows:

Dim filePath As String = "~/Upload/" & upName & ".jpg"
newBmp.Save(MapPath(filePath), Imaging.ImageFormat.Jpeg)

Aug 27, 2011 02:14 PM|janwane|LINK
Yours is the closest I've gotten to finding something that works. I have the code in place in my program with the necessary name changes (destination folder and FileUpload object). Stepped through it in debug with great expectations, only to find it errored at this line:

newBmp.Save(MapPath(filePath), System.Drawing.Imaging.ImageFormat.Jpeg);

with this message: "A generic error occurred in GDI+". It was a 2 MB jpg file, nothing weird. Please help, as I have no idea what to do to fix this. Thanks!

Aug 27, 2011 03:28 PM|smcoxon|LINK
The code looks correct. The error you have could be due to not having write permissions on the directory where you are trying to save the resized image file, or maybe you are saving the resized image with the same name as the original file into the same directory? Take a look at the following blogs and forum post:

Aug 27, 2011 04:26 PM|janwane|LINK
Neither is the case because (a) it's the same upload folder the application currently uploads successfully to, and (b) I have code before the "save" that checks whether the file name exists.
csPath = Server.MapPath("AcftInspPhotos"); // (done in Page_Load)
DirectoryInfo dirInfoEx = new DirectoryInfo(@csPath);
System.IO.FileInfo[] fileNamesEx = dirInfoEx.GetFiles(FileUpload_R1.FileName);
if (fileNamesEx.GetLength(0) == 0)
{
    // (clipped)
}

Again, I emphasize this is a working program that already uploads aircraft discrepancy photos that were resized en masse prior to the upload. Now my boss wants the user to skip the resize step and be able to select the full-size photo and have my program resize it as it uploads. So I've just inserted your code into my program. Folder permissions were already in place. I suspected it is the way the path is being used, and I've tried changing this line

newBmp.Save(MapPath(filePath), System.Drawing.Imaging.ImageFormat.Jpeg);

to this

newBmp.Save(@csPath, System.Drawing.Imaging.ImageFormat.Jpeg);

However, nothing I've tried so far solves this problem. I know it's Saturday, but if a lightbulb goes off in your head, please let me know. I will go slowly through the blogs you cited and keep trying stuff. Thanks!

Aug 27, 2011 04:48 PM|janwane|LINK
I FIGURED IT OUT!! I had to change this

string filePath = "~/AcftInspPhotos/" + upName + ".jpg";

to this

string filePath = "~\\HR\\AcftInspPhotos\\" + upName + ".jpg";

Even though both the running program AND the upload folder are already under the "HR" folder!

Aug 27, 2011 05:45 PM|smcoxon|LINK
Excellent, glad you got it working for your website.

46 replies Last post Aug 27, 2011 05:45 PM by smcoxon
http://forums.asp.net/p/1085119/1613833.aspx?Resize+Images+on+upload
React-Redux Part Three

So far in this small series on our simple to-do app, we have successfully implemented Redux in our React application, fetching data with Axios from our Rails API backend. We have also been able to create new tasks to add to our list. We will now move on to deleting any task that we don't want in our list — or, if the user is done with a task, they can just delete it too.

Let's begin with our reducers. Go to the src/reducers/listReducers.js file and add a case for deleting a task. The following will be our entire switch case:

export default (state = [], action) => {
  switch (action.type) {
    case 'FETCH_LIST':
      return [...state, ...action.payload]
    case 'CREATE_TASK':
      return [...state, action.payload]
    case 'DELETE_TASK':
      return state.filter(task => task.id !== action.payload)
    default:
      return state
  }
}

In the case that the action type is DELETE_TASK, we filter through the state and return all tasks whose id is not equal to the action payload, which will eventually be the id of the task we want to delete. We use the filter method here because, when using Redux, we don't want to mutate our data, and filter does not mutate the state array.

OK great! Let's move on to our actions. Go to the src/actions/index.js file. We will create an action using Axios to delete a specific task from our list. We will allow this action to accept an argument of the id. So we will have the following:

export const deleteTask = (id) => {
  return async (dispatch) => {
    await api.delete(`/lists/${id}`)
    dispatch({ type: 'DELETE_TASK', payload: id })
  }
}

And let's finally put it all together and make a button, so when the user clicks that button it deletes the task. In our src/components/ShowList.js file we will import the deleteTask action that we just created and use it in our button's onClick handler.
We will have the following in our ShowList file:

import React from 'react';
import { fetchList, deleteTask } from '../actions/index'
import { connect } from 'react-redux'

class ShowList extends React.Component {
  componentDidMount() {
    this.props.fetchList()
  }

  renderList = () => {
    return this.props.list.map(l => {
      return (
        <div key={l.id}>
          <button onClick={() => this.props.deleteTask(l.id)}>X</button> {l.task}
        </div>
      )
    })
  }

  render() {
    return (
      <>
        {this.renderList()}
      </>
    )
  }
}

const mapStateToProps = (state) => {
  return { list: Object.values(state.list) }
}

export default connect(mapStateToProps, { fetchList, deleteTask })(ShowList);

Awesome, we completed our delete function! We should have something that looks like this:
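Because a reducer is a pure function, the DELETE_TASK branch can also be sanity-checked in isolation with plain Node — a quick sketch (the task objects below are made up for illustration):

```javascript
// Same reducer as above, inlined so the check is self-contained
const listReducer = (state = [], action) => {
  switch (action.type) {
    case 'FETCH_LIST':
      return [...state, ...action.payload]
    case 'CREATE_TASK':
      return [...state, action.payload]
    case 'DELETE_TASK':
      return state.filter(task => task.id !== action.payload)
    default:
      return state
  }
}

const before = [{ id: 1, task: 'walk dog' }, { id: 2, task: 'buy milk' }]
const after = listReducer(before, { type: 'DELETE_TASK', payload: 1 })

console.log(after)          // [ { id: 2, task: 'buy milk' } ]
console.log(before.length)  // 2 - the original array was not mutated
```

Note that `before` still has both tasks afterwards: filter builds a new array, which is exactly the non-mutating behavior Redux expects.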
https://dianaponce2341.medium.com/react-redux-part-three-b601bef98ebb?source=post_internal_links---------3----------------------------
Function function

There is a saying that mathematics is the language of the universe, and the main reason should be the function. Countless things in the world can be expressed by a function. In Python, a function can be understood as packaging a series of operations. When you call the function, the program will execute all the statements in the function.

Classification of functions

When it comes to functions, you should think of the familiar f(x). Functions in programming languages are not much different from functions in mathematics. In a programming language, functions can be divided into functions with a return value and functions without a return value, as well as functions with parameters and functions without parameters. When we use a function, there are two parts: the definition of the function and the call of the function.

Functions without parameters

In Python, we use the def statement to tell the computer that all the indented contents below belong to the function we are defining.

1. No return value

def hello():
    print("hello this is python")

This is the definition of a function. When the program runs, the content in the function definition is not executed. It is executed only when the function is called.

hello()  # Function call, outputs: hello this is python

2. Return value

Since there is little difference between having a return value and not having one, I only describe the function with a return value here. If return appears in a function, the function has a return value. If you don't know about return, you can take a look at the "Keywords of functions" section at the back of the article.

def return10():
    return 10

a = return10()  # call
print(a)  # 10

As code grows longer, it becomes harder to see what type a function returns, so you can annotate the return type after the function name:

# The return value is None
def fun() -> None:
    return

Functions with parameters

When learning functions with parameters, first understand global and local variables.

1. Global and local variables

Global variables are variables defined at the global level, that is, outside all functions. They are valid at any position. A local variable is a variable defined inside a function or loop body. It is only valid there and is invalid outside the scope where it is defined. (A nested function inside a function can also have its own local variables.)

Therefore, variables outside the function cannot be modified directly inside the function:

x = 100

def fun():
    x = 200
    print(x)

print(x)  # 100
fun()     # 200
print(x)  # 100

But sometimes we do want to modify it. What should we do? We can use the global keyword to bind the name inside the function to the global variable:

x = 100

def fun():
    global x
    x = 200
    print(x)

print(x)  # 100
fun()     # 200
print(x)  # 200

At this time, the variable can be modified. Similarly, you can use nonlocal to bind a name to a variable in an enclosing function's scope. nonlocal is used relatively rarely, so it is not covered in detail here; interested students can try it. Note: nonlocal is written before the variable name inside the inner function and looks only at enclosing function scopes, not the global scope.

2. Parameters of a function

First, understand what a parameter is. A parameter is a special variable that appears in the parentheses of the function definition. A variable defined in a function coexists with the function call — that is, it only exists while the function is running. This sentence is very important for understanding recursion and other topics later.
def return10(): return 10 a = return10() # call print(a) # 10 After the contemporary code size becomes longer, it will be more troublesome to understand the variable type returned by a function, so you can "write" the type after the function name # The return value is null def fun() -> None: return Parametric function When learning parametric functions, first understand the effective global and local variables. 1. Global and local variables Global variables refer to variables defined in the global, that is, variables defined outside all functions. They are global and valid at any position. A local variable is a variable defined internally in a function or loop body. It is only valid internally, and it is invalid beyond the defined range. (local variables can also be local variables in a local, that is, nested functions in functions can also have local variables) Therefore, variables outside the function cannot be modified directly inside the function x = 100 def fun(): x = 200 print(x) print(x) # 100 fun() # 200 print(x) # 100 But sometimes I just want to modify it. What should I do? We can use the global keyword to raise local variables to global variables. x = 100 def fun(): global x x = 200 print(x) print(x) # 100 fun() # 200 print(x) # 200 At this time, the variable can be modified. Similarly, you can use nonlocal to convert external variables into local variables. The use of nonlocal is relatively small, so it is not written in detail here. Interested students can try it. Note: nonlocal is written in front of the internal variable and can only rise one level. 2. Parameters of function First, understand what a parameter is. A parameter is a special variable. It is in the parentheses in the function definition. The variable defined in the function will co-exist with the function, that is, the variable only exists in the currently defined function. This sentence is very important to understand the subsequent recursion and other knowledge. 
def add(number1, number2): return number1 + number2 def print_x(x): print(x) Since it is a parametric function, you must pass in parameters to them when calling them. num1 = 10 num2 = 11 num3 = add(num1, num2) print(num3) # 21 print_x(num3) # 21 3. Required and default parameters Required parameters: when calling a function, the parameters that must be passed in are required parameters. Default parameter: when calling a function, the parameters that can not be passed in are the default parameters. The default parameter will be a default value when it is not passed in. def add(number1, number2=0): return number1 + number2 In this function, number1 is a required parameter, which must be passed in, and number2 is the default parameter, which can not be passed in. The parameters are also defined in order: No parameter = mandatory parameter (defined) = default parameter (defined) Otherwise, an error will be reported during operation. 4. Variable length parameters Sometimes we want to pass in some parameters, and we don't know their number. It's difficult to define the function with the above method, so we introduce indefinite length parameters. Note that there can only be one variable length parameter of the same type at most, and the variable length parameter is not a required parameter. 4.1 tuple indefinite length Tuples of variable length can pass no value or a series of values, represented by a *. def fun(*args): print(args) fun() # No value transfer fun('asv', 'a', 87, 2, 4, 3, 4, {"af": "asdf"}, ('bv', 'a'), {4, 62, 74, 1}) # Store incoming data as tuples 4.2 dictionary variable length The indefinite length of the dictionary has restrictions on the incoming form: it must be passed in the form of key = value. Of course, you can also not transfer values. The function automatically quotes the key into a string. 
def fun(**kwargs): print(kwargs) fun() # No value transfer fun(name='Ice', age=0) # {'name': 'Ice', 'age': 0} 4.3 parameter sequence At this point, we will update the parameter order: No parameter = "required parameter (defined) =" default parameter (defined) = "tuple indefinite length (* args) =" dictionary indefinite length (* * kwargs) 4.4 input parameters in the form of unpacking def fun(*args, **kwargs): print(args) print(kwargs) tu = (2, 5, 3, 4, 6) dic = {"name": "Ice", "age": 0} fun(*tu, **dic) # *Indicates tuple unpacking, * * indicates dictionary unpacking closure When the outer function returns the function body of the inner function, it is called closure. def fun(): x = 5 def fun1(): nonlocal x x -= 1 print(x) def fun2(): nonlocal x x += 1 print(x) return fun1 # return fun1,fun2 can also return multiple function bodies. The results are stored in tuples. Calling the corresponding function body can be called by unpacking tuples print(fun()) # <function fun.<locals>.fun1 at 0x000001BA70979EE8> fun()() # Execute the function under the closure If you can't unpack the solution group, you can see this article python learning - tuples Keyword of function The return value function mentioned earlier is return. Here is an introduction to return. Return is used to terminate the function and return the value. If a function is executed to return, all statements after it will not be executed and the function call will be ended directly. def fun(): print(1) return print(3) fun() # 1 3 will not be output here. If return is followed by a variable or value, the value will be returned at the end of the function. The value can be received with a variable! def fun(): print(1) return 10 print(3) a = fun() # 1 print(a) # 10 recursion When a function calls itself, it is called recursion. 
For example, the factorial function can be implemented recursively:

def fun(n):
    if n == 1:
        return 1
    else:
        return n * fun(n - 1)

print(fun(3))  # 6

When using recursion, take care to define an exit from the recursion. In the factorial function above, the exit is the case where n equals 1.

Anonymous functions

As the name suggests, an anonymous function is a function without a name. It is defined with lambda and returns its result automatically, so the function itself can be received by a variable.

a = lambda x: x + 1
print(a(5))  # 6

Of course, a named function can do the same thing.

def a(x):
    return x + 1

print(a(5))  # 6

Task

Interested readers can work through these problems to consolidate their knowledge of functions.

- Define a function isPrime() that takes an integer and judges whether it is prime, returning True if it is and False if it is not. (A prime is a natural number greater than 1 that is not divisible by any natural number other than 1 and itself.)
- Define a function that takes a string and returns UP if its characters are in ascending order, DOWN if they are in descending order, and False if they are in neither. (The string is passed in as lowercase letters.)

The next article will give answers for your reference.

Conclusion

This time a homework element is introduced. What do you think? Please give the article a like; it means a lot to me.

ps: follow me now, and you will be an old fan in the future!!!

Next Trailer

Because there are many functions to cover, I will publish them across two articles. The next one will cover some built-in functions.
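Since the official answers are promised for the next article, here is just one possible hedged sketch of the two exercises in the meantime. The name isPrime comes from the task statement; check_order is an assumed name for the second function.

```python
def isPrime(n):
    # A prime is a natural number greater than 1 with no divisors
    # other than 1 and itself.
    if n <= 1:
        return False
    i = 2
    while i * i <= n:  # trial division up to the square root is enough
        if n % i == 0:
            return False
        i += 1
    return True


def check_order(s):
    # Compare the string against its sorted and reverse-sorted forms.
    if s == "".join(sorted(s)):
        return "UP"
    if s == "".join(sorted(s, reverse=True)):
        return "DOWN"
    return False


print(isPrime(7))          # True
print(isPrime(9))          # False
print(check_order("abc"))  # UP
print(check_order("cba"))  # DOWN
print(check_order("bac"))  # False
```

This is only one approach; the next article's reference answers may differ.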
Walkthrough: Word 2007 XML Format

Erika Ehrli
Microsoft Corporation

June 2006

Applies to: Microsoft 2007 Office Suites, Microsoft Office Word 2007

Summary: Walk through the new default file format for Microsoft Office Word 2007. Read detailed descriptions of the file format architecture, key components, and ways in which you can programmatically modify content. (22 printed pages)

Contents

- Introduction
- Word 2007 Document Packages
- Open Packaging Conventions for the Word XML Format
- Interpreting Word 2007 Files
- Identifying Non-XML Parts in Word 2007 Documents
- Separating Content from Documents
- Understanding the Data Store
- Walkthrough: Creating a Word XML Format File
- Conclusion

Introduction

Microsoft Office Word 2007 provides a new default file format called Microsoft Office Word XML Format (Word XML Format). This format is based on the Open Packaging Conventions, also used by the XML Paper Specification (XPS). The binary file format used in Microsoft Office 97 through Microsoft Office 2003 Editions is still available as a save format, but it is no longer the default when saving new documents.

Microsoft introduced XML into Microsoft Office XP with SpreadsheetML in Microsoft Office Excel 2002. SpreadsheetML was a good start, but it did not provide full fidelity; that is, it cannot describe every part of a spreadsheet. The next release of Microsoft Office products introduced WordprocessingML in Microsoft Office Word 2003. WordprocessingML was a huge step forward because it was the first full-fidelity XML file format provided by Microsoft Office. Using Microsoft Office 2003, you can parse WordprocessingML files and manipulate, update, or add data to them. However, a few limitations exist. For example, you must encode binary data (such as images) as text within the XML file itself, which increases file size when working with a document containing many images.
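The size cost of that limitation is easy to quantify. Base64, the usual way to encode binary data as text inside XML, emits four output bytes for every three input bytes, so embedded images grow by roughly a third. A small illustrative Python sketch (not Word code, just the arithmetic):

```python
import base64
import os

# Simulate 3 MB of binary image data.
binary = os.urandom(3 * 1024 * 1024)

# Encode it as text, the way a pre-2007 XML format had to embed it.
encoded = base64.b64encode(binary)

# Base64 produces 4 output bytes for every 3 input bytes.
print(len(binary))   # 3145728 bytes
print(len(encoded))  # 4194304 bytes, about 33% larger
```

The package-based format avoids this overhead entirely by storing images as ordinary binary parts.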
Additionally, Word 2003 embeds all custom XML data directly into the WordprocessingML that describes the document. This can make the custom XML difficult to access and manipulate from external processes.

The new file format in Word 2007 solves these issues by dividing the file into document parts, each of which defines a part of the overall contents of the file. When you want to change something in the file, you can simply find the document part you want, such as the header, and edit it without accidentally modifying any of the other XML-based document parts. Similarly, all custom XML data is in its own part, so working with custom XML is now easier. This allows you to generate documents programmatically with less code.

In addition to being more robust and making it easier to work with custom XML, the new file format is also much smaller than the binary file format. The new file format takes advantage of ZIP technology by using the Open Packaging Conventions. This article walks through the structure of a Word 2007 document in this new file format.

Word 2007 Document Packages

The file format in Word 2007 consists of a compressed ZIP file, called a package. This package holds all of the content that is contained within the document. Using the package format decreases file size for Office documents because of the ZIP compression. The new format is also more robust against errors in transmission or handling, and it allows you to manipulate the file contents using industry-standard ZIP-based tools.

An easy way to look inside the new file format is to save a Word 2007 document in the new default format and rename the file with a .zip extension. Double-click the file to open and view its contents.

Note: To understand the composition of a file based on Microsoft Office Open XML Formats (Office XML Formats), you may want to extract its parts. To open the file, it is assumed that you have a ZIP utility, such as WinZip, installed on your computer.
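Because the package really is an ordinary ZIP archive, the same inspection can be scripted instead of done by hand. Here is a hedged sketch using Python's standard zipfile module; it fabricates a minimal stand-in package in memory, but with a real document you would simply pass the .docx path to ZipFile:

```python
import io
import zipfile

# Build a tiny stand-in package in memory so the listing below is runnable;
# a real .docx would already contain parts like these.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as pkg:
    pkg.writestr("[Content_Types].xml", "<Types/>")
    pkg.writestr("_rels/.rels", "<Relationships/>")
    pkg.writestr("word/document.xml", "<w:document/>")

# Because the file format is just ZIP, any ZIP library can enumerate its parts.
with zipfile.ZipFile(buf) as pkg:
    for part in pkg.namelist():
        print(part)
```

Running this prints the part names ([Content_Types].xml, _rels/.rels, word/document.xml), the same listing you see when you open a renamed .docx in a ZIP utility.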
To open a Word XML Format file in Word 2007:

- Create a temporary folder in which to store the file and its parts.
- Save a Word document (containing text, pictures, and so forth) as a .docx file.
- Add a .zip extension to the end of the file name.
- Double-click the file. It will open in the ZIP utility. You can see the document parts that are included in the file.
- Extract the parts to the folder that you created earlier.

Integrated ZIP compression reduces the file size by up to 75 percent. Files are further broken down into a modular file structure that makes data recovery more successful and enhances security. The new format segments files into components that you can manage and repair independently. Files created in the new format also have a distinctive file extension for each application, depending on the file type.

Table 1. File extensions for Word 2007 file types

Open Packaging Conventions for the Word XML Format

The Open Packaging Conventions specification defines the structure of Word 2007 documents using the new file format. For more information, see the Open Packaging Conventions, also used by the XML Paper Specification.

To understand the structure of a Word 2007 document, you must understand the three major components of the new file format:

- Part items. Each part item corresponds to one file in the un-zipped package. For example, if you right-click a Microsoft Office Excel workbook and choose to extract it, you see a workbook.xml file, several sheetn.xml files, and other files. Each of those files is a document part in the package.
- Content Type items. Content type items describe what file types are stored in a document part. For example, image/jpeg denotes a JPEG image. This information enables Microsoft Office, and third-party tools, to determine the contents of any part in the package and to process its contents accurately.
- Relationship items.
Relationship items specify how the collection of document parts comes together to form a document. A relationship specifies the connection between a source part and a target resource. Relationships are stored within XML parts in the document package, for example, /_rels/.rels.

The following sections explain how each of these components fits together in an Office XML Formats file.

Word 2007 Document Parts

To facilitate construction, assembly, and reuse of Word 2007 documents by third-party processes and tools, Word divides the contents of the package into several logical parts that each store a specific piece of the document, for example:

- Style definitions
- List definitions
- Headers
- Charts
- Diagrams
- The main document body
- Images

Word represents each of these document parts with an individual file within the package. These parts can consist of XML files, such as the document parts that contain the markup for the Word XML Format, as well as attached contents, such as pictures or OLE-embedded files in their native format. All of these are contained within the package. You can rearrange the parts of a Word file inside its ZIP container, provided that you update the relationships properly so that the document parts continue to relate to one another as designed. If the relationships are accurate, the file opens without error.

The initial file structure in a Word 2007 file is simply the default structure created by Word, designed so that you can determine the file composition easily. Provided that you keep the relationships current, you can change this file structure. In Word 2007, the container file represents a document; within the container file are parts that, when aggregated, compose the document. For example, a Word 2007 file could contain (but is not limited to) the following folders and files:

- [Content_Types].xml. Describes the content type for each part that appears in the file.
- _rels folder. Stores the relationship part for any given part.
- .rels file.
Describes the relationships that begin the document structure. Called a relationship part.
- datastore folder. Contains custom XML data parts within the document. A custom XML data part is an XML file from which you can bind nodes to content controls in the document.
- item1.xml file. Contains some of the data that appears in the document; an example of a custom XML data part.
- docProps folder. Contains the application's properties parts.
- App.xml file. Contains application-specific properties.
- Core.xml file. Contains common file properties for all files based on the Open Packaging Conventions document format.

Figure 1 shows the file structure of a sample Word 2007 document.

Figure 1. Hierarchical file structure of a typical Word 2007 document

You can replace entire document parts in order to change the content, properties, or formatting of Word 2007 documents.

Word 2007 Content Types

As mentioned previously, each document part has a specific content type. The content type of a part describes what kind of contents the part holds. In the case of the XML parts that contain the markup defined by the Word XML Format, the content type can help you determine their composition.

A typical content type begins with the word application, followed by the vendor name (abbreviated to vnd in the content type). All content types that are specific to Word begin with application/vnd.ms-word. If a content type is an XML file, then the URI ends with +xml; other, non-XML content types, such as images, do not have this addition. Some typical content types are:

application/vnd.openxmlformats-officedocument.wordprocessingml.endnotes+xml
Content type for a document part that describes endnotes within a Word document. The +xml indicates that it is an XML file.

application/vnd.openxmlformats-package.core-properties+xml
Content type for a part that describes the core document properties. The +xml indicates that it is an XML file.

image/png
Content type for an image.
The +xml portion is not present, which indicates that this content type is not an XML file.

You can use all of these content types when manipulating the contents of a Word 2007 file programmatically. The Microsoft Windows Software Development Kit (SDK) for Beta 2 of Windows Vista and WinFX Runtime Components includes the System.IO.Packaging namespace, which allows you to add document parts, retrieve and update contents, or create relationships programmatically. For example, using the WinFX System.IO.Packaging classes, you can create a document part with the Package.CreatePart method. The CreatePart method takes two parameters, the URI of the new part and the content type of the part; for example, you could create a document part from a URI stored in a uriResourceTarget variable with the content type used for styles. For more information about package parts, see the PackagePart Class reference documentation in the Microsoft Windows SDK.

Locating Content Types

This section contains a list of the most frequently encountered content types. Word 2007 describes each content type by a file, or part, inside the package. The [Content_Types].xml file, at the root of the package, lists every part in the file and its content type.
For example, you might see something like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="">
  <Default Extension="png" ContentType="image/png"/>
  <Override PartName="/word/footnotes.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.footnotes+xml"/>
  <Override PartName="/word/numbering.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.numbering+xml"/>
  <Override PartName="/word/styles.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml"/>
  <Override PartName="/word/endnotes.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.endnotes+xml"/>
  <Override PartName="/word/footer2.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.footer+xml"/>
  <Override PartName="/word/footer1.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.footer+xml"/>
  <Override PartName="/docProps/custom.xml" ContentType="application/vnd.openxmlformats-officedocument.custom-properties+xml"/>
  <Override PartName="/docProps/core.xml" ContentType="application/vnd.openxmlformats-package.core-properties+xml"/>
</Types>

You can rename and rearrange all of these parts within the directory structure. The parts are listed here in their default locations with their default names to make it as easy as possible to understand what they are. Inside the word directory, off the root of the package, is the majority of the information describing the document. In this directory, you may find parts that represent a number of available content types.

Mapping Document Parts with Content Types

Every XML file in the file format is a document part. If you look inside the newly formatted file of most Word files, within the directory structure you see files, or document parts, like /word/fontTable.xml and /word/styles.xml. These files have clear names that indicate their purpose (for example, font table and style parts). However, you can also change their names.
Therefore, inside the [Content_Types].xml file is a <Types> element that maps each part to its respective content type. For example, the /word/styles.xml document part has a content type of application/vnd.openxmlformats-officedocument.wordprocessingml.styles+xml, and the /docProps/core.xml document part has a content type of application/vnd.openxmlformats-package.core-properties+xml.

Relationships Between Document Parts

Relationships are one of the most important parts of the package because they record the connections between document parts. You can rename and move parts within the package's directory structure, but the relationships must remain intact to keep the file valid.

A relationship is a logical connection between two parts within the file package. For example, the root document part can have a header relationship to a part with the content type application/vnd.openxmlformats-officedocument.wordprocessingml.header+xml. The relationship indicates that the target part is a header for the originating part, and the content type indicates that the contents are a Word 2007 header. This header part could then have its own relationships. For example, if the header contained a JPEG image, the header might have an image relationship to a part with the content type image/jpeg.

Within the package, relationships are always located inside a directory titled _rels. To find the relationships originating from any part, look for the _rels folder that is a sibling of that part. If the part has relationships, the _rels folder contains a file that has your original part name with a .rels extension appended to it. For example, suppose you want to see whether a relationship exists for the officeDocument part. By default, this part has a URI of /word/document.xml, so you would open the directory /word/_rels in the package and look for a file called document.xml.rels.
Every relationship has a source and a target. The source is the part after which the relationship is named. For example, all relationships inside document.xml.rels have document.xml as their source. Each .rels file contains a <Relationships> element, inside which you find a <Relationship> element for each relationship, giving the relationship's Id, its type, and its target part. A typical <Relationships> element inside a .rels file might look like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="">
  <Relationship Id="rId3" Type="" Target="docProps/app.xml"/>
  <Relationship Id="rId2" Type="" Target="docProps/core.xml"/>
  <Relationship Id="rId1" Type="" Target="word/document.xml"/>
  <Relationship Id="rId4" Type="" Target="docProps/custom.xml"/>
</Relationships>

Notice that each Relationship element first specifies the relationship Id, then the relationship type, and finally the target document part.

Interpreting Word 2007 Files

This section walks you through the main set of document parts in a Word 2007 file that uses the new file format. It also outlines the relationships between these parts as presented using the default directory structure.

Understanding Root-Level Relationships

The first part of any file that uses the Word XML Format is a virtual document part, or the package itself, called the start part. From this start part, there are relationships to several top-level parts, which describe the contents of the document:

Table 2. Root-level parts, relationships, and content types

These four default parts contain the primary document properties, as well as a reference to the root part for the document, that is, the main document body content.

Understanding Document-Level Relationships

From the main document part, there is a set of relationships to the document parts referred to by the main document, as shown in Table 3.
Note that most of the relationship types below share a common URI prefix.

Table 3. Document-level parts, relationships, and content types

This list of parts is not complete. For example, it does not include shared parts such as OLE objects, Microsoft ActiveX controls, and digital signatures. However, it does provide insight into the typical Word XML Format structure in Word 2007.

Identifying the Package URI and Content Type Names

As described previously, you can programmatically refer to all of the relationships, and almost all of the document parts, by URI. There are two types of URIs: one for document parts and another for relationships. In the new Word XML Format, relationship URIs usually begin with a common officeDocument base. For example, the relationship type used for application-level properties includes the word officeDocument, because the Open XML File Formats convention dictates these relationships. The exceptions are relationship types whose base uses the word package instead of officeDocument, indicating that they conform to the Open Packaging Conventions; one such relationship type describes properties specific to the file. Relationship URIs are predefined; you cannot change them.

URIs for document parts point to the document part inside the package. For example, the default URI for the document part containing the main information about the document is /word/document.xml. This indicates that the main document information is contained in a file called document.xml inside a folder called word off the root of the package. You can rename and move document parts inside the package, thereby changing the URI for the document part. It is very important to update the relationships when renaming or relocating document parts inside the package.
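This layout also makes relationships easy to read programmatically: open the package, pull out a .rels part, and walk its Relationship elements. The following Python sketch uses only the standard library and fabricates a one-relationship package in memory; the namespace and relationship-type URIs it spells out are the standard Open Packaging Conventions values.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# The standard Open Packaging Conventions relationships namespace.
RELS_NS = "http://schemas.openxmlformats.org/package/2006/relationships"
# The standard officeDocument relationship type.
OFFICE_DOC_TYPE = ("http://schemas.openxmlformats.org/officeDocument"
                   "/2006/relationships/officeDocument")

# Fabricate a minimal in-memory package standing in for a real .docx.
rels_xml = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<Relationships xmlns="%s">'
    '<Relationship Id="rId1" Type="%s" Target="word/document.xml"/>'
    '</Relationships>' % (RELS_NS, OFFICE_DOC_TYPE)
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as pkg:
    pkg.writestr("_rels/.rels", rels_xml)

# Open the package and walk every Relationship in the root .rels part.
with zipfile.ZipFile(buf) as pkg:
    root = ET.fromstring(pkg.read("_rels/.rels"))
    for rel in root.findall("{%s}Relationship" % RELS_NS):
        print(rel.get("Id"), rel.get("Target"))
```

With a real document, the same loop run over /word/_rels/document.xml.rels would list the header, footer, image, and other document-level relationships described above.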
Identifying Non-XML Parts in Word 2007 Documents

All embedded parts in a Word 2007 document are stored in their native formats inside the Word XML Format package. Therefore, if you add a picture to a document, you can rename the document with a .zip extension and open it as you would any ZIP file. Within the package, you can locate the picture and open it as well. If the picture is in .png format, you can see and open a .png file directly from the package. Similarly, if you embed a Microsoft Office Visio document inside a Word 2007 document, you can locate the file as a .bin file inside the package.

This creates many possibilities for developers with files stored on a server. Consider the scenario where a company has hundreds of documents on a server that all contain the same corporate logo image. If the corporate logo changes, you can implement a simple script to replace the old logo with the new logo for every document.

The default location for images in a package is the /word/media directory, and the default location for embedded objects is /word/embeddings. Figure 2 shows the directory structure for a document that contains images and embedded objects.

Figure 2. Hierarchical file structure of a Word 2007 document containing images and embedded objects

Separating Content from Documents

The document part with the content type application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml defines most of the document structure. In macro-enabled files, the part that maps to application/vnd.ms-word.template.macroEnabledTemplate.main+xml plays this role. In the previous code example from the [Content_Types].xml file, this content type maps to the document.xml part in the /word folder. This part contains XML that is similar to a subset of the WordprocessingML used in Word 2003. There are elements for paragraphs, properties, and fonts that describe the basic structure of the document.
Individual parts describe all the components of the document, such as headers, footers, lists, and endnotes. By default, most of these parts are siblings of the main document part. If you look at the previous code example from the [Content_Types].xml file, you see many of these parts listed.

This separation of content from formatting makes working on elements of a document programmatically a much easier task than in previous versions. Using the WinFX System.IO.Packaging classes, you can modify the file with a few lines of code and perform tasks such as:

- Replacing an old corporate logo used in hundreds of documents on a server with a new logo. Simply locate the image, delete it, and replace it with the new image.
- Updating all the footers in documents on a server with an updated company name.
- Changing the style of all the text in documents on a server to reflect a new corporate font.

There are, of course, many more possibilities. With this content separation, locating the part to edit is much easier than it is with the WordprocessingML used in Word 2003. In a WordprocessingML file, the entire document is described in one giant XML file. Parsing the file to make a change can be difficult, and it can be risky, because a single mistake can corrupt the entire document. In contrast, if one part of a Word 2007 document is corrupt, the rest of the document should still open without error.

Understanding the Data Store

Like many other types of data in the Word XML Format, custom XML data is stored separately from the rest of the document. Each item is stored as a separate part within the package, and by default, this data appears in a folder called customXml located off the root of the package. If you attach an XML file to a document programmatically by adding a new part to the document's CustomXMLParts collection, then by default that XML data is stored in a file called /customXml/item1.xml.
If you add custom XML data from another file, then, by default, it is stored in a file called /customXml/item2.xml. By using XMLMapping and XPath expressions, you can map specific elements of an XML part to a content control. This means that to modify or change custom XML programmatically, you do not need to parse through a large WordprocessingML file, as you did in Word 2003. Instead, you find the part holding the custom XML that you want and modify only that part of the file.

To add custom data to your document, you need to create a custom XML file and add it to the ZIP package. You also need to create the corresponding relationship from the main document part to your custom XML part. In the Word XML Format in Word 2007, each custom part persists in its own XML part within the document container. The custom part contains the file name and its relationship information. The XML part is stored off the root of the document container in a folder called customXml. Figure 3 shows the directory structure for a document that contains custom XML data.

Figure 3. Hierarchical file structure of a Word 2007 document containing custom XML data

Isolating custom XML data inside a document package allows you to read and update custom data without modifying or working with other document parts. The relationship file, stored inside a _rels folder, describes the relationships from one XML part to other XML parts within a Word XML Format document. There are two relationship types for custom XML parts: one for the custom XML item itself and one for its properties part. An ID is stored with each relationship, allowing you to identify it uniquely within the data store.

The actual custom XML part is stored in its own file alongside the _rels folder. Each custom XML part has a file name of item##.xml and its own properties part, named itemProps##.xml. In both file names, ## is the number (1, 2, 3, ...) of the custom XML part in the data store.
The item##.xml part contains the custom XML data itself.

Walkthrough: Creating a Word XML Format File

Document.xml is the only required part in the Word XML Format. For information about how to create a minimal document with only this required part, see the section Creating the Document. To illustrate how document parts, content type items, and relationship items work together, this section walks through the process of building a more elaborate Word XML Format document in Word 2007. This walkthrough helps illustrate how to access and alter document contents using the Word XML Format.

To create a Word 2007 document that contains content type and relationship items, you need to create a root folder that contains a specific folder and file structure, as shown in Figure 4.

Figure 4. Folder and file structure for a Word 2007 document

After you create all of the folders and files, the following sections walk you through adding the required XML code to each document part.

Creating the Document Properties

First, you need to create two XML files for the document properties:

- Create a folder and name it root.
- Create a folder inside the root folder and name it docProps.
- Open Notepad or any other XML editor.
- Copy and paste the following code into a new file and save it as app.xml inside the docProps folder:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Properties xmlns="">
  <Template>Normal.dotm</Template>
  <TotalTime>1</TotalTime>
  <Pages>1</Pages>
  <Words>3</Words>
  <Characters>23</Characters>
  <Application>Microsoft Office Word</Application>
  <DocSecurity>0</DocSecurity>
  <Lines>1</Lines>
  <Paragraphs>1</Paragraphs>
  <ScaleCrop>false</ScaleCrop>
  <Company>MS</Company>
  <LinksUpToDate>false</LinksUpToDate>
  <CharactersWithSpaces>25</CharactersWithSpaces>
  <SharedDoc>false</SharedDoc>
  <HyperlinksChanged>false</HyperlinksChanged>
  <AppVersion>12.0000</AppVersion>
</Properties>

- Open Notepad or any other XML editor.
- Copy and paste the following code into a new file and save it as core.xml inside the docProps folder:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cp:coreProperties xmlns:cp="" xmlns:dc="" xmlns:dcterms="" xmlns:xsi="">
  <dc:title></dc:title>
  <dc:subject></dc:subject>
  <dc:creator>Your name</dc:creator>
  <cp:keywords></cp:keywords>
  <dc:description></dc:description>
  <cp:lastModifiedBy>Your name</cp:lastModifiedBy>
  <cp:revision>2</cp:revision>
  <dcterms:created xsi:type="dcterms:W3CDTF">2006-05-03T01:13:00Z</dcterms:created>
  <dcterms:modified xsi:type="dcterms:W3CDTF">2006-05-03T01:14:00Z</dcterms:modified>
</cp:coreProperties>

Creating the Document

Next, you need to create an XML file for the document part. This is the only required part in the new Word XML Format.

- Create a folder inside the root folder and name it word.
- Open Notepad or any other XML editor.
- Copy and paste the following code into a new file and save it as document.xml inside the word folder:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<w:document xmlns:w="">
  <w:body>
    <w:p>
      <w:r>
        <w:t>Word 2007 rocks my world!</w:t>
      </w:r>
    </w:p>
  </w:body>
</w:document>

Creating a Relationship

Next, you need to create a relationship to the document part. This relationship is documented in the root _rels folder, which means that the relationship is off the root (or start part) of the package. To create the relationship:

- Create a folder inside the root folder and name it _rels.
- Open Notepad or any other XML editor.
- Copy and paste the following code into a new file and save it as .rels inside the _rels folder:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="">
  <Relationship Id="rId3" Type="" Target="docProps/app.xml"/>
  <Relationship Id="rId2" Type="" Target="docProps/core.xml"/>
  <Relationship Id="rId1" Type="" Target="word/document.xml"/>
</Relationships>

Notice that this XML creates a relationship of type officeDocument with ID rId1 to the document.xml file in the folder named word.

Defining the Content Type

Next, you need to define the content types of the files in the package.
- Note that the structure of a content type definition file follows the [Content_Types].xml example shown earlier in this article: a Default entry for each file extension and an Override entry for each part.
- Open Notepad or any other XML editor.
- Create content type entries for the app.xml, core.xml, and document.xml parts in a new file and save it as [Content_Types].xml inside the root folder.

Note: This reserved file name is used by the Open Packaging Conventions to define the content types of all files in the package.

Creating the Package

Finally, you can put these files into a ZIP package to create a valid Word 2007 document:

- Using any ZIP utility, save all the content of the root folder into a ZIP archive, including the following subfolders: the docProps folder, the word folder, and the _rels folder. Also include [Content_Types].xml.

IMPORTANT: Do not simply add the complete root folder itself to a ZIP file, or you get an internal error while opening the file in Word 2007. You need to specifically add all the subfolders and files of the root folder to the ZIP archive.

- Save the archive as simpledocument.docx.

Now, you can open this file in Word 2007 and see the contents of the package:

Figure 5. Simpledocument.docx in Word 2007

Conclusion

When compared to the binary file format used in previous versions of Word, the new Word XML Format in Word 2007 offers many strong benefits. The compression offered by the ZIP container results in much smaller file sizes. The files are also much more robust: if a portion of the file becomes corrupt, the compartmentalization of the different document elements still allows the file to open. It is also easier to change, add, or delete data in a Word 2007 file, programmatically or manually. The file is easily accessible using the Microsoft WinFX System.IO.Packaging classes, so you can modify documents on a server with only a few lines of code. You can readily access and manipulate custom XML data from its own separate parts. You can even use events to trigger the change of XML data.
For example, you can map a content control to an XML element containing a stock quote and then retrieve the most recent quote programmatically each time the document opens, thereby ensuring that the user always sees the current price. The possibilities and ease with which you can program against the new Word XML Format are impressive and mark a significant advancement in Microsoft Office.

Additional Resources

To keep current with the latest on Word 2007 and the new file format, see the following resources:

- Introducing the Microsoft Office (2007) Open XML File Formats
- What's New for Developers in Word 2007
- Office Open XML Document Interchange Specification
- Microsoft Windows Software Development Kit (SDK) for Beta 2 of Windows Vista and WinFX Runtime Components
- Open Packaging Conventions
- Setting Word Document Properties the Office 2007 Way
- Video: Office 12 - Word to PDF File Translation
- Video: Open XML File Formats
- OpenXMLDeveloper.org
- Video: Brian Jones - New Office File Formats Announced
- Blog: Brian Jones: Office XML Formats

Acknowledgments

Thanks to Frank Rice, Mark Iverson, and Tristan Davis for their contributions to this article.
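A footnote to the packaging walkthrough above: because only the ZIP layout matters, the manual zipping step can also be scripted. Here is a small Python sketch (an addition, not part of the original article, which assumes a GUI ZIP utility); the folder and file names follow the article's example:

```python
import os
import zipfile

def pack_docx(source_dir, docx_path):
    """Zip the contents of source_dir (not the folder itself) into docx_path.

    This mirrors the article's warning: the package root must contain
    [Content_Types].xml, _rels/, docProps/ and word/ directly; zipping
    the parent folder itself produces a file Word cannot open.
    """
    with zipfile.ZipFile(docx_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the package root, e.g. "word/document.xml"
                arcname = os.path.relpath(full, source_dir).replace(os.sep, "/")
                zf.write(full, arcname)
```

Calling `pack_docx('simpledocument', 'simpledocument.docx')` should produce the same kind of archive as the manual steps, assuming the folder is laid out as described above.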
https://msdn.microsoft.com/en-us/library/ms771890.aspx
CC-MAIN-2018-17
refinedweb
5,158
55.84
the Loader class, as well as the URLRequest class. This actually gives us more flexibility when loading external images, because of the various events of the Loader object that we can create event listeners for.

Let's say that we have a .swf file called "main.swf", a document class file called "Main.as", and a .jpg file called "thePic.jpg" in a directory called "images/". (If you are not yet familiar with the document class, please see this tutorial.) Our directory structure would look like the following:

In order to load the .jpg into our file, we first need to import the correct classes into our scope. We will require the URLRequest and Loader classes, and also, we'll go ahead and import the "events" package, since it is often good practice to have all of the events handy.

The Code

Here is our code for the Main.as file so far:

```actionscript
package {
	import flash.display.MovieClip;
	import flash.display.Loader;
	import flash.events.*;
	import flash.net.URLRequest;
```

Next, we'll go ahead and make our constructor, a function called Main(). Inside our constructor we will create a Loader object, a String holding the path to our image, and a URLRequest object. We will pass the path to our bitmap image (which should be a .gif, .jpg, or .png file) to the URLRequest() constructor. In our case it will be a .jpg.

```actionscript
package {
	import flash.display.MovieClip;
	import flash.display.Loader;
	import flash.events.*;
	import flash.net.URLRequest;

	public class Main extends MovieClip {

		public function Main() {
			var imageLoader:Loader = new Loader();
			var theURL:String = "images/thePic.jpg";
			var imageRequest:URLRequest = new URLRequest(theURL);
```

Next, we will add the all-important event listener. As you should now be aware, interactivity is all about events (so you should get to know your event listener syntax). Please note here that we are not adding the event listener directly to the Loader object; we are passing it to one of its properties, its contentLoaderInfo object. For more about the contentLoaderInfo class, see the Adobe docs.
This object will trigger an event called "COMPLETE" when everything is loaded, so this is what we will listen for. To this event we assign the listener onComplete(), to be defined later (see below). After the event listener is set, we then call the load() method of the Loader object that we have instantiated as imageLoader.

Lastly, placing the image on stage is handled by our (nested) function called onComplete(). It does the work of adding our newly loaded DisplayObject instance (in this case the content property of a Loader object) to the display list using addChild(). The complete constructor function is below:

```actionscript
public function Main() {
	var imageLoader:Loader = new Loader();
	var theURL:String = "images/thePic.jpg";
	var imageRequest:URLRequest = new URLRequest(theURL);

	imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onComplete);
	imageLoader.load(imageRequest);

	function onComplete(evt:Event) {
		addChild(imageLoader.content);
	}
}
```

Notice the line:

```actionscript
addChild(imageLoader.content);
```

The content property of the Loader object is used here because it actually contains our image. If we want to manipulate it as a Bitmap, however, we will have to cast it to the correct data type (not covered in this tutorial).

Don't Forget to Unload the Image When You are Done With It

The title of this section says it all. You should use the removeChild() method of the DisplayObjectContainer that the object belongs to (in this case our main timeline), as well as the Loader.unload() method, to remove all references to the loaded image (unless you want to keep the bitmap in memory, of course). Once there are no remaining references, our good friend the garbage collector will remove the bitmap from memory.

And that concludes this tutorial. I hope you enjoyed it. Ciao and happy ActionScripting!
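The cleanup steps just described can be sketched as follows. This snippet is an illustration rather than code from the tutorial, and it assumes imageLoader is still in scope and its content was added to the main timeline:

```actionscript
// Remove the bitmap from the display list...
removeChild(imageLoader.content);
// ...release the loaded content held by the Loader...
imageLoader.unload();
// ...and drop our own reference so the garbage collector can reclaim it.
imageLoader = null;
```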
http://m.mowser.com/web/www.heaveninteractive.com%2Fweblog%2F2008%2F03%2F28%2Floading-an-external-jpg-png-or-gif-file-in-actionscript-30%2F
crawl-002
refinedweb
614
59.5
The wcrtomb() function is defined in the <cwchar> header file.

wcrtomb() prototype

```cpp
size_t wcrtomb( char* s, wchar_t wc, mbstate_t* ps );
```

The wcrtomb() function converts the wide character represented by wc to a narrow multibyte character and stores it in the address pointed to by s.

- If s is not a null pointer, the wcrtomb() function determines the number of bytes (at most MB_CUR_MAX) required to store the multibyte representation of wc and stores that representation in the memory location pointed to by s. The value of ps is updated as required.
- If s is a null pointer, the call is equivalent to wcrtomb(buf, L'\0', ps) for some internal buffer buf.
- If wc == L'\0', a null byte is stored.

wcrtomb() Parameters

- s: Pointer to the character array to store the multibyte character result.
- wc: Wide character to convert.
- ps: Pointer to the conversion state object used when interpreting the multibyte string.

wcrtomb() Return value

- On success, the wcrtomb() function returns the number of bytes written to the character array whose first element is pointed to by s.
- On failure (i.e. wc is not a valid wide character), it returns -1, errno is set to EILSEQ, and *ps is left in an unspecified state.

Example: How wcrtomb() function works

```cpp
#include <cwchar>
#include <clocale>
#include <iostream>
using namespace std;

int main()
{
    setlocale(LC_ALL, "en_US.utf8");
    wchar_t str[] = L"u\u00c6\u00f5\u01b5";
    char s[16];
    int retVal;
    mbstate_t ps = mbstate_t();

    for (size_t i = 0; i < wcslen(str); i++)
    {
        retVal = wcrtomb(s, str[i], &ps);
        if (retVal != -1)
        {
            s[retVal] = '\0'; // null-terminate before printing
            cout << "Size of " << s << " is " << retVal << " bytes" << endl;
        }
        else
            cout << "Invalid wide character" << endl;
    }
    return 0;
}
```

When you run the program, the output will be:

```
Size of u is 1 bytes
Size of Æ is 2 bytes
Size of õ is 2 bytes
Size of Ƶ is 2 bytes
```
https://www.programiz.com/cpp-programming/library-function/cwchar/wcrtomb
CC-MAIN-2020-16
refinedweb
306
61.67
Introduction

In this article, we will learn how to implement Azure serverless with Blazor WebAssembly. We will create an app that lists some frequently asked questions (FAQ) on Covid-19. We will create an Azure Cosmos DB, which will act as our primary database to store questions and answers. An Azure Function app will be used to fetch data from Cosmos DB. We will deploy the Function app to Azure to expose it globally via an API endpoint. We will consume the API in a Blazor WebAssembly app. The FAQs will be displayed in a card layout with the help of Bootstrap.

What is a serverless architecture?

In a traditional application, such as a 3-tier app, a client requests resources from the server, and the server processes the request and responds with the appropriate data. However, there are some issues with this architecture. We need a server running continuously: even if there are no requests, the server must be available 24x7, ready to process them. Maintaining server availability is cost-intensive. Another problem is scaling: if traffic is heavy, we need to scale out all the servers, which can be a cumbersome process.

An effective solution to these problems is a serverless web architecture. The client makes a request to a file storage account instead of a server. The storage account returns the index.html page along with some code that needs to be rendered in the browser. Since there is no server to render the page, we rely on the browser to render it. All the logic to draw or update elements runs in the browser. We do not have any server on the backend; we just have a storage account with static assets.

What is an Azure function?

Making the browser run all the logic to render the page seems exciting, but it has some limitations. We do not want the browser to make database calls. We need some part of our code to run on the server side, such as connecting to a database. This is where Azure Functions come in.
In a serverless architecture, if we want some code to run on the server side, we use an Azure function. Azure Functions is an event-driven serverless compute platform: you pay only when execution happens, and functions are easy to scale. Hence, we get both the scaling and the cost benefits with Azure Functions. To learn more, refer to the official Azure Functions docs.

Why should you use Azure serverless?

An Azure serverless solution can add value to your product by minimizing the time and resources you spend on infrastructure-related requirements. You can increase developer productivity, optimize resources, and accelerate time to market with the help of a fully managed, end-to-end Azure serverless solution.

What is Blazor?

Blazor is a .NET web framework for creating client-side applications using C#/Razor and HTML. Blazor runs in the browser with the help of WebAssembly. It can simplify the process of creating a single-page application (SPA). It also provides a full-stack web development experience using .NET. Using .NET for developing client-side applications has multiple advantages, as mentioned below:

- .NET offers a range of APIs and tools across all platforms that are stable and easy to use.
- Modern languages such as C# and F# offer a lot of features that make programming easier and more interesting for developers.
- The availability of one of the best IDEs, in the form of Visual Studio, provides a great .NET development experience across multiple platforms such as Windows, Linux, and macOS.
- .NET provides features such as speed, performance, security, scalability, and reliability in web development that make full-stack development easier.

Why should you use Blazor?

Blazor supports a wide array of features to make web development easier for us. Some of the prominent features of Blazor are mentioned below:

- Component-based architecture: Blazor provides us with a component-based architecture to create rich and composable UI.
- Dependency injection: This allows us to use services by injecting them into components.
- Layouts: We can share common UI elements (for example, menus) across pages using the layouts feature.
- Routing: We can redirect the client request from one component to another with the help of routing.
- JavaScript interop: This allows us to invoke a C# method from JavaScript, and we can call a JavaScript function or API from C# code.
- Globalization and localization: The application can be made accessible to users in multiple cultures and languages.
- Live reloading: Live reloading of the app in the browser during development.
- Deployment: We can deploy the Blazor application on IIS and the Azure cloud.

To learn more about Blazor, please refer to the official Blazor docs.

Prerequisites

To get started with the application, we need to fulfill the prerequisites mentioned below:

- An Azure subscription account. You can create a free Azure account on the Azure website.
- The latest version of Visual Studio 2019. While installing VS 2019, please make sure you select the Azure development and ASP.NET and web development workloads.

Create Azure Cosmos DB account

Log in to the Azure portal, search for "Azure Cosmos DB" in the search bar, and click on the result. On the next screen, click on the Add button. It will open a "Create Azure Cosmos DB Account" page. You need to fill in the required information to create your database. Refer to the image shown below:

You can fill in the details as indicated below:

- Subscription: Select your Azure subscription name from the drop-down.
- Resource Group: Select an existing resource group or create a new one.
- Account Name: Enter a unique name for your Azure Cosmos DB account. The name can contain only lowercase letters, numbers, and the '-' character, and must be between 3 and 44 characters.
- API: Select Core (SQL).
- Location: Select a location to host your Azure Cosmos DB account.
Keep the other fields at their default values and click on the "Review + Create" button. On the next screen, review all your configurations and click on the "Create" button. After a few minutes, the Azure Cosmos DB account will be created. Click on "Go to resource" to navigate to the Azure Cosmos DB account page.

Set up the Database

On the Azure Cosmos DB account page, click on "Data Explorer" in the left navigation, and then select "New Container". Refer to the image shown below:

An "Add Container" pane will open. You need to fill in the details to create a new container for your Azure Cosmos DB. Refer to the image shown below:

You can fill in the details as indicated below:

- Database ID: You can give any name to your database. Here I am using FAQDB.
- Throughput: Keep it at the default value of 400.
- Container ID: Enter the name for your container. Here I am using FAQContainer.
- Partition key: The partition key is used to automatically partition data among multiple servers for scalability. Put the value as "/id".

Click on the "OK" button to create the database.

Add Sample data to the Cosmos DB

In the Data Explorer, expand the FAQDB database, then expand the FAQContainer. Select Items, and then click on New Item at the top. An editor will open on the right side of the page. Refer to the image shown below:

Put the following JSON data in the editor and click on the Save button at the top:

```json
{
  "id": "1",
  "question": "What is corona virus?",
  "answer": "Corona viruses are a large family of viruses which may cause illness in animals or humans. The most recently discovered coronavirus causes coronavirus disease COVID-19."
}
```

We have added a set of questions and answers along with a unique id. Follow the process described above to insert five more sets of data.

```json
{
  "id": "2",
  "question": "What is COVID-19?",
  "answer": "COVID-19 is the infectious disease caused by the most recently discovered corona virus. This new virus and disease were unknown before the outbreak began in Wuhan, China, in December 2019."
}
```

```json
{
  "id": "3",
  "question": "What are the symptoms of COVID-19?",
  "answer": "..."
}
```

```json
{
  "id": "4",
  "question": "How does COVID-19 spread?",
  "answer": "..."
}
```

```json
{
  "id": "5",
  "question": "What can I do to protect myself and prevent the spread of disease?",
  "answer": "You can reduce your chances of being infected or spreading COVID-19 by taking some simple precautions. Regularly and thoroughly clean your hands with an alcohol-based hand rub or wash them with soap and water. Maintain at least 1 metre (3 feet) distance between yourself and anyone who is coughing or sneezing."
}
```

```json
{
  "id": "6",
  "question": "Are antibiotics effective in preventing or treating the COVID-19?",
  "answer": "No. Antibiotics do not work against viruses, they only work on bacterial infections. COVID-19 is caused by a virus, so antibiotics do not work."
}
```

Get the connection string

Click on "Keys" in the left navigation and navigate to the "Read-write Keys" tab. The value under PRIMARY CONNECTION STRING is our required connection string. Refer to the image shown below:

Make a note of the PRIMARY CONNECTION STRING value. We will use it in the latter part of this article, when we access the Azure Cosmos DB from an Azure function.

Create an Azure function app

Open Visual Studio 2019 and click on "Create a new project". Search for "Functions" in the search box. Select the Azure Functions template and click on Next. Refer to the image shown below:

In the "Configure your new project" window, enter the project name FAQFunctionApp. Click on the Create button. Refer to the image below:

A new "Create a new Azure Function Application settings" window will open. Select "Azure Functions v3 (.NET Core)" from the dropdown at the top. Select the function template "HTTP trigger". Set the authorization level to "Anonymous" from the drop-down on the right. Click on the Create button to create the function project and HTTP trigger function.
Refer to the image shown below:

Install package for Azure Cosmos DB

To enable the Azure Function app to bind to Azure Cosmos DB, we need to install the Microsoft.Azure.WebJobs.Extensions.CosmosDB package. Navigate to Tools » NuGet Package Manager » Package Manager Console and run the following command:

```powershell
Install-Package Microsoft.Azure.WebJobs.Extensions.CosmosDB
```

Refer to the image shown below. You can learn more about this package at the NuGet gallery.

Configure the Azure Function App

The Azure function project contains a default file called Function1.cs. You can safely delete this file, as we won't be using it for this project.

Right-click on the FAQFunctionApp project and select Add » New Folder. Name the folder Models. Then right-click on the Models folder and select Add » Class to add a new class file. Put the name of your class as FAQ.cs and click Add. Open FAQ.cs and put the following code inside it:

```csharp
namespace FAQFunctionApp.Models
{
    class FAQ
    {
        public string Id { get; set; }
        public string Question { get; set; }
        public string Answer { get; set; }
    }
}
```

The class has the same structure as the JSON data we have inserted into the Cosmos DB.

Right-click on the FAQFunctionApp project and select Add » Class. Name your class CovidFAQ.cs. Put the following code inside it:
```csharp
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.WebJobs;
using FAQFunctionApp.Models;
using Microsoft.Azure.WebJobs.Extensions.Http;
using System.Threading.Tasks;

namespace FAQFunctionApp
{
    class CovidFAQ
    {
        [FunctionName("covidFAQ")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
            [CosmosDB(
                databaseName: "FAQDB",
                collectionName: "FAQContainer",
                ConnectionStringSetting = "DBConnectionString"
            )] IEnumerable<FAQ> questionSet,
            ILogger log)
        {
            log.LogInformation("Data fetched from FAQContainer");
            return new OkObjectResult(questionSet);
        }
    }
}
```

We have created a class CovidFAQ and added an Azure function to it. The FunctionName attribute is used to specify the name of the function. We have used the HttpTrigger attribute, which allows the function to be triggered via an HTTP call. The CosmosDB attribute is used to connect to the Azure Cosmos DB. We have defined three parameters for this attribute, as described below:

- databaseName: the name of the Cosmos DB database
- collectionName: the collection inside the Cosmos DB we want to access
- ConnectionStringSetting: the connection string used to connect to Cosmos DB. We will configure it in the next section.

We have decorated the parameter questionSet, which is of type IEnumerable<FAQ>, with the CosmosDB attribute. When the app is executed, the parameter questionSet will be populated with the data from Cosmos DB. The function will return the data using a new instance of OkObjectResult.

Add the connection string to the Azure Function

Remember the Azure Cosmos DB connection string you noted earlier? Now we will configure it for our app.
Open the local.settings.json file and add your connection string as shown below:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "DBConnectionString": "your connection string"
  }
}
```

The local.settings.json file will not be published to Azure when we publish the Azure Function app. Therefore, we need to configure the connection string separately while publishing the app to Azure. We will see this in action in the latter part of this article.

Test the Azure Function locally

Press F5 to execute the function. Copy the URL of your function from the Azure Functions runtime output. Open the browser and paste the URL in the browser's address bar. You can see the output as shown below:

Here you can see the data we have inserted into our Azure Cosmos DB.

Publish the Function app to Azure

We have successfully created the Function app, but it is still running on localhost. Let's publish the app to make it available globally.

Right-click on the FAQFunctionApp project and select Publish. Select the publish target as Azure. Select the specific target as "Azure Function App (Windows)". In the next window, click on the "Create a new Azure Function…" button. A new Function App window will open. Refer to the image as shown below:

You can fill in the details as indicated below:

- Name: A globally unique name for your function app.
- Subscription: Select your Azure subscription name from the drop-down.
- Resource Group: Select an existing resource group or create a new one.
- Plan Type: Select Consumption. It will make sure that you pay only for executions of your function app.
- Location: Select a location for your function.
- Azure Storage: Keep the default value.

Click on the "Create" button to create the Function App and return to the previous window. Make sure the option "Run from package file" is checked. Click on the Finish button. Now you are at the Publish page.
Click on the “Manage Azure App Service Settings” button. You will see a “Application Settings” window as shown below: At this point, we will configure the Remote value for the “DBConnectionString” key. This value is used when the app is deployed on Azure. Since the key for Local and Remote environment is the same in our case, click on the “Insert value from Local” button to copy the value from the Local field to the Remote field. Click on the OK button. You are navigated back to the Publish page. We are done with all the configurations. Click on the Publish button to publish your Azure function app. After the app is published, get the site URL value, append /api/covidFAQ to it and open it in the browser. You can see the output as shown below. This is the same dataset that we got while running the app locally. This proves that our serverless Azure function is deployed and able to access the Azure Cosmos DB successfully. Enable CORS for the Azure app service We will use the Function app in a Blazor UI project. To allow the Blazor app to access the Azure Function, we need to enable CORS for the Azure app service. Open the Azure portal. Navigate to “All resources”. Here, you can see the App service which we have created while Publishing the app the in previous section. Click on the resource to navigate to the resource page. Click on CORS on the left navigation. A CORS details pane will open. Now we have two options here: - Enter the specific origin URL to allow them to make cross-origin calls. Remove all origin URL from the list, and use “*” wildcard to allow all the URL to make cross-origin calls. - We will use the second option for our app. Remove all the previously listed URL and enter a single entry as “*” wildcard. Click on the Save button at the top. Refer to the image shown below: Create the Blazor Web assembly project Open Visual Studio 2019, click on “Create a new project”. Select “Blazor App” and click on the “Next” button. 
Refer to the image shown below:

On the "Configure your new project" window, put the project name as FAQUIApp and click on the "Create" button as shown in the image below:

On the "Create a new Blazor app" window, select the "Blazor WebAssembly App" template. Click on the Create button to create the project. Refer to the image shown below:

To create a new Razor component, right-click on the Pages folder and select Add » Razor Component. An "Add New Item" dialog box will open; put the name of your component as CovidFAQ.razor and click on the Add button. Refer to the image shown below:

Open CovidFAQ.razor and put the following code into it:

```razor
@page "/covidfaq"
@inject HttpClient Http

<div class="d-flex justify-content-center">
    <img src="../Images/COVID_banner.jpg" alt="Image" style="width:80%; height:300px" />
</div>
<br />
<div class="d-flex justify-content-center">
    <h1>Frequently asked Questions on Covid-19</h1>
</div>
<hr />
@if (questionList == null)
{
    <p><em>Loading...</em></p>
}
else
{
    @foreach (var question in questionList)
    {
        <div class="card">
            <h3 class="card-header">
                @question.Question
            </h3>
            <div class="card-body">
                <p class="card-text">@question.Answer</p>
            </div>
        </div>
        <br />
    }
}

@code {
    private FAQ[] questionList;

    protected override async Task OnInitializedAsync()
    {
        questionList = await Http.GetFromJsonAsync<FAQ[]>("");
    }

    public class FAQ
    {
        public string Id { get; set; }
        public string Question { get; set; }
        public string Answer { get; set; }
    }
}
```

In the @code section, we have created a class called FAQ. The structure of this class is the same as that of the FAQ class we created earlier in the Azure function app. Inside the OnInitializedAsync method, we hit the API endpoint of our function app. The data returned from the API is stored in a variable called questionList, which is an array of type FAQ.

In the HTML section of the page, we have set a banner image at the top of the page. The image is available in the /wwwroot/Images folder.
We use a foreach loop to iterate over the questionList array and create a Bootstrap card to display each question and answer.

Adding Link to Navigation menu

The last step is to add the link to our CovidFAQ component in the navigation menu. Open the /Shared/NavMenu.razor file and add the following code into it:

```razor
<li class="nav-item px-3">
    <NavLink class="nav-link" href="covidfaq">
        <span class="oi oi-plus" aria-hidden="true"></span> Covid FAQ
    </NavLink>
</li>
```

Remove the navigation links for the Counter and Fetch-data components, as they are not required for this application.

Execution Demo

Press F5 to launch the app. Click on the Covid FAQ button in the nav menu on the left. You can see all the questions and answers in a beautiful card layout, as shown below:

Source Code

You can get the source code from GitHub.

Summary

We learned about serverless architecture and its advantages over the traditional 3-tier web architecture. We also learned how Azure Functions fit into a serverless web architecture. To demonstrate the practical implementation of these concepts, we created a Covid-19 FAQ app using Blazor WebAssembly and Azure serverless. The questions and answers are displayed in a card layout using Bootstrap. We used Azure Cosmos DB as the primary database to store the questions and answers for our FAQ app. An Azure function is used to fetch data from Cosmos DB. We deployed the function app on Azure to make it available globally via an API endpoint.
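As a final observation, nothing about the function's JSON output is Blazor-specific; any HTTP client can consume the /api/covidFAQ endpoint. The sketch below is an addition, not from the article: it parses a response body with the shape shown earlier. The actual GET call and the exact property casing depend on your deployment and serializer settings, so treat the field names and URL as assumptions:

```python
import json

def load_faqs(body):
    """Parse the function's JSON array into (question, answer) pairs."""
    return [(item["question"], item["answer"]) for item in json.loads(body)]

# In a real client this body would come from something like
# requests.get(site_url + "/api/covidFAQ").text, where site_url is your
# deployed function app (hypothetical here). Depending on the JSON
# serializer, the keys may instead appear as "Question"/"Answer".
sample_body = json.dumps([
    {"id": "1",
     "question": "What is corona virus?",
     "answer": "Corona viruses are a large family of viruses..."},
])

for question, answer in load_faqs(sample_body):
    print(question)  # -> What is corona virus?
```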
https://girishgodage.in/blog/serverless-with-blazor
CC-MAIN-2022-40
refinedweb
3,386
66.44
This seemingly simple task had me scratching my head for a few hours while I was working on my website. As it turns out, getting the current page URL in Gatsby is not as straightforward as you may think, but also not so complicated to understand. Let's look at a few methods of making it happen. But first, you might be wondering why on earth we'd even want to do something like this.

Why you might need the current URL

So before we get into the how, let's first answer the bigger question: Why would you want to get the URL of the current page? I can offer a few use cases.

Meta tags

The first obvious thing that you'd want the current URL for is meta tags in the document head:

```jsx
<link rel="canonical" href={url} />
<meta property="og:url" content={url} />
```

Social Sharing

I've seen it on multiple websites where a link to the current page is displayed next to sharing buttons. Something like this (found on Creative Market).

Styling

This one is less obvious but I've used it a few times with styled-components. You can render different styles based on certain conditions. One of those conditions can be a page path (i.e. part of the URL after the name of the site). Here's a quick example:

```jsx
import React from 'react';
import styled from 'styled-components';

const Layout = ({ path, children }) => (
  <StyledLayout path={path}>
    {children}
  </StyledLayout>
);

const StyledLayout = styled.main`
  background-color: ${({ path }) => (path === '/' ? '#fff' : '#000')};
`;

export default Layout;
```

Here, I've created a styled Layout component that, based on the path, has a different background color. This list of examples only illustrates the idea and is by no means comprehensive. I'm sure there are more cases where you might want to get the current page URL. So how do we get it?

Understand build time vs. runtime

Not so fast!
Before we get to the actual methods and code snippets, I'd like to make one last stop and briefly explain a few core concepts of Gatsby. I encourage you to read the official Gatsby documentation about it, but here's my interpretation.

The first thing that we need to understand is that Gatsby, among many other things, is a static site generator. That means it creates static files (that are usually HTML and JavaScript). There is no server and no database on the production website. All pieces of information (including the current page URL) must be pulled from other sources or generated during build time or runtime before inserting it into the markup. That leads us to the second important concept we need to understand: build time vs. runtime.

Runtime is when one of the static pages is opened in the browser. In that case, the page has access to all the wonderful browser APIs, including the Window API that, among many other things, contains the current page URL.

One thing that is easy to confuse, especially when starting out with Gatsby, is that running gatsby develop in the terminal in development mode spins up the browser for you. That means all references to the window object work and don't trigger any errors.

Build time happens when you are done developing and tell Gatsby to generate final optimized assets using the gatsby build command. During build time, the browser doesn't exist. This means you can't use the window object.

Here comes the a-ha! moment. If builds are isolated from the browser, and there is no server or database where we can get the URL, how is Gatsby supposed to know what domain name is being used? That's the thing — it can't! You can get the slug or path of the page, but you simply can't tell what the base URL is. You have to specify it.

This is a very basic concept, but if you are coming in fresh with years of WordPress experience, it can take some time for this info to sink in.
You know that Gatsby is serverless and all but moments like this make you realize: There is no server.

Now that we have that sorted out, let's jump to the actual methods for getting the URL of the current page.

Method 1: Use the href property of the window.location object

This first method is not specific to Gatsby and can be used in pretty much any JavaScript application in the browser. See, browser is the key word here.

Let's say you are building one of those sharing components with an input field that must contain the URL of the current page. Here's how you might do that:

```jsx
import React from 'react';

const Foo = () => {
  const url = typeof window !== 'undefined' ? window.location.href : '';

  return (
    <input type="text" readOnly="readonly" value={url} />
  );
};

export default Foo;
```

If the window object exists, we get the href property of the location object that is a child of the window. If not, we give the url variable an empty string value. If we do it without the check and write it like this:

```jsx
const url = window.location.href;
```

…the build will fail with an error that looks something like this:

```
failed Building static HTML for pages - 2.431s
ERROR #95312

"window" is not available during server-side rendering.
```

As I mentioned earlier, this happens because the browser doesn't exist during the build time. That's a huge disadvantage of this method. You can't use it if you need the URL to be present on the static version of the page.

But there is a big advantage as well! You can access the window object from a component that is nested deep inside other components. In other words, you don't have to drill the URL prop from parent components.

Method 2: Get the href property of location data from props

Every page and template component in Gatsby has a location prop that contains information about the current page.
However, unlike window.location, this prop is present on all pages. Quoting the Gatsby docs:

The great thing is you can expect the location prop to be available to you on every page.

But there may be a catch here. If you are new to Gatsby, you’ll log that prop to the console, and notice that it looks pretty much identical to window.location (but it’s not the same thing) and also contains the href attribute. How is this possible? Well, it is not. The href prop is only there during runtime.

The worst thing about this is that using location.href directly without first checking if it exists won’t trigger an error during build time.

All this means that we can rely on the location prop to be on every page, but can’t expect it to have the href property during build time. Be aware of that, and don’t use this method for critical cases where you need the URL to be in the markup on the static version of the page.

So let’s rewrite the previous example using this method:

import React from 'react';

const Page = ({ location }) => {
  const url = location.href ? location.href : '';

  return (
    <input type="text" readOnly="readonly" value={url} />
  );
};

export default Page;

This has to be a top-level page or template component. You can’t just import it anywhere and expect it to work; the location prop will be undefined.

As you can see, this method is pretty similar to the previous one. Use it for cases where the URL is needed only during runtime. But what if you need to have a full URL in the markup of a static page? Let’s move on to the third method.

Method 3: Generate the current page URL with the pathname property from location data

As we discussed at the start of this post, if you need to include the full URL to the static pages, you have to specify the base URL for the website somewhere and somehow get it during build time. I’ll show you how to do that.
As an example, I’ll create a <link rel="canonical" href={url} /> tag in the header. It is important to have the full page URL in it before the page hits the browser. Otherwise, search engines and site scrapers will see the empty href attribute, which is unacceptable.

Here’s the plan:

- Add the siteURL property to siteMetadata in gatsby-config.js.
- Create a static query hook to retrieve siteMetadata in any component.
- Use that hook to get siteURL.
- Combine it with the path of the page and add it to the markup.

Let’s break each step down.

Add the siteURL property to siteMetadata in gatsby-config.js

Gatsby has a configuration file called gatsby-config.js that can be used to store global information about the site inside the siteMetadata object. That works for us, so we’ll add siteURL to that object:

module.exports = {
  siteMetadata: {
    title: 'Dmitry Mayorov',
    description: 'Dmitry is a front-end developer who builds cool sites.',
    author: '@dmtrmrv',
    siteURL: '',
  }
};

Create a static query hook to retrieve siteMetadata in any component

Next, we need a way to use siteMetadata in our components. Luckily, Gatsby has a StaticQuery API that allows us to do just that. You can use the useStaticQuery hook directly inside your components, but I prefer to create a separate file for each static query I use on the website. This makes the code easier to read.

To do that, create a file called use-site-metadata.js inside a hooks folder inside the src folder of your site and copy and paste the following code into it.
import { useStaticQuery, graphql } from 'gatsby';

const useSiteMetadata = () => {
  const { site } = useStaticQuery(
    graphql`
      query {
        site {
          siteMetadata {
            title
            description
            author
            siteURL
          }
        }
      }
    `,
  );
  return site.siteMetadata;
};

export default useSiteMetadata;

Make sure to check that all properties — like title, description, author, and any other properties you have in the siteMetadata object — appear in the GraphQL query.

Use that hook to get siteURL

Here’s the fun part: we get the site URL and use it inside the component.

import React from 'react';
import Helmet from 'react-helmet';
import useSiteMetadata from '../hooks/use-site-metadata';

const Page = ({ location }) => {
  const { siteURL } = useSiteMetadata();
  return (
    <Helmet>
      <link rel="canonical" href={`${siteURL}${location.pathname}`} />
    </Helmet>
  );
};

export default Page;

Let’s break it down. On Line 3, we import the useSiteMetadata hook we created into the component.

import useSiteMetadata from '../hooks/use-site-metadata';

Then, on Line 6, we destructure the data that comes from it, creating the siteURL variable. Now we have the site URL that is available for us during build and runtime. Sweet!

const { siteURL } = useSiteMetadata();

Combine the site URL with the path of the page and add it to the markup

Now, remember the location prop from the second method? The great thing about it is that it contains the pathname property during both build and runtime. See where it’s going? All we have to do is combine the two:

`${siteURL}${location.pathname}`

This is probably the most robust solution that will work in the browsers and during production builds. I personally use this method the most.

I’m using React Helmet in this example. If you haven’t heard of it, it’s a tool for rendering the head section in React applications. Darrell Hoffman wrote up a nice explanation of it here on CSS-Tricks.
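One detail worth noting: `${siteURL}${location.pathname}` is plain string concatenation, so a trailing slash saved in siteMetadata would produce a double slash in the canonical URL. A tiny helper of my own (not part of Gatsby or this article’s code) normalizes the join:

```javascript
// Hypothetical helper (not a Gatsby API): joins the configured site URL
// with a page path, collapsing any extra slashes at the seam.
function buildCanonicalUrl(siteURL, pathname) {
  return `${siteURL.replace(/\/+$/, '')}/${pathname.replace(/^\/+/, '')}`;
}
```

With it, buildCanonicalUrl(siteURL, location.pathname) yields the same result whether or not siteURL was entered with a trailing slash.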
Method 4: Generate the current page URL on the server side

What?! Did you just say server? Isn’t Gatsby a static site generator? Yes, I did say server. But it’s not a server in the traditional sense.

As we already know, Gatsby generates (i.e. server renders) static pages during build time. That’s where the name comes from. What’s great about that is that we can hook into that process using multiple APIs that Gatsby already provides.

The API that interests us the most is called onRenderBody. Most of the time, it is used to inject custom scripts and styles to the page. But what’s exciting about this (and other server-side APIs) is that it has a pathname parameter. This means we can generate the current page URL “on the server.”

I wouldn’t personally use this method to add meta tags to the head section because the third method we looked at is more suitable for that. But for the sake of example, let me show you how you could add the canonical link to the site using onRenderBody.

To use any server-side API, you need to write the code in a file called gatsby-ssr.js that is located in the root folder of your site. To add the link to the head section, you would write something like this:

const React = require('react');
const config = require('./gatsby-config');

exports.onRenderBody = ({ pathname, setHeadComponents }) => {
  setHeadComponents([
    <link rel="canonical" href={`${config.siteMetadata.siteURL}${pathname}`} />,
  ]);
};

Let’s break this code bit by bit. We require React on Line 1. It is necessary to make the JSX syntax work. Then, on Line 2, we pull data from the gatsby-config.js file into a config variable. Next, we call the setHeadComponents method inside onRenderBody and pass it an array of components to add to the site header. In our case, it’s just one link tag.
And for the href attribute of the link itself, we combine the siteURL and the pathname:

`${config.siteMetadata.siteURL}${pathname}`

Like I said earlier, this is probably not the go-to method for adding tags to the head section, but it is good to know that Gatsby has server-side APIs that make it possible to generate a URL for any given page during the server rendering stage. If you want to learn more about server-side rendering with Gatsby, I encourage you to read their official documentation.

That’s it! As you can see, getting the URL of the current page in Gatsby is not very complicated, especially once you understand the core concepts and know the tools that are available to use. If you know other methods, please let me know in the comments!

Jesus that’s a lot of JS just to access the page url. This is literally just {{ page.url }} in most other SSGs.

Hey Max! Yeah, I hear you, it looks like a lot of code for such a simple task. But in reality, you use a variation of `${siteURL}${location.pathname}` most of the time. It’s just a matter of understanding where those pieces are coming from.

Hi, thanks a lot. I’ve been struggling with that and ended up with the same solution as your #3. Especially since I need to set siteMetadata.siteUrl anyway in order to use the gatsby-plugin-sitemap.

Thanks, Paulina! Yeah, that’s usually my go-to method. I’m using siteMetadata for quite a few things myself!

tks guy!

thanks for post, such a nice post…

Gatsby uses React Router under the hood, so you could also use the Location render prop.

Hi, I was following the 3rd method, but it didn’t work for me. Using this line: const url = `${site.siteMetadata.siteURL}${location.pathname}` it worked on local, but on Netlify it says: error “location” is not available during server side rendering. So it seems that it doesn’t always work. Can’t we use ${location.origin}${location.pathname}?

Hey all, figured out the shorthand, this is passing a bool to my navbar.
Love you all:

<Navbar isIndex={props => (props.location.pathname === "/" ? true : false)} />

In case you’re expecting #4 to work locally, it won’t. See here: If you deploy to Netlify or something, it should be fine if you visit a page directly or navigate and refresh.
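The location.origin idea raised in the comments runs into the same constraint as Method 1: origin only exists in the browser. The typeof window guard used throughout the article can be factored into a standalone helper (the helper and its name are mine, a sketch):

```javascript
// Hypothetical helper: returns window.location.href in the browser and a
// fallback value during Gatsby's build-time server rendering, where no
// `window` object exists.
function getCurrentUrl(fallback = '') {
  return typeof window !== 'undefined' ? window.location.href : fallback;
}
```

Calling getCurrentUrl() during a build simply returns the fallback instead of crashing the static HTML generation.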
https://css-tricks.com/how-to-the-get-current-page-url-in-gatsby/
Re: What purpose does __NO_ISOCEXT serve?
Earnie <earnie@...>
2015-01-05 23:04:21 GMT

> -----Original Message-----
> From: Earnie
> Sent: Monday, December 22, 2014 4:03 PM
> To: 'MinGW Devlopers Discussion List'
> Subject: RE: [MinGW-dvlpr] What purpose does __NO_ISOCEXT serve?
>
> > -----Original Message-----
> > From: Keith Marshall
> >
> > IMO, it is an aberration. AFAICT, it is another mechanism, in
> > addition to __STRICT_ANSI__, for suppressing declarations which are
> > not expected to be specified in the ISO-C namespace. In the mingwrt
> > headers, I see several instances where non-ISO-C declarations are
> > conditionalized on !__NO_ISOCEXT, but many more where
> > !__STRICT_ANSI__ is used (abused?) for the same purpose; there seems
> > to be no rational explanation for the particular choice made, in
> > each instance.
> >
> > I'd like to get rid of this aberration. Unfortunately, there seems
> > to be an abundance of anecdotal evidence, on the internet, for it
> > having leaked into user space, (in spite of the double underscore
> > prefix, which *should* mark it as reserved for internal use by the
> > runtime implementation). Thus, I propose folding it into a _mingw.h
> > feature test
http://blog.gmane.org/gmane.comp.gnu.mingw.devel
Is there a way to create a routable form? One that each department head could fill out their section and then route to IT.

- Chris Gregory, asked on February 06, 2012 at 06:10 PM

- abajan (JotForm Support), answered on February 07, 2012 at 05:50 AM

That can be done using a two-form system where just one department head submits the first form. Upon doing so, all of the heads (including the one who submitted the form) will receive a message in their inbox with a link that, when clicked, will load the second (main) form. The main form would be multi-paged and ideally, each page would contain the fields to be completed by a particular head. In other words, if the department heads are Mary, John, Linda and Ramona, the first page would have the fields pertaining only to Mary; the second, John; the third, Linda; and the fourth, Ramona. Upon completing their page, each head would need to click either the "Next" or "Back" button to save the entries. Since their entries would be saved, if the form's pages contain several fields and the heads are very busy people whose time is at a premium, they could always partially complete their page and later return (via the link) to complete it. Once all of the department heads confirm amongst themselves that their pages are complete, the head elected to submit the form (not necessarily Ramona) would submit it.

If this sounds like the sort of setup you're looking for, please see the following JotForm User Guide articles:

Send Form Emails to Multiple Recipients
How to Save Forms to Continue Later

Two points regarding the second article:

* In the first form you would substitute a notification for the autoresponder and enter the email addresses of the heads into the "Recipient E-mail" field, separated by commas, as described in the first article.
* To avoid interception by entities not supposed to be privy to the information being submitted, in Step 10 (construction of the link) it is safer to use the tag for the Unique ID than the email address for the session variable. (A person knowing the email address of the department head who filled out the first form and the URL of the second form could just enter that URL appended with ?session=emailAddressOfDepartmentHead@example.com to view all of the entries in the form! It would be much harder to figure out the unique ID of the submission.) Finally, the "Recipient E-mail" box in the main form's notification should contain the email address (or comma separated addresses, as the case may be) of the IT department to which that form is to be sent. If you need clarification on anything, do let us know. -
https://www.jotform.com/answers/73381-Is-there-a-way-to-create-a-routable-form-One-that-each-department-head-could-fill-out-their-section-and-then-route-to-IT-
Need advice on my WSA TCP functions [C]

matwachich posted a topic in AutoIt Technical Discussion

Hi everybody! I wanted to learn winsockets, and for this, I tried to reproduce AutoIt's TCP functions. My code is working pretty well! I just want somebody, an AutoIt dev, to look at my code and tell me if I'm doing something wrong, or if something needs to be done differently. Another problem is that when I close a socket on one side of a connection, the recv function on the other side doesn't detect the close action! (like TCPRecv would return an @error) Thanks in advance! (I compile it with GCC 4.x) Here is the code:

#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN
#endif
#defin
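The poster's WSA code is cut off above, but the close-detection behavior being asked about is the same across sockets APIs: on an orderly shutdown, recv() returns 0 once pending data has been drained. A POSIX sketch of mine (not the poster's code; WSA's recv() on Windows reports a graceful close the same way):

```c
#include <assert.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns the result of the final recv(): 0 means the peer's close
 * was detected. A socketpair stands in for a real TCP connection. */
static int recv_after_peer_close(void) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
        return -1;

    /* The "remote" side sends two bytes, then closes its socket. */
    if (send(fds[0], "hi", 2, 0) != 2)
        return -1;
    close(fds[0]);

    char buf[16];
    /* First recv() drains the pending "hi"... */
    if (recv(fds[1], buf, sizeof buf, 0) != 2)
        return -1;
    /* ...and the next recv() reports the orderly close by returning 0. */
    ssize_t n = recv(fds[1], buf, sizeof buf, 0);
    close(fds[1]);
    return (int)n;
}
```

So a TCPRecv-style wrapper can treat a 0 return from recv() as its "@error: connection closed" signal.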
https://www.autoitscript.com/forum/tags/wsa/
I have the following method where I select all the ids from a table, append them to a list, and return that list. But when I execute this code I end up getting a "tuple indices must be integers..." error. I have attached the error and the printout along with my method:

def questionIds(con):
    print 'getting all the question ids'
    cur = con.cursor()
    qIds = []
    getQuestionId = "SELECT question_id from questions_new"
    try:
        cur.execute(getQuestionId)
        for row in cur.fetchall():
            print 'printing row'
            print row
            qIds.append(str(row['question_id']))
    except Exception, e:
        traceback.print_exc()
    return qIds

Database version : 5.5.10
getting all the question ids
printing row
(u'20090225230048AAnhStI',)
Traceback (most recent call last):
  File "YahooAnswerScraper.py", line 76, in questionIds
    qIds.append(str(row['question_id'][0]))
TypeError: tuple indices must be integers, not str

The python standard mysql library returns tuples from cursor.fetchall. To get at the question_id field you'd use row[0], not row['question_id']. The fields come out in the same order that they appear in the select statement.

A decent way to extract multiple fields is something like:

cursor.execute("select question_id, foo, bar from questions")
for row in cursor.fetchall():
    question_id, foo, bar = row
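The tuple-vs-key distinction in the answer can be seen without a database at all; the stand-in rows below are mine, shaped like the question's fetchall() output:

```python
# Stand-in for cursor.fetchall() with MySQLdb's default cursor:
# each row is a plain tuple, ordered like the SELECT column list.
rows = [("20090225230048AAnhStI",), ("20090226010203BBxYzAb",)]

q_ids = []
for row in rows:
    # Tuples are indexed by position, so the id is row[0];
    # row["question_id"] raises "tuple indices must be integers".
    q_ids.append(str(row[0]))

print(q_ids)  # ['20090225230048AAnhStI', '20090226010203BBxYzAb']
```

If name-based access is preferred, MySQLdb also provides MySQLdb.cursors.DictCursor (passed to con.cursor()), which returns each row as a dictionary so that row['question_id'] works.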
https://codedump.io/share/JwaO3lkpCONj/1/python-tuple-indices-must-be-integers-not-str-when-selecting-from-mysql-table
Standard C malloc functionality.

#include <sys/cdefs.h>
#include <arch/types.h>

Go to the source code of this file.

This implements standard C heap allocation, deallocation, and stats.

calloc — allocate memory on the heap and initialize it to 0. This allocates a chunk of memory of size * nmemb; in other words, an array with nmemb elements of size size.

free — releases memory that was previously allocated. Frees the memory space that had previously been allocated by malloc or calloc.

mallopt — sets tunable parameters for malloc-related options.

malloc — allocate memory. This allocates the specified number of bytes on the heap. This memory is not freed automatically if the returned pointer goes out of scope; thus you must call free to reclaim the used space when finished.

KOS-specific calls: debug functions, only available with KM_DBG.

memalign — allocate aligned memory. Memory of size is allocated with the address being a multiple of alignment.

realloc — changes the size of previously allocated memory. The size of ptr is changed to size. If data has already been placed in the memory area at that location, it's preserved up to size. If size is larger than the previously allocated memory, the new area will be uninitialized.

valloc — allocates memory aligned to the system page size. Memory is allocated of size and is aligned to the system page size. This ends up basically being: memalign(PAGESIZE, size).
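The behaviors described above — calloc zero-filling, realloc preserving existing data up to the old size, free reclaiming the block — can be checked with a short hosted-C program; this exercises the standard calls, not the KOS-specific memalign/valloc variants:

```c
#include <assert.h>
#include <stdlib.h>

/* Returns 0 when the documented allocation behaviors hold. */
static int alloc_demo(void) {
    /* calloc: an array of 4 ints, guaranteed to be zero-initialized. */
    int *a = calloc(4, sizeof *a);
    if (a == NULL)
        return -1;
    for (int i = 0; i < 4; i++)
        if (a[i] != 0)
            return -1;

    a[0] = 42;

    /* realloc: grow to 8 ints; data already placed in the memory area
     * is preserved up to the old size, the new tail is uninitialized. */
    int *b = realloc(a, 8 * sizeof *b);
    if (b == NULL) {
        free(a);
        return -1;
    }
    if (b[0] != 42) {
        free(b);
        return -1;
    }

    /* free: reclaim the space when finished. */
    free(b);
    return 0;
}
```
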
http://cadcdev.sourceforge.net/docs/kos-2.0.0/malloc_8h.html
I am aware that some connoisseurs of single-malt scotches can discourse endlessly on the virtues of one scotch versus another. But as someone who rarely drinks alcohol, I have trouble fully understanding the mindset of these enthusiasts. I feel almost the same way about Lisp and Scheme programming. I can tell that it is an area filled with sophistication and intelligence, but somehow both the Polish (prefix) notation and endless parentheses, and the fervent semantic eschewal of a distinction between code and data, continue to feel alien to me. Nonetheless, I have enough of a fascination that I want to see how these languages approach XML processing.

Among Lisp/Scheme enthusiasts, the starting point for the SSAX library for Scheme is the observation that XML is semantically almost identical to the nested list-oriented data structures native to Lisp-like languages. Anything you can represent in XML can be straightforwardly represented as SXML -- Scheme lists nesting the same data as the original XML. Moreover, Scheme comes with a rich library of list and tree manipulation functions, and a history of contemplating manipulation of those very structures. A natural fit, perhaps.

A good first step is to take a look at SXML in its concrete form. Trees are the underlying abstraction -- the Infoset -- of XML; but the abstract information takes a specific semantic form. For example, the following is a starkly reduced (but still well-formed) version of another article I wrote recently:

Listing 1. An XML document with most XML features

Transformed to SXML, this article looks like:

Listing 2. An SXML document with most XML features

You'll find a number of interesting differences between XML and SXML, but also an obvious and direct correspondence between any given aspect of the two. Some of the differences are relatively trivial -- parentheses instead of angle brackets, for example -- while others are ambivalent.
For an example of the latter, SSAX's creator, Oleg Kiselyov, thinks the reduction of closing tags to closing parentheses makes the syntax more concise (see Resources). However, the designers of XML went out of their way to remove the tag reduction options in SGML. Perhaps they were wrong, but explicit closing tags are not there because their possibility was overlooked.

In a number of ways, however, SXML corrects some awkwardness in XML, without sacrificing information. In particular, the distinction between attributes and element contents feels arbitrary to most XML developers, and SXML removes it in an elegant way. An attribute collection is simply another nested list, but one that happens to start with the name @ -- a name that is conveniently prohibited in XML identifiers. Effectively, an SXML document is like an XML document that eschews attributes, but sometimes nests a <@> child inside other elements. Referring to children that happen to be named @ in SXML is no different than the filtering on any other tag name. It's interesting to note that both my gnosis.xml.objectify and RELAX NG stylesheets attempt a similar homogenization of attributes and elements.

Processing instructions and comments are also reduced to special tag names that are not available in XML: PI for the former, COMMENT for the latter. As with most of Lisp, just one basic data structure represents everything.

Namespaces are also interesting in the SXML format. The full namespace URI simply becomes part of the tagname when SXML is generated as above. However, an optional NAMESPACES tag can be used to abbreviate namespace references in essentially the same way as in XML, but to utilize this you need to either write SXML by hand or enhance the conversion utility.

Working with the SSAX library

From the description above, SXML seems like just another shortcut notation for XML, of which there are many (such as PYX, SOX, SLiP, and XSLTXT).
The difference is that SXML is not merely (arguably) simpler to read and edit, but is already itself code in Scheme. No special parsers for the format are needed or even relevant. As a first example of working with the SSAX library, take a look at the application xml2sxml that was utilized above:

Listing 3. xml2sxml conversion script

Not too much to it, is there? Of course, this relies on a collection of load functions that I put into my convenience module:

Listing 4. sxml-all.scm support file

I may not need all of these load functions every time, but this loads the complete collection of SSAX functions. The last line is an oddity: for whatever reason, the function port? that is used by SSAX is not available in the version of Guile that I installed on Mac OSX using fink. However, the definition I added comes straight out of the manual for Guile. I'm assuming that a different Scheme system would not have this same issue.

The data structure produced by the function SSAX:XML->SXML is a regular list that you can work with using all of the usual Scheme functions. For example:

Listing 5. Navigating an SXML structure

While an SXML representation is just a tree that can be manipulated and traversed with general Scheme techniques, the SSAX library provides a handy macro called SSAX:make-parser that works in a manner similar to the SAX API in other programming languages. A number of tree-walking optimizations are built into this macro, giving you linear -- O(N) in Big-O notation -- efficiencies in processing a given SXML structure; a naive approach could easily use more memory or CPU time. (See Resources for more on Big-O.)

Unlike the actual SAX API that you might have used in languages like C, C++, Python, or Perl, SSAX walks a tree rather than scanning a linear bytestream -- that is, SAX or expat simply look for opening and closing tags as events, and call back to the relevant handlers.
If you want to keep track of the relative nesting and context in which a tag occurs, you need to maintain your own stack, or other data structure. In SSAX, by contrast, every node descends from a parent, passing and returning a seed. Of course, this seed is itself essentially a data structure that you can modify in each NEW-LEVEL-SEED and FINISH-ELEMENT handler -- but at least it's local rather than global.

To show you how SSAX works, I have enhanced an outline example that is available on the CVS directory for the SSAX library (see Resources). I'll demonstrate how to display attributes and (abbreviated) CDATA content. This will take you most of the way toward writing an sxml2xml utility -- one that oddly is not distributed as part of SSAX, not even as a direct function or macro. However, I'll skip handling proper escaping, processing instructions, and a few other aspects.

Listing 6. Outline conversion script

The basics are a lot like a SAX class. The outline function is generated with the SSAX:make-parser macro, which allows the definition of several event types. The main ones are entering and leaving an element, and getting character data. A couple support functions help with the process.

The seed used in outline is quite simple; it is just a string that gets longer as deeper branches of the tree are reached. Of course, you could pass around a whole list of encountered tags -- such as for an XPath-like analysis of what to do with a node. The CDATA handler simply checks whether there is enough CDATA to bother displaying (at least 30 characters, arbitrarily chosen), and then displays it at the same indent as the current element.

The NEW-LEVEL-SEED handler demonstrates a couple of interesting aspects, mostly in the two support functions it employs. Not every tag is a simple symbol in the SXML structure; specifically, a namespace-qualified tag is a pair instead.
The function tag->string checks a tag's type, and only displays the local part of the name -- not the namespace. You could take another approach, but this demonstrates the test needed. The function format-attrs is probably more an example of generic recursive programming in Scheme than it is specific to SSAX. Still, tags can have zero, one, or several attributes, and you need to return a string for each case. A real Scheme programmer could probably point out an even more concise way to do this -- I welcome comments in the discussion forum for this column.

Now I'll take a look at the output, given the earlier XML document. By the way, warnings are generated for the unprocessed PIs, so I redirect STDERR to ignore those:

Listing 7. An outline display of example.xml

In addition to its equivalents for SAX and DOM (the native Scheme nested lists), SSAX comes with its own SXPath and SXSLT components. I don't have room here for an extensive discussion of these, but they are worth mentioning briefly. Unfortunately, the SXPath and SXSLT functions and macros discussed in Oleg Kiselyov's document "XML, XPath, XSLT implementations as SXML, SXPath and SXSLT" (see Resources) are not included in the SSAX distribution, at least not the one for Guile (other distributions are available for a number of Scheme systems). What can be easily downloaded only works with a few Scheme systems, and versioning is unclear.

Based just on the document mentioned, SXPath works much like XPath. For example, either of the following expressions expands to a selection function:

Listing 8. SXPath expressions, native and textual

These undergo macro expansion to:

Listing 9. Full path selector function

Of course, playing with this expanded form allows programming arbitrary calculations inside an XPath-like selector -- anything you can write in Scheme. SXSLT is similar in concept. Stylesheets are written in a Scheme form which is semantically similar to XSLT.
But much as with the flexibility of HaXml, within any particular transformation rule, you can embed arbitrary extra code. Particular XSLT engines, of course, often come with foreign-function APIs to write extra capabilities in JavaScript, VB, or other languages. But with SXSLT, the custom functions are written in the very same Scheme language as the transformation stylesheet elements.

I like the SSAX library quite a bit, and I suspect I will like it more as I become more comfortable with Lisp/Scheme programming. It shares many of the advantages of other native XML libraries I have written about in other installments: gnosis.xml.objectify, REXML, XML::Grove, HaXml, and so on. A lot can be said for making XML into just another data object in whatever programming language you use.

That said, SSAX has a lot of rough edges. It's hard to figure out what to download, and what Scheme systems each part is available for. The documentation is somewhat inconsistent and incomplete -- most of the documents are academic in focus, and do more to discuss abstract goals and concepts than concrete usage and APIs. As a demonstration of what is possible in Scheme, using functional techniques, these papers are interesting; but it would be nice to have something that's easy to install and use, and just works.

Resources

- Participate in the discussion forum.
- Check out the homepage for SXML, which offers a number of overlapping documents, mostly written by Oleg Kiselyov.
- Download the SSAX library for various Scheme systems.
- Browse the most current SSAX files, including some tests and examples not included in the distribution, on SourceForge.net.
- Find the SXPath extension, but unfortunately not for Guile.
- Find out more about the XML Information Set (Infoset), a W3C Recommendation that specifies the information content of XML documents -- meaning, which features of a concrete document carry information, and which are incidental.
- Learn more about Big-O notation in this glossary by David Mertz.
- Take a look at the GNU project's Guile, the version of Scheme used for this article. Other versions, both commercial and free, exist as well, but Guile seems widespread -- and it is the version that fink will install under Mac OSX.
- Read "Transcending the limits of DOM, SAX, and XSLT," a previous installment of this column that discusses the XML library HaXml for the functional programming language Haskell. While Haskell is purer in its functional programming paradigm, many common motivations and designs went into HaXml and SXML (developerWorks, October 2001).
- Review the alternative syntaxes for XML in this "XML Watch" column by Edd Dumbill (developerWorks, October 2002).
- Get a nice visual summary of syntax variations of near-XML languages at Scott Sweeney's site.
- IBM's DB2 database provides relational database storage, plus pureXML to quickly serve data and reduce your work in the management of XML data. Visit the DB2 Developer Domain to learn more about DB2.
http://www.ibm.com/developerworks/linux/library/x-matters31.html
Updating ReddHub for Windows 8.1

If you’ve ever looked for a Reddit app in the Windows Store, you’ve probably come across Reddit on ReddHub. ReddHub is one of the most popular Reddit readers for Windows 8 and 8.1. The app brings the full world of Reddit to Windows with a user experience that works great on any form factor.

ReddHub was created by Feras Moussa, a former Microsoft employee who now works for a new startup. In his spare time he’s been developing apps, like ReddHub. With the release of Windows 8.1, Feras has updated the app to take advantage of several new features of the OS. To find out more about what it was like to update ReddHub from Windows 8 to 8.1, I sat down with Feras to ask him a few questions about his experience.

Jake Sabulsky: To start off, could you give us a quick overview of your app? What’s the high-level architecture/design you’re using for the app today?

Feras Moussa: I’m a frequent user of Reddit. I love being able to stay up to date on topics that interest me, and communicate on a range of topics with the vast community. Once the Windows 8 store opened up, I decided to build a Reddit app to create an awesome Reddit experience for Windows. Reddit is a site where people share links, questions, and comments on specific topics that interest them… anything from popular interviews with people such as The President of the United States, to entertaining pictures of pets. Given the vastness of content types in Reddit, there was a huge opportunity to build a great app and innovate on key areas to give users a compelling product. Things such as running in the background and providing instant alerts for messages, updating live tiles for displaying new content, integrating YouTube videos directly into the app, and leveraging third-party APIs for a richer experience for common types of content, to name a few examples.
I built ReddHub using XAML and C#, but one of the secrets to the success of ReddHub over competing apps is that in addition to XAML and C#, a large part of ReddHub was built using JavaScript and HTML inside the WebView. This allowed ReddHub to excel when displaying comments generated by users, because it was able to render the full fidelity of the comments, which Reddit likes to provide as HTML. Given this, I like to say ReddHub has multiple personalities – one is the world of XAML and C#, where ReddHub calls into native Windows APIs for many of the system-provided services such as file and network access. The other is the world of JavaScript/HTML, where ReddHub renders and provides the full commenting experience, and communicates back and forth with the C#. JS: You mention that you use the WebView control in several places within your app. In Windows 8.1 we’ve made some significant improvements to the control. What was your experience like working with the new WebView? FM: Let’s be honest. The old WebView needed some improvement. The good news is the new WebView is nothing short of amazing, and it has made a huge jump forward in terms of both ease-of-use and performance. Given how much ReddHub leverages WebView, it was the feature I was most excited about using in 8.1. The first and probably most important thing a developer will notice about the new WebView is that you no longer have to work around the z-ordering issues the previous one had. This means that it is much easier to render UI on top of the WebView, or even animate it and provide a richer experience to users. Additionally, the WebView and other XAML content now handle input the same – which means users no longer have to ‘click into’ comments first to transfer focus over, which was a big complaint in ReddHub. The above two features alone allowed me to remove a tremendous amount of code, resulting in less bloat and fewer visible bugs.
I was also happy to see that the removal of the excess code seems to have indirectly fixed a slight memory-leak issue that I was noticing in ReddHub. Additionally, I was able to provide a much richer navigation experience in ReddHub. Reddit is a very link-rich experience, where people frequently share links to popular content. For content that ReddHub can’t do something ‘smarter’ for, it will fall back to loading it directly in the WebView as a webpage. The new update now has support for many of the properties exposed on the new WebView, such as CanGoBack and CanGoForward, as well as their GoBack() and GoForward() counterparts. For example, previously in ReddHub, to navigate back (despite not being able to accurately light up the button if there is somewhere to navigate back to), I had to have the following code:

webView.InvokeScriptAsync("eval", new [] { "history.go(-1)" })

The above is a bit cumbersome, not guaranteed to always work, and it was hard to realize it was even possible to navigate that way. Instead, I can do this very trivially and have it behave the same way clicking back in a browser works, with the following code:

webView.GoBack()

This is a much cleaner, more secure, and more reliable approach than the previous invokeScript tricks that were required. JS: It’s great to hear that the new WebView control made such a big impact in your app. As you were upgrading your app did you end up using any of the other updated or new XAML controls in the 8.1 platform? FM: Yes, definitely. I also added support for a few of the other controls offered in Windows 8.1. ReddHub actually leverages a popular Windows 8 library, Callisto, for many of the Windows-like controls, such as the SettingsFlyout and Flyout controls. It was great to see these become native controls in 8.1, and I’ve begun moving ReddHub over to them, because they provide more functionality and animate smoother.
While switching over though, I did encounter one time-consuming issue – a namespace collision, because both Callisto and Windows used the same exact control name. This was easily remedied, but required making some changes across most of ReddHub to fully qualify the namespace I wanted to use [Editor’s note: There’s now a migration guide available that can help with this]. JS: Now, moving on from controls, one of the big changes for apps running in Windows 8.1 is that the user can resize the app’s window fluidly down to 520px wide or optionally all the way down to 320px. This is different from before where apps were either resized at 320px or filled to at least 1024px. How did you account for this new behavior in your app? FM: For starters, ReddHub on Windows 8 had a very popular experience when resized to 320px. When content was selected, it would launch the default browser and navigate to that page. This is a feature many people really enjoy, because it allowed them to monitor Reddit while doing other things, and occasionally view the content. So for Windows 8.1, I knew I wanted to keep that experience and opted to keep 320px support in my manifest [Editor’s note: check out the Dev Center for more details on how to do this in your apps]. I also took some time to make sure the content in the app expands automatically to support whatever larger size the user puts it at. Below is a screenshot of both the resized view, as well as a non-full-screen view. ReddHub supporting the opted-in 320px view, with links launching in the default browser. ReddHub scaling to fit in the provided screen space. Moving forward, I plan to sit down and create different experiences for some key sizes, similar to how some sites offer different experiences depending on the resolution. JS: Another piece of your app that now has more flexible sizes is your tile, correct? How have you leveraged the new large tile size in ReddHub? FM: Yes, ReddHub for 8.1 supports the new large tile size.
ReddHub has lots of content, and, well, users love to see more content on their tiles. Adding support for large tiles was actually the very first 8.1-related feature request I got once people began to use the Windows 8.1 Preview. For the large tile support, I decided to use the tile template from the sample, which allowed me to provide 3 titles and images, essentially showing 3x more content than before. Implementing large tiles did, however, take a bit longer than I expected. This was because I had to re-architect a few pieces: on Windows 8 my code was set up to always provide only one item at a time, whereas the large tile now required providing a full tile at once – meaning I have to generate the content for a small, medium, and large tile at the same time before I hand off to the API. It took a bit of tuning, but it now works as expected. To help clarify this, before in my code, I had a list of items I would iterate through, and create one tile per item:

for (int i = 0; i < resultSet.Count; i++)
{
    var mediumTile = createMediumTile(resultSet[i]);
    var smallTile = createSmallTile(resultSet[i]);
    mediumTile.Square150x150Content = smallTile;
    // remaining tile update code
}

With the large tile in 8.1, it was a bit trickier because I had to sort out how to create a large tile with 3 items, then create small and medium tiles with one item each. The solution I chose was to create them independently, then combine them. The code looks like this:

// first we create the big tiles
List<ITileSquare310x310SmallImagesAndTextList02> listOfBigTiles = new List<ITileSquare310x310SmallImagesAndTextList02>();
for (int i = 0; i < resultSet.children.Count; i++)
{
    var largeTile = createLargeTile();
    // we add 3 at a time
    largeTile.Image1.Src = resultSet[i]....
    i++;
    largeTile.Image2.Src = resultSet[i]....
    i++;
    largeTile.Image3.Src = resultSet[i]....
    //.....
    listOfBigTiles.Add(largeTile);
}

// now let's create the small tiles, and add them to the big tiles
for (int i = 0; i < 5; i++)
{
    var mediumTile = createMediumTile(resultSet[i]);
    var smallTile = createSmallTile(resultSet[i]);
    mediumTile.Square150x150Content = smallTile;
    listOfBigTiles[i].Wide310x150Content = mediumTile;
    // remaining tile update code
}

It was a bit trickier and I would have preferred if the API let me create them independently, but it was a manageable problem to work around. JS: Switching gears to a new topic, from using your app on Windows 8, I know that you leveraged the Search charm for in-app searching. In Windows 8.1 our guidance for search has changed to make in-app search more direct and allow users to leverage the Search charm for global search scenarios. As you updated Reddhub, how did this new guidance affect how you thought about search in the app? FM: Yes, I went through ReddHub and took out the old search integration, and added support for the new approach. I did like how the new control is actually a rich control, and automatically had support for remembering previous searches, which was a feature some users had requested. I’ve already done the initial step of moving over to it, and now that search is integrated into the app, I can begin to offer a richer search experience moving forward, such as better result filtering. Below is a sample screenshot of the new integrated search box in 8.1, remembering history and opening the door for ReddHub to provide a rich experience. JS: Any other new features in Windows 8.1 that you were really excited about as you made your update? FM: There were definitely changes throughout ReddHub, and I did spend some time watching a few of the Build 2013 talks. One really cool feature that I did come across and added support for was the ContentPrefetcher. ReddHub is very heavy on network requests, and it’s something users wait for frequently.
When I found out about the prefetcher it seemed like a perfect fit for ReddHub, because the specific thing the API is targeting to improve is exactly what ReddHub does – it shows a loading progress bar at launch that is dependent on a network request. The prefetcher API allows the app to tell it URLs the app will request when launched. Then the prefetcher tries to be smart and has already made the network request prior to subsequent launches. That way the content is cached and ready for the app. I liked the concept of it, and added support. I’m eager to see if users notice an improvement [Editor’s note: You can find more info on the ContentPrefetcher class on the Dev Center and the Build 2013 talks on Channel 9]. JS: Ok, we’ve spent some time talking about the changes you made and the new features you used, but what about the update experience itself? What was your process for taking your Windows 8 version and moving it forward to Windows 8.1? FM: At first I created a backup of my Windows 8 code, so I had it while I updated to 8.1. I then began to make changes and commit them as I made them. (I use the integrated TFS for my code repository, which works great) [Editor’s note: You can find more info on TFS and sign up for free on the Visual Studio site]. Once I neared completion and poked around the Windows Store Dashboard, I noticed that I could essentially maintain ‘one app’ unit, but provide a Windows 8 package as well as an 8.1 package. This made sense, because it allows me to continue to update/fix issues in the Windows 8 version, while also adding changes to the 8.1 version. So after realizing this, I went back to my repository and forked my code entirely for a Windows 8 version, in addition to the one I moved forward. Moral of the story – feel comfortable forking your code branch to have both a Windows 8 and Windows 8.1 version. After that, it’s just a very straightforward package and upload to the store for each version. 
JS: One last question for you that I bet a lot of other developers are asking themselves, now that you have updated Reddhub and published it to the new Store, what will you do with the Windows 8 version? It sounds like you’ve already forked your code so you still have the version targeted at Windows 8. What’s your plan for that version going forward? FM: Despite Reddit having a very tech-savvy community, I’ve noticed people still don’t jump to a new version as quickly as I’d like. So I’m planning to focus my time on the 8.1 version, while also fixing any critical bugs that may show up in both versions, and submitting an update to the 8 version as appropriate. This works out well, because it still allows me to make updates to address issues for users as they come up. [Editor’s note: Stay tuned for a full post on this topic.]

Finishing up

A big thanks to Feras for sharing his insights into the app update experience for Windows 8.1. In case you were wondering, the updated version of Reddit on Reddhub is now live on the Windows 8.1 Store. If you’ve upgraded to Windows 8.1 you can head over to the Store and check out the new features that Feras talked about. And if you’re looking for more info on how to update your own apps to Windows 8.1, check out our new 8.1 documentation on the Dev Center. Finally, if any of you out there have a popular app on the Store and would like to talk about your development experience here on the App Builders blog, please reach out to us. We’d love to post about your experiences and share your guidance with other Windows Store app developers. Send us mail at myappbuilderstory@outlook.com. Good luck with your updates! –Jake Sabulsky, Program Manager, Windows Updated November 7, 2014 11:40 pm
http://blogs.windows.com/buildingapps/2013/10/31/updating-reddhub-for-windows-8-1/
Member Since 5 Years Ago

turkalicious left a reply on Installing Laravel Dusk - Failed To Connect On Localhost Port 9515: Connection Refused

Finally got the courage to start the Adam Wathan courses but I am getting this error from the first test video. None of the suggestions here solved it. Fresh Laravel install on fresh Homestead. Installed all the Chrome install suggestions and added no-sandbox, to no avail.

turkalicious started a new conversation Electron Axios Post To Oauth/token CORS Problem

Hello all, I've come across many topics about CORS issues here. I've been very hesitant to open another one myself, but it has been hours and I still have not figured out a way. Desperate times...

axios.post(LOGIN_URL, creds, { headers: { 'Access-Control-Allow-Origin': '*' } })
    .then((response) => {
        localStorage.setItem('id_token', response.data.id_token);
        localStorage.setItem('access_token', response.data.access_token);
        this.user.authenticated = true;
        // Redirect to a specified route
        if (redirect) {
            Router.go(redirect);
        }
    }).catch((err) => {
        context.error = err;
    });

is my post request, and the credentials that I send to it are below:

methods: {
    submit() {
        if (this.$refs.form.validate()) {
            const credentials = new FormData();
            credentials.append('username', this.username);
            credentials.append('password', this.password);
            credentials.append('grant_type', 'password');
            credentials.append('client_id', '5');
            credentials.append('client_secret', 'hvthW9MsfNnDSBqmVlBsb4oTwukPpli0puMZ4QH4');
            credentials.append('scope', '');
            auth.login(this, credentials, 'home-view');
        }
    },
    clear() {
        this.$refs.form.reset();
    },
},

Below is the full error in the Chrome console. I guess Postman is not bound by CORS, because I get my tokens from that.

XMLHttpRequest cannot load. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access.

I greatly appreciate any help. Thank you!
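A note on the error above: Access-Control-Allow-Origin is a response header that the server sends back, so putting it on the axios request does nothing, which is why the preflight still fails (and why Postman, which never sends a preflight, works). The sketch below illustrates the mechanics with a tiny WSGI-style handler; in the asker's case the equivalent headers would come from the Laravel API itself, and the header values here are illustrative rather than the actual fix:

```python
# Minimal WSGI-style app showing where CORS headers actually live: on the
# server's RESPONSE. The browser's preflight is an OPTIONS request and it
# must be answered with Access-Control-Allow-* headers.
def app(environ, start_response):
    cors = [("Access-Control-Allow-Origin", "*")]
    if environ["REQUEST_METHOD"] == "OPTIONS":
        # preflight: advertise what the real request is allowed to do
        cors += [("Access-Control-Allow-Methods", "POST, OPTIONS"),
                 ("Access-Control-Allow-Headers", "Content-Type, Authorization")]
        start_response("204 No Content", cors)
        return [b""]
    # the real POST gets the same Allow-Origin header on its response
    start_response("200 OK", cors + [("Content-Type", "application/json")])
    return [b'{"ok": true}']
```

The same idea applies server-side in Laravel: middleware that attaches these headers to every response, including the OPTIONS preflight, rather than anything on the client.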
turkalicious left a reply on Customizing API Authentication Level

@bobbybouwmann that's awesome. Thanks for sharing. I think I am going to have some sort of "login gate" function that sits up front, does the check based on their auth method (user db vs LDAP) and sends it to whichever login function applies.

turkalicious started a new conversation Customizing API Authentication Level

Hello all, I am moving our existing app from CodeIgniter to a Laravel API. In my Electron app, I have username and password fields. I am able to post to my OAuth login and get password grant tokens from my API using those fields (not email). Some of my existing clients have LDAP as their login system. Currently, when they enter their username and password, our system binds to their LDAP server and validates against those values. How should I go about writing this functionality into my API? I am new to this and I just want to have my Electron app submit to my login on the server with username and password. The server needs to handle LDAP binding when necessary or just validate against my own users table. Should I use JWT instead?

turkalicious started a new conversation Fresh Install Homestead Multiple Issues

Man I love Laravel but holy crap am I gonna donate my wealth to a charity the day I can get a fresh install of Homestead to just work. I tried vagrant ssh and it gave me ssh_exchange_identification: Connection closed by remote host, so I thought oh well, I probably have some local stuff mixed up. I logged in via VirtualBox because I wanted to start something quick instead of spending more time debugging a fresh install. I logged in with vagrant/vagrant, changed the root password, switched to root and ran the Ubuntu update command. The first ever command that I ran was apt-get update && apt-get upgrade -y, the same thing I run at home, and I got /usr/lib/x86_64-linux-gnu/libstdc++.so.6: Invalid ELF header

turkalicious left a reply on Is It Me, Or Is Web Development A Pain In The Butt?
I'd be burnt out too after 10 years. Take a break and see if you miss it or not.

turkalicious started a new conversation Interactions With Encrypted MySQL Database

We recently had to encrypt our data such as name, last name, dob etc using AES. We have a place in our software where a user can search by those attributes. While doing an exact match on the user input to the data works okay, we are not sure how to go with the queries that use LIKE. I would love to see, just from the ground up on the latest version of Laravel, a proper way of creating and interacting with an encrypted database. Encryption is becoming a must, if it has not already, so I think a lot of people will benefit from it.

turkalicious left a reply on Updating All Form Text Fields At Once

@viktorivanov This is great. Thank you.

turkalicious started a new conversation Updating All Form Text Fields At Once

Hello all. I am new to Vue and what I am trying to do is this. I am trying to have, let's say, 3 number input fields. That is the entire page, all right in the middle. It is a measure converter between KM, CM, MM and I want the user to be able to enter a number in any of them and the rest should populate automatically. If the user enters KMs, then CM and MM should populate... and so on and so forth. I thought they would all be computed properties but I could not make it work. Any help would be greatly appreciated. I am just trying to learn by practice. Thx

turkalicious left a reply on Best Practice For Creating Multiple Step Form

turkalicious left a reply on Laravel 5 - Blurry Image After Upload - Intervention

Have you tried saving it as png instead of jpg? Just curious.

turkalicious left a reply on Practical Uses For Lumen?

I may be wrong but I think Lumen is meant for simple REST APIs or such things. There is a benchmark video of how many requests it can handle - which is quite a lot. Here are some examples I found to be quite good. Check out the first two posts.
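As for the KM/CM/MM converter question above, the conversion itself is just scaling by fixed factors; each Vue computed property (or watcher) would call something like the functions below. This is a language-agnostic sketch in Python, with illustrative names:

```python
# 1 km = 100,000 cm = 1,000,000 mm
CM_PER_KM = 100_000
MM_PER_KM = 1_000_000

def km_to_cm(km):
    return km * CM_PER_KM

def km_to_mm(km):
    return km * MM_PER_KM

def cm_to_km(cm):
    return cm / CM_PER_KM

def mm_to_km(mm):
    return mm / MM_PER_KM
```

In Vue, one workable design is to bind all three fields to a single canonical value (say, millimeters) and derive the other two from it; that avoids the update loops that appear when each field tries to recompute the others directly.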
turkalicious left a reply on Add Guzzle To Lumen

@martinbean Ah ... thank you!

turkalicious left a reply on Laravel Excel Installation Issue

Have you done composer dump-autoload?

turkalicious left a reply on Lumen - $request Not Found

turkalicious left a reply on Add Guzzle To Lumen

My instantiation of the class was wrong. I was calling use GuzzleHttp\Client;, which did not error on its own. After that I had the following, as it is in the Guzzle docs:

$client = new GuzzleHttp\Client();

I took out the GuzzleHttp part of the instantiation statement and it worked.

@bashy If you are not gonna help, why even bother? Are you really that in need of attention? Here is a bag of attention for you, buddy ... Enjoy

turkalicious started a new conversation Lumen - $request Not Found

Route file:

$app->get('/api/{zipCode}', [
    'uses' => 'App\Http\Controllers\ApiController@displayInfo'
]);

and the function response:

use Illuminate\Http\Request;

// more stuff in between

// function return
return response()->json(['name' => 'Abigail', 'state' => 'CA'])
    ->setCallback($request->input('callback'));

The return sits in my function and I've set it up just to experiment. The Lumen setup is running on Homestead with no problems - apart from the current issue, haha. I have another app running on my Homestead host machine that is a MEAN stack. I am using Angular's http.jsonp to get the response from my Lumen app. When I visit an example URL on my Lumen app, it returns the following:

Undefined variable: request

turkalicious left a reply on Add Guzzle To Lumen

Got it working. Please close the thread.

turkalicious left a reply on Add Guzzle To Lumen

I thought the use statement in the controller would do that.

turkalicious left a reply on Add Guzzle To Lumen

Fatal error: Class 'App\Http\Controllers\GuzzleHttp\Client' not found in

turkalicious started a new conversation Add Guzzle To Lumen

I am trying to get Guzzle working with Lumen. I added Guzzle through Composer.
composer.json:

```
"autoload": {
    "psr-4": {
        "App\\": "app/",
        "GuzzleHttp\\": "/vendor/guzzlehttp/"
    }
},
```

and my controller:

```
<?php

namespace App\Http\Controllers;

use App\Http\Controllers\Controller;
use GuzzleHttp\Client;

class ApiController extends Controller {
```

When I try to instantiate a new Guzzle, it fails. What am I doing wrong?

turkalicious started a new conversation PDF From Remote Url To JSON

Hello all, I am trying to accomplish the following. Go to a certain URL which always displays an embedded document with a type="application/pdf" attribute. I want to take that PDF from that URL and convert it to JSON on my Laravel installation. I want to then return the JSON result as a reply to the request origin to my server. Is that possible? Are there any other / better ways to do this? Thank you

turkalicious left a reply on Download The Whole Serie

After going into each episode in a given series and downloading each episode, what is stopping me from uploading those to somewhere as a whole series if I wanted to? I am already able to download every single episode here. Some are grouped together and instead of clicking multiple times to download individually, I just want to click once because I am lazy. That's all. But I get the bandwidth charges and I won't download because I want this site to stick around...

turkalicious started a new conversation Download The Whole Serie

Hello you awesome people. If I am not mistaken, the only way to download a video here is within the video's page itself. I am currently moving and do not have internet at my new place yet. I was at work and decided to download the regular expression videos and thought: wish I had a button to download everything at once. Maybe next to the "Watch Later" icon under the series name.

turkalicious started a new conversation Bootstrap Sass

Hello guys, I have set up gulp to read Sass files. What is a good way to customize Bootstrap?
I understand that it is modularized and I would like to overwrite some of those to make my own theme. Within the bootstrap/stylesheets folder, I have made my own main.scss right next to the bootstrap.scss file and gulp is watching it. For example, how can I edit the menu background within my own main.scss ?
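One common answer to the question above is to override Bootstrap's Sass variables before the import, rather than editing the bootstrap partials themselves, since Bootstrap declares them with !default. This is a sketch; the $navbar-default-bg name assumes the bootstrap-sass 3.x port, so check the _variables.scss that ships with your copy:

```scss
// main.scss -- overrides must come BEFORE the import,
// because Bootstrap's own declarations use !default
$navbar-default-bg: #2c3e50;   // menu/navbar background (illustrative color)

@import "bootstrap";

// rules placed after the import also win over Bootstrap's
// at equal specificity, which covers one-off tweaks
```

Keeping overrides in your own file this way means Bootstrap updates never clobber your theme.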
https://laracasts.com/@turkalicious
To make the development of LuxBlend25 easier, this is a step-by-step tutorial which describes the setup of the Eclipse IDE for Python development and the setup of the luxblend25 exporter project in Eclipse to debug within Blender. The steps are done under Windows Vista 64-bit, but it should be straightforward and also doable under Linux and Mac OS X. Blender version 2.59 is used, which has a built-in Python 3.2.

ECLIPSE_INSTALL_PATH = the path where the Eclipse IDE is installed/extracted (e.g. c:\eclipse)
PYTHON32_PATH = the path where Python 3.2 is installed (e.g. c:\python32)
BLENDER259_PATH = the base path of Blender 2.59 (e.g. c:\blender_259)
PYDEV_SOURCE_PATH = the path of the PyDev debugger Python source (this depends on where your Eclipse plugins are installed, e.g. "ECLIPSE_INSTALL_PATH/plugins/org.python.pydev.debug_2.2.2.2011090200/pysrc")

Italic Text means a menu, option, drop down or input field. Bold Text means a selected option or typed-in text.

To use a relatively new build of the Eclipse IDE, I downloaded Eclipse from the website: Eclipse Downloads. In my case I used the Eclipse Classic 3.7 Windows 32-bit version. After downloading, extract the archive to the hard disk at ECLIPSE_INSTALL_PATH. During the first start of Eclipse you are asked for a workspace; you can use the default or select your own path with write access.

PyDev is an Eclipse plugin for Python development which can use the official Python interpreter. Download the Python installation package from Python Download (I used the Python 3.2.2 Windows X86-64 MSI Installer). For Linux or Mac OS X, look for the distribution/vendor packages for Python 3.2. Run the installation and install Python 3.2 to PYTHON32_PATH.

Start your Eclipse IDE and go to Help->Install New Software... In the upcoming dialog click on Add... to add a new installation repository for PyDev. Enter PyDev into the "Name" field and the PyDev update site URL into the "Location" field, then click OK.
Your newly created installation repository is now selected automatically, and after the Pending... you can mark PyDev for installation. Install the plugin by hitting Next > a few times (you have to accept the license and signature of the plugin). At the end you have to restart your Eclipse IDE.

To set up PyDev, go to Window -> Preferences. In the Preferences dialog open PyDev and select Interpreter - Python. Click on New..., enter Python 3.2 into the Interpreter Name and Browse... to the python.exe in your PYTHON32_PATH. Click on OK twice, which should configure the Python interpreter and its paths. Now we have a full Python IDE.

I assume you have the LuxBlend25 source installed or checked out correctly under "BLENDER259_PATH/2.59/scripts/addons/luxrender". Now go to File->New->Project... to open the New Project Wizard. Select PyDev->PyDev Project. Type in luxblend25 as the Project name. Uncheck the Use default option and Browse to the path of luxblend25 (BLENDER259_PATH/2.59/scripts/addons/luxrender). Choose Python as the Project type. Switch Grammar Version to 3.0. Select Default Interpreter. Select Add project directory to PYTHONPATH? Click on Finish.

Now we have to set up some properties on our project: right-click on the project in the PyDev Package Explorer window and select Properties. Select PyDev - PYTHONPATH on the left side and switch to External Libraries on the right side. Now we add several source paths for our Python modules with Add source folder. Click on OK.

To debug the LuxBlend25 Python code, several things must be prepared. Go to Run->External Tools->External Tools Configuration... Right-click on Program and select New to add a new launch configuration. Type in Blender for Name and select the path to the blender executable under Location (e.g. BLENDER259_PATH/blender.exe). Set the Working Directory to BLENDER259_PATH. You can add optional parameters to the blender executable under the Main tab (e.g.
open a testing .blend file directly). Now click on Apply, then on Close. Test this launch configuration by clicking on the Run... toolbar icon (the one with the red toolbox). If you have done it correctly, Blender starts up. You can close Blender for now.

To enable debugging we have to make a small modification to the LuxBlend25 Python code. Maybe later this will be in the source repositories. First we add a new file in the luxblend25 base directory "BLENDER259_PATH/2.59/scripts/addons/luxrender" called luxdebug.py:

# -*- coding: utf8 -*-
#
# ***** BEGIN GPL LICENSE BLOCK *****
#
# --------------------------------------------------------------------------
# Blender 2.5 LuxRender Add-On
# --------------------------------------------------------------------------
#
# Authors:
# Doug Ham
#
# ***** END GPL LICENCE BLOCK *****
#
'''
This file is for debugging
To make the pydevd working, set the directory of pydevd source
'''

import sys

DEBUGGING = False

# set the PYDEV_SOURCE_DIR correctly before using the debugger
PYDEV_SOURCE_DIR = r'X:\eclipse\plugins\org.python.pydev.debug_2.2.2.2011090200\pysrc'

def startdebug():
    if DEBUGGING == True:
        # test if PYDEV_SOURCE_DIR already in sys.path, otherwise append it
        if sys.path.count(PYDEV_SOURCE_DIR) < 1:
            sys.path.append(PYDEV_SOURCE_DIR)
        # import pydevd module
        import pydevd
        # set debugging enabled
        pydevd.settrace(None, True, True, 5678, False, False)

!!! ATTENTION: watch the Python indenting !!!

Set the path in the line PYDEV_SOURCE_DIR = '...' to your PYDEV_SOURCE_PATH.

Now add the following lines after the bl_info struct in BLENDER259_PATH/2.59/scripts/addons/luxrender/__init__.py:

from . import luxdebug
luxdebug.startdebug()

Because the render process itself is started from Blender in a different thread, we also add some code to BLENDER259_PATH/2.59/scripts/addons/luxrender/core/__init__.py.

Before all other imports:

. . .
from .. import luxdebug

At the beginning of the render function:

. . .
def render(self, scene):
    luxdebug.startdebug()
    . . .
To add breakpoints in the code, simply right-click on the vertical grey bar on the left side of the code editor and select Add Breakpoint. Now, after we have set up everything we need, we can start debugging. Go to Window->Open Perspective->Other... and select Debug. Now we have the Debug perspective enabled. In the toolbar there is a tool button for the Python Debug Server (a little P with a bug sign) which says: PyDev: start pydev server. Click on it to start the debug server. In the console you should see:

Debug Server at port: 5678

Now it's time to start Blender (see: Create Blender Launch configuration) and hope we hit our breakpoints. If a breakpoint is hit, we see the stack trace in the Debug window and the variables in the (x)=Variables window.
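The sys.path guard inside startdebug() is the piece worth getting right, because startdebug() may run every time the render function is called. The sketch below isolates just that guard (pydevd itself is left out since it needs the running Eclipse debug server):

```python
import sys

def ensure_on_path(path):
    # mirrors luxdebug.py: append the PyDev source dir only once,
    # no matter how often the debug hook runs
    if sys.path.count(path) < 1:
        sys.path.append(path)
```

Calling it repeatedly leaves exactly one copy of the path in sys.path, so repeated renders do not keep growing the import path.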
http://www.luxrender.net/wiki/index.php?title=LuxBlend25_Debugging_with_eclipse&printable=yes
Building an app with components, states, events, and transitions Explanation This sample project shows you how to create an application interface with Flex. To look at the code, right-click on the SWF in the browser and select View Source or download the sample files and follow the instructions to import the Flash Builder FXP. In the main MXML file, you will see two languages used in the code: ActionScript and MXML. ActionScript is an inheritance based object-oriented scripting language based on the ECMAScript standard. MXML is a convenience language; it provides an alternate way to generate ActionScript using a declarative tag-based XML syntax. When you compile an application, the MXML is parsed and converted to ActionScript in memory and then the ActionScript is compiled into bytecode, your SWF, which is rendered by Flash Player. Although you never have to use MXML, it is typically used to define application interfaces (for layouts, the MXML code is usually more succinct and understandable than the corresponding ActionScript would be) and ActionScript is used to write the application logic. The first line of code in the application is an optional XML declaration tag that specifies how the MXML file is encoded. Many editors let you select from a range of file encoding options. UTF-8 encoding ensures maximum platform compatibility, providing a unique number for every character in a file, and it is platform-, program-, and language-independent. <?xml version="1.0" encoding="utf-8"?> After this, you add tags for the various objects you want to add to your application. The MXML must be well-formed: every tag must have an end tag and the value for every tag attribute must be in quotation marks (single or double). Properties can be specified as attributes or as child tags. You usually use child tags when the value is a complex object being defined with MXML. 
<Tag attribute="value"/>

<Tag>
    <attribute>value</attribute>
    <attribute>value2</attribute>
</Tag>

The root tag of a Flex web application is the Application tag. When you compile the application, an instance of the Application ActionScript class (which is part of the Flex framework) is created and its properties are set as defined by the attributes in the tag. For this application, the minWidth and minHeight properties of the application are set to 955 and 600 pixels.

<s:Application minWidth="955" minHeight="600" ...>

The three xmlns attributes in the Application tag specify XML namespaces.

<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" ...>

The first namespace attribute, xmlns:fx, is used to associate the fx prefix with ActionScript top-level language elements (so you can create them with MXML tags, like <fx:Array>) and with compiler tags. Compiler tags do not correspond to ActionScript classes but are used to provide instructions to the compiler. For example, <fx:Script> is used to tell the compiler that the contents inside the tag are going to be ActionScript. You set an xmlns attribute equal to a string called the URI (uniform resource identifier) that identifies a resource containing the information about what to do when you use an XML tag with this prefix. The xmlns:fx attribute is set equal to http://ns.adobe.com/mxml/2009. This does not identify a web location but is a mapping to another XML file that maps tag names to corresponding ActionScript classes (for everything besides compiler tags). The mapping for the URI string is located in the flex-config.xml file. The flex-config.xml file contains all the default values used by the compiler when an application is compiled with Flash Builder. You can find the flex-config.xml file in the /Flash Builder/sdks/4.5.0/frameworks/ folder.
Inside this file, you will find the following definition:

<namespace>
    <uri>http://ns.adobe.com/mxml/2009</uri>
    <manifest>mxml-2009-manifest.xml</manifest>
</namespace>

The URI string that was used in the xmlns:fx attribute is here mapped to the mxml-2009-manifest.xml file. If you open this second XML file, located in the same directory as the flex-config.xml file, you will see a list of components mapping the names to use in XML tags to the corresponding ActionScript classes:

<componentPackage>
    <component id="Array" class="Array" lookupOnly="true"/>
    <component id="Boolean" class="Boolean" lookupOnly="true"/>
    ...

The flex-config.xml file also includes mappings for the Spark (Flex 4 and later) and MX (Flex 3 and earlier) components in the Flex framework.

<namespace>
    <uri>library://ns.adobe.com/flex/spark</uri>
    <manifest>spark-manifest.xml</manifest>
</namespace>
<namespace>
    <uri>library://ns.adobe.com/flex/mx</uri>
    <manifest>mx-manifest.xml</manifest>
</namespace>

These manifests define the tags that can be used to make instances of the Spark and MX components. For example, you use the Panel tag to create an instance of the ActionScript Panel class located in the spark.components package.

<component id="Panel" class="spark.components.Panel"/>

The Spark and MX manifests are associated with the s and mx namespaces in the application.

<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" ...>

The Flex framework provides classes for over 100 extensible components (MX and Spark), including UI controls (like the Button, List, HSlider, NumericStepper, DataGrid, and PieChart) and containers (like the VGroup, HGroup, Panel, and Form). The MX component set was included in the Flex 3 and earlier releases and is defined in the mx.* packages. The Spark component set was new for Flex 4 and is defined in the spark.* packages. The Spark components use a new architecture for skinning and have other advantages over the MX components.
A lot of the components are included in both the MX and Spark sets, like the Button, TextInput, and List controls. If both exist, you should use the new Spark version of the component. Not all of the components have been rewritten for Spark yet, though, so you might end up with both Spark and MX components (like the DateChooser and the Tree) in an application. To get familiar with the Spark and MX components, use the Tour de Flex application, browse the spark.components and mx.components packages in the ActionScript 3 Reference for the Adobe Flash Platform (commonly referred to as ASDocs), or browse the source code in the /Flash Builder/sdks/4.5.0/frameworks/projects/ folder.

In the application, you'll see a tag for each control to be added to the interface. Each tag has attributes defined to set values for properties and styles for that instance of that class. You can tell whether an attribute is a property or a style by looking at the symbol in front of it in the Flash Builder Code Assist popup or by locating it in the Properties or Styles section for the class in the ActionScript 3 Reference for the Adobe Flash Platform. In the application, you'll see instances of the Spark Label, Button, TextInput, and DataGrid controls and the MX LinkButton and Spacer controls.

<s:Label x="20" fontSize="20" .../>

This code creates an instance of the Label class and sets its x property to 20 pixels and its fontSize style to 20 pixels. When set inline in a tag like this, properties and styles are handled the same as attributes, but they are different entities and must be handled differently in code to make runtime changes. Properties pertain only to this instance of this class, whereas styles can be inherited from a parent object or assigned in stylesheets and are managed by a StyleManager class. You use the id property to assign a component instance a name. You only need to assign instance names to components that are going to be manipulated with ActionScript.
<s:Button id="loginBtn" label="Login" .../>

Where the controls appear in the interface depends upon what container they are placed in and the type of layout that container is using. A container defines a rectangular region of the Flash Player drawing surface within which you define the components (the children) that you want to appear. Each container uses a set of rules to control the layout of its children, including sizing and positioning, so you don't have to spend time defining it. In this way, Flex helps you more easily build adaptive application interfaces. Just as with controls, there are both MX and Spark containers, and when possible, use the new Spark versions. The main difference between MX and Spark containers is what you can add to them. You can only add Flex components to MX containers. You can add components and new graphics elements (like Rect and Ellipse from the spark.primitives package) to Spark containers. Another difference is that the layout algorithm for MX containers is fixed (so there is a different container for each type of layout), whereas for Spark containers it is selectable (so there is a smaller set of containers and you can switch the layout algorithm for each). Predefined layout classes that can be used with Spark containers include BasicLayout, HorizontalLayout, VerticalLayout, and TileLayout. You can also define your own custom layouts. The root of the application is a single container, the Application container, that represents the entire Flash Player drawing surface. By default, it uses a BasicLayout, which uses absolute positioning in which you explicitly position all container children or use constraints (which specify how far the component should be from the top, bottom, left, right, horizontal center, or vertical center of the container) to position them. The Application contains three children: a Label, a Panel, and a Group.
The Label and Group controls are positioned within the Application container by setting x and y properties.

<s:Label x="20" y="40" .../>

The Panel control is positioned within the Application container using constraints; it is positioned in the center of the application by setting its verticalCenter and horizontalCenter properties to 0.

<s:Panel id="loginPanel" verticalCenter="0" horizontalCenter="0" width="300" ...>

The Panel container is typically used to wrap self-contained application modules and includes a title bar, a border, a content area for its children, and an optional footer (called the control bar). You set the layout algorithm for the Panel by setting its layout property to an instance of a Layout class. In this sample, the VerticalLayout class is used, so the children inside of it (the Labels and the VGroup) will be laid out vertically. If the children had any x, y, or constraint properties set, they would be ignored. The layout algorithm is customized by setting properties for that layout instance. The paddingTop of 30 specifies that there will be a 30-pixel space between the first child (the Label) and the top of the container (the Panel). A paddingLeft of 30 sets a 30-pixel space between all children and the left edge of the container, and a gap of 20 sets a 20-pixel space between children.

<s:Panel ...>
    <s:VGroup ...>
        <s:HGroup ...>
            <s:Label .../>
            <s:Label .../>
            <s:TextInput .../>
        </s:HGroup>
        <s:HGroup ...>
            <s:Label .../>
            <s:Label .../>
            <s:TextInput .../>
        </s:HGroup>
        <s:HGroup ...>
            <s:Label .../>
            <s:Label .../>
            <s:TextInput .../>
        </s:HGroup>
    </s:VGroup>
    <s:controlBarContent>
        <mx:LinkButton .../>
        <s:Button id="loginButton" label="Login" .../>
        <s:Button id="registerButton" label="Register" .../>
    </s:controlBarContent>
</s:Panel>

The form inside the Panel is created using a VGroup and HGroups. Children inside a VGroup are positioned vertically and those in an HGroup, horizontally. Each HGroup contains a "form element" consisting of a Label, a Label with an asterisk, and a TextInput control. Various styles and properties are set to make everything line up nicely.
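The attribute values have been stripped from the panel excerpt in this copy of the article; a single form row inside the VGroup might look something like the following sketch. The ids, label text, and style values here are illustrative assumptions, not the sample project's actual code:

```mxml
<s:Panel title="Login" verticalCenter="0" horizontalCenter="0" width="300">
    <s:layout>
        <!-- children stack vertically, 30 px from the top/left edges, 20 px apart -->
        <s:VerticalLayout paddingTop="30" paddingLeft="30" gap="20"/>
    </s:layout>
    <s:VGroup>
        <s:HGroup verticalAlign="middle">
            <s:Label text="Username:"/>
            <s:Label text="*" color="0xFF0000"/> <!-- required-field marker -->
            <s:TextInput id="usernameInput"/>
        </s:HGroup>
        <!-- ...two more rows like the one above... -->
    </s:VGroup>
</s:Panel>
```

Note how the layout object, not the container, carries the padding and gap settings: switching the VerticalLayout for a HorizontalLayout would reflow the same children without touching them.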
There actually is a new Spark Form container to help you more easily lay out forms, but it is not used in this application so that the more basic layout containers can be shown and explained. To include children in the Panel footer, you use the controlBarContent property. By default, the controls in the control bar area use HorizontalLayout. (You can use the Panel's controlBarLayout property to specify a different layout.) In the control bar in the application, a Spacer component (which is invisible) with a width of 100% is used between the buttons to make one button appear all the way to the left and one all the way to the right. It looks like there are four buttons, but only two will be displayed at a time. This is covered in the view states section later. If there were no spacer, the buttons would be laid out right next to each other against the left edge of the Panel. Component sizes are discussed more in the next section. The last child in the Application tag is a Group container. The default layout for a Group is BasicLayout, so in this case the definition of the layout property is redundant but has been included for clarity. The Label has no x, y, or constraint properties set, so it will appear in the upper-left corner of its parent, the Group component, at 0,0. The DataGrid will be displayed 60 pixels from the top of the Group, which in turn is 70 pixels from the top of the main Application container.

<s:Group ...>
    <s:layout>
        <s:BasicLayout/>
    </s:layout>
    <s:Label .../>
    <s:DataGrid top="60" .../>
</s:Group>

Up to now, the sizes of the components we've looked at were set in one of two ways. Either the size was set explicitly in pixels, as for the Panel:

<s:Panel id="loginPanel" width="300" ...>

... or no height or width properties were set at all and a default size for the component was used, as was the case for the Label:

<s:Label x="20" y="40" text="XYZ Corporation Directory" .../>

For most components, the default size is "as big as it needs to be to hold its contents".
For example, the Label is as big as it needs to be to display the text specified in its text property. A third way to set size is to use percentages. After space is allocated for all the components whose sizes are set using default or explicit sizing, the remaining space is divided up amongst the components asking for a percentage of it. In the control bar, the buttons used default sizing, so they were as big as they needed to be to display their labels, and the spacer took 100% of the remaining space. The result is that one button appears on the left edge of the control bar and the other on the right edge.

<mx:Spacer width="100%"/>

If the percentages asked for by the components in a container add up to over 100%, relative percentages are used. For example, if a container has three controls that all have their width set to 100%, each control will be allocated one third (100/300) of the width of the container. To provide more than one application "page" or view in your interface, you use view states, which provide a way to change the user interface dynamically at runtime, for instance to add, remove, move, or modify components. For every Flex view or component, you can define multiple states, and then for every object in that view, you can define what state(s) it should be included in and what it should look like and how it should behave in that state. To create view states, you set the states property equal to an array of State instances, assigning the name property of each. Four states have been defined for this application interface.

<s:states>
    <s:State name="login"/>
    <s:State name="loginerror"/>
    <s:State name="register"/>
    <s:State name="main"/>
</s:states>

You use the includeIn or excludeFrom properties of a component to specify which states it should be included in or excluded from. If neither property is set, the component is included in all states. The XYZ Label appears in all states; the Panel is included in every state but main; the first LinkButton is included only in the loginerror and login states.
<s:Label x="20" y="40" text="XYZ Corporation Directory" .../>
<s:Panel id="loginPanel" excludeFrom="main" ...>
<mx:LinkButton id="registerLink" includeIn="loginerror,login" .../>

You can specify different properties, styles, and event handlers for each state by appending each with the name of the state to which it should apply. The Panel has a different title in the three states it appears in.

<s:Panel id="loginPanel" excludeFrom="main" title.login="Login" title.register="Register" title.loginerror="Login" ...>

You switch between states by setting the component's currentState property to the name of one of the defined states. When you click the register button, the application switches to the register state.

<mx:LinkButton id="registerLink" label="Need to Register?" click="currentState='register'" .../>

A typical Flex application consists of MXML code to define the user interface and ActionScript code for the logic. Just as for JavaScript and the browser DOM objects, the two are wired together using events and event handlers. You register to listen for some component event to occur and specify the ActionScript code to execute when it does. You can find all the events for a component listed in its API (application programming interface) in the ActionScript 3 Reference for the Adobe Flash Platform. One way to make something happen, as you just saw, is to simply specify both the event you want to listen for and the code to execute when it occurs right inline in an MXML tag. When the registerLink LinkButton is clicked, the ActionScript line of code switching to the register state is executed.

<mx:LinkButton id="registerLink" label="Need to Register?" click="currentState='register'" .../>

Be aware that ActionScript is case-sensitive, so the case used for the currentState property and the register state name must match that in their definitions. You could specify additional lines of code to execute by separating them with semicolons, but this gets unreadable very quickly (and the code is not reusable).
Instead, you specify a function to call when that event occurs. This is what you see for the loginBtn and registerBtn buttons.

<s:Button id="loginBtn" label="Login" click="loginBtn_clickHandler(event)" .../>
<s:Button id="registerBtn" label="Register" click="registerBtn_clickHandler(event)" .../>

The event handler, the function to invoke when this event occurs, is defined inside a Script tag.

<fx:Script>
    <![CDATA[
    ]]>
</fx:Script>

Remember that when you compile an application, the MXML is parsed and converted to ActionScript in memory and then the ActionScript is converted into a SWF that can be rendered by Flash Player. The CDATA tag says "don't parse what's inside me"; its contents do not need to be parsed and converted to ActionScript. The <fx:Script> tag can go anywhere inside the MXML file. Order does not matter, but a common convention is to place it first inside the root tag of the MXML file. Inside the Script block, you can put property (variable) declarations and method (function) definitions. This application, this MXML Application tag and its children, is an instance of the Application ActionScript class, and for a class, you can define properties and methods. When objects were added with MXML, like the registerBtn Button, a property of the class was being defined (called registerBtn, of type Button) and the object was added to the container so it appears on the Flash Player drawing surface. To define a property (variable) in ActionScript, you use the var keyword, and to define a method (function), you use the function keyword.

[Bindable]
protected var user:User=new User();

protected function loginBtn_clickHandler(event:MouseEvent):void{
    //code to execute
}

For each class member, you specify an access modifier (specifying what code can access it) and a data type (specifying what type of object it is or what type it returns).
The four values for the access modifier are private (only code in this class can access the member), public (code in any class can access the member), protected (only code in this class or subclasses can access the member), and internal (only code in classes in the same package/folder can access the member). You specify data types using post-colon syntax. Strong data typing is actually optional, but its use is strongly encouraged, as it provides compile-time code hinting and errors as well as faster runtime performance. The user variable is of type User (whose class definition we'll look at shortly), and the two functions loginBtn_clickHandler() and registerBtn_clickHandler() return nothing, so their return data types are set to void. Required and optional function arguments are placed inside the parentheses () and the function body is placed inside the curly braces {}. Both functions in the code have one required argument of type MouseEvent. Every time an event is broadcast, it passes to the event handler an event object that has information about the type of event that occurred. For example, when you click a button, the event object will contain information about what button was clicked and where it was clicked (the x and y positions). When you change the selection in a drop-down list, the event object will have information about which list was changed and will also have references to what item is currently selected. Because the type of information that is relevant for an event depends on what the event is, there are many different types of event classes defined. For example, when the click event of a Button occurs, an instance of the flash.events.MouseEvent class is broadcast. When the change event of a List control occurs, an instance of the spark.events.IndexChangeEvent is broadcast. All of the event classes extend the base event class, flash.events.Event.
You can see what type of event object is broadcast with each event in a component's API in the ActionScript 3 Reference for the Adobe Flash Platform. Look at the loginBtn_clickHandler() and registerBtn_clickHandler() functions and see that they both receive an event argument of type MouseEvent. The event object is not actually referenced or used inside these particular functions, but it could be.

protected function loginBtn_clickHandler(event:MouseEvent):void{...}
protected function registerBtn_clickHandler(event:MouseEvent):void{...}

To register to listen for an event of a component with ActionScript, you use the component's addEventListener() function. For example, in some function you would have the following code:

loginBtn.addEventListener("click", loginBtn_clickHandler);

... where the second argument is a reference to the event handler to invoke when the click event occurs. Notice that the function is not being called (there are no parentheses after it); it is just being specified as the function to invoke when the event does occur. You have no control over what gets passed to the function; it automatically gets passed an event object of whatever type was broadcast. With that in mind, let's go back and take a look at the inline MXML code for registering to listen for the button click events.

<s:Button id="loginBtn" label="Login" click="loginBtn_clickHandler(event)" .../>

In this declarative MXML shortcut way of listening for an event, you are actually writing the ActionScript code to be executed inside an automatically generated listener function for the click event. If you want the event object that is passed to the automatically generated listener function passed to your function as well, you can pass an object called event, which is the name assigned to the argument of the automatically generated listener function. This is what you see in the code above. In this case, because the event object is not actually used inside the function, the code could also have been written as:

<s:Button id="loginBtn" label="Login" click="loginBtn_clickHandler()" .../>
and:

protected function loginBtn_clickHandler():void{...}

The code generation features in Flash Builder always pass the event object, though, because it is a best practice to always write your event handlers with the required event object argument. Next, let's look in more detail at the user variable, which is being manipulated inside the event handler functions. The user variable is defined as type User.

[Bindable]
protected var user:User=new User();

Typically, you define one class per ActionScript file. The name of the class must match the name of the file, and the location of the file must match the location specified for the class package. In this project, the User class in the valueObjects package is defined in User.as in the valueObjects folder.

package valueObjects {
    [Bindable]
    public class User{
        public var name:String;
        public var username:String;
        public var password:String;
        public var email:String;
        public var deptid:uint;
        public function User(){
        }
    }
}

ActionScript 3 is an inheritance-based object-oriented scripting language very similar to Java but with some different syntax. In ActionScript 3, the class definition is included inside the curly braces of the package definition, and the class members are defined inside the curly braces of the class definition. The User class has five public properties defined, of types String and uint. (You can also create implicit getters and setters using the get and set keywords.) You can find the main data types listed as the classes of the Top Level package in the ActionScript 3 Reference for the Adobe Flash Platform. The constructor, the function that is automatically called when an instance of the class is created, is defined as a function with the same name (and case) as the class and no return type. The User constructor here is empty and does nothing. (You can also leave it out of the code entirely and one will be automatically created.)
You can specify required and/or optional arguments for the constructor function, but you cannot overload functions in ActionScript 3. If you return to the main MXML file and look at the Script block again, you will see the user variable declared as type User and set equal to a new instance of the User class.

import valueObjects.User;

[Bindable]
protected var user:User=new User();

The import statement is required so that the compiler can locate the definition for the User class. Import statements are written for you automatically in Flash Builder when you select a class from code hinting. The code inside the loginBtn_clickHandler() and registerBtn_clickHandler() functions should now make sense. Inside loginBtn_clickHandler(), conditional logic is used to check whether a valid username and password combination was entered and, if so, to populate the user object's name property and switch to the main application state. In registerBtn_clickHandler(), the user object's properties are populated with the values entered in the form. In a complete application, the code inside these functions would make calls to the server to either perform the user authentication check or add a new user. Calls to the server are illustrated and discussed in the next sample project. Next, let's talk about that [Bindable] tag in front of the user variable and inside the User class definition. Data binding is a powerful part of the Flex framework that lets you update the user interface when data changes without your having to explicitly register and write the event listeners to do this. The [Bindable] tag is a compiler directive; when the application is compiled, ActionScript code is automatically generated so that an event is broadcast whenever the variable whose definition it prefaces changes. For the user variable, an event is broadcast whenever it gets assigned a new value, for example when it goes from null to having a value or when it gets set to another instance of the User class.
[Bindable]
private var user:User=new User();

The user variable is, however, a complex object, and if you want an event to be broadcast when one of its properties changes (not just when the variable in its entirety changes), which you usually do, you need to specify this in the class definition. This is the purpose of the [Bindable] tag in front of the class definition in User.as.

[Bindable]
public class User{

Now, whenever the value of one of the publicly accessible properties of an instance of the User class changes, an event is broadcast. You use curly braces around a value of an MXML attribute to register to listen for these changes and have the component display updated when that change occurs. If you look at the Label in the Group near the end of the code, you'll see curly braces as part of the expression to set the Label's text property.

<s:Label text="{user.name}" .../>

When the application is compiled, code is generated to listen for changes to user.name, and when its value changes, the Label display is updated accordingly. Finally, let's finish up by looking at the transition code. By default, when a component changes state, all the changes (objects moving, appearing, disappearing, and resizing) happen at about the same time, which can be an abrupt experience for the user. To make the transition smoother, you can animate it by changing properties in a gradual manner and/or by specifying the order in which the changes should occur. The Flex framework includes a number of predefined effect classes (located in the spark.effects package), including Fade, Rotate, Resize, and others. You can apply an animation to an object or to a view state change. When an animation is applied to a state change, it is called a transition. To define transitions, you set the transitions property of a component equal to an array of Transition instances. For each Transition instance, you can set fromState and toState properties. The default value for these is *, the wildcard, which means "for any state".
Inside each Transition tag, you specify the animation to occur for that state transition. If you want more than one effect to occur, you can nest them inside Parallel or Sequence tags. In the application, two transitions have been defined. The first will take place when changing from any state to the loginerror state; the second will occur for all other state changes. For the first transition, a sequence of actions occurs: the Panel gradually changes size and then the errorLbl Label is wiped in to the right. For the second transition, several actions happen in parallel over 1 second (1000 milliseconds): the Panel changes size and the confirmFormItem group fades in or out.

<s:transitions>
    <s:Transition toState="loginerror">
        <s:Sequence>
            <s:Resize .../>
            <s:AddAction .../>
            <s:Wipe .../>
        </s:Sequence>
    </s:Transition>
    <s:Transition>
        <s:Parallel ...>
            <s:Resize .../>
            <s:Fade .../>
        </s:Parallel>
    </s:Transition>
</s:transitions>

For effects that change the value of a property over time, as Fade changes alpha and Resize changes height and width, the start and end values for the effect correspond to the start and end values for the component in the state it is moving from and to. The last thing to notice in the code is the use of {} when specifying the values of the target properties. The target property is typed as Object, and you set it equal to the visual object you want to animate or manipulate, not a string equal to the object's name. You need the {} to assign it a reference to the object instead of a literal string.
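The attribute values were stripped from the transitions excerpt in this copy of the article. Assuming the targets are the loginPanel and errorLbl objects named in the prose, the first transition might be sketched like this; the duration and wipe direction are illustrative guesses, not the sample's actual values:

```mxml
<s:transitions>
    <!-- runs when entering the loginerror state from any state -->
    <s:Transition toState="loginerror">
        <s:Sequence>
            <s:Resize target="{loginPanel}" duration="500"/> <!-- grow the panel first -->
            <s:AddAction target="{errorLbl}"/>               <!-- then add the error label... -->
            <s:Wipe target="{errorLbl}" direction="right"/>  <!-- ...and wipe it into view -->
        </s:Sequence>
    </s:Transition>
</s:transitions>
```

Because the effects sit inside a Sequence, each one starts only after the previous one finishes, which is exactly the staggered experience the prose describes.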
npm i -g style-so-lit
npm link style-so-lit

<script src='//unpkg.com/style-so-lit@0.0.9/dist/css.min.js'></script>

This CSS is LIT! :fire:

Hey all,

So I'm here to announce something to make your life a LOT easier if you prefer writing in JavaScript or TypeScript and want to construct elements which you can later reuse in other projects. I could also do with your feedback on usability and use cases, so drop me a message, or feel free to clone the repo and work on it yourself.

I'll start by showing you some code (note: this is in TypeScript, but it's pretty much the exact same in the latest ES6/7 versions of JavaScript).

import { css, j2css } from 'style-so-lit';

class MyElement {
    /*
     * create the rest of your element with any lib of your choice
     * and set a global theme, potentially a user-selected one.
     */
    public giveItSomeStyle() {
        const myCss : css = css`.myelement {
            background-color: ^themebg^;
            color: ^themefg^;
        }`;

        let myVars = [
            new j2css('themebg', <string>global.theme.background),
            new j2css('themefg', <string>global.theme.font.color)
        ];

        return (myCss.mount(myVars) !== undefined) ? 0 : -1;
    }
}

I'm sure some of you are familiar with the Google™-backed Polymer Project and its latest developments, lit-html and lit-element. Both of these are amazing libraries allowing you to create your own custom elements and attach them to the DOM. One thing, however, was missing: CSS in the same easy-to-use way that HTML is used in string literals (the ` characters). So, while it took a bit of work (and frustration), I managed to get a working library created for my current project. It's at version 0.0.9 so it's still very much a work in progress, but it's available on npm for use in projects, with no planned breaking changes and many planned enhancements (CSS-JS 2-way binding... whaaaaat!?)
Anyway, hopefully this could be of some use to someone. It's available now on:

- npm - which includes the TypeScript .d.ts, js.map and original .ts files and will make it easier for you to navigate and use code hinting; by which I mean typedefs, enums (to be used later), constructors, etc. - I didn't want to name any specific one such as intellisense™©$$$@microsoft because of trademarking, and it will work no matter what system you use - these can then be packaged up with your project, but I would recommend the next option for production-ready work.
- unpkg - to include in HTML files. You only need to do this once at the top level if you are using multiple custom elements or using a router; include the following file for the smallest library... and when I say small, take a quick look at the minified code (for reference, the build number at the time of writing is 0.0.9, which is available all the time here). I kinda forgot to remove the option of having debugging output on by default, but that will be in my next release ASAP!

Let me know where you want to see it and what you want the next features to be! I hope it helps!
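For anyone curious about the core idea above, swapping `^name^` placeholders in a CSS string for JavaScript values, it can be sketched in a few lines of plain JavaScript. This is just an illustration of the concept, not style-so-lit's actual implementation, and `fillTemplate` is a made-up name:

```javascript
// Replace ^name^ placeholders in a CSS template with values from a map.
function fillTemplate(cssText, vars) {
  return cssText.replace(/\^(\w+)\^/g, (match, name) =>
    // fall back to the raw placeholder if no value was supplied
    name in vars ? vars[name] : match
  );
}

const theme = { themebg: '#222', themefg: '#eee' };
const css = '.myelement { background-color: ^themebg^; color: ^themefg^; }';
console.log(fillTemplate(css, theme));
// → .myelement { background-color: #222; color: #eee; }
```

The real library wraps this idea in the `css` template tag and `j2css` value objects, which is what lets it re-resolve the variables later for things like theme switching.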
glock 0.1: abstraction of time to ease testing

# Glock - a Clock abstraction for gevent #

Most likely your application is in some way working with the concept of time. It can be timeouts; it can be tasks that should be executed at regular intervals. Glock (Gevent cLOCK) tries to encapsulate time-related functionality in a simple Clock class. Also provided is a mock Clock class allowing deterministic testing.

Usage:

from glock.clock import Clock

c = Clock()
c.call_later(1, fn)

API:

The call_later(seconds, fn, *args, **kw) method calls fn seconds from now. A DelayedCall instance is returned that can be used to reschedule the call (using reset) or cancel it (using cancel). The clock also has a sleep method that acts just like gevent.sleep. To read out the current time, use the time method. The MockClock class also has an advance method that is used to advance time in a controlled manner.

- Author: Johan Rydberg
- Package Index Owner: jrydberg
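Glock itself depends on gevent, so as a self-contained illustration of why a mock clock makes testing deterministic, here is a minimal dependency-free sketch in plain Python. The method names (call_later, advance, time) mirror glock's documented API, but the implementation is a guess for illustration, not glock's source:

```python
import heapq


class MockClock:
    """A deterministic clock: time only moves when advance() is called."""

    def __init__(self):
        self._now = 0.0
        self._calls = []   # heap of (due_time, seq, fn, args)
        self._seq = 0      # tie-breaker so heapq never compares functions

    def time(self):
        return self._now

    def call_later(self, seconds, fn, *args):
        heapq.heappush(self._calls, (self._now + seconds, self._seq, fn, args))
        self._seq += 1

    def advance(self, seconds):
        """Move time forward, running any scheduled calls that come due."""
        self._now += seconds
        while self._calls and self._calls[0][0] <= self._now:
            _, _, fn, args = heapq.heappop(self._calls)
            fn(*args)


fired = []
clock = MockClock()
clock.call_later(1, fired.append, 'a')
clock.call_later(3, fired.append, 'b')
clock.advance(2)   # only the 1-second call is due
print(fired)       # → ['a']
clock.advance(2)   # now the 3-second call fires too
print(fired)       # → ['a', 'b']
```

A test against such a clock never sleeps: it advances time explicitly and asserts on the results, which is exactly the workflow glock's MockClock.advance is meant to enable.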
https://pypi.python.org/pypi/glock
Title: County Faces Yet Another Water Shortage

Material Information
Title: County Faces Yet Another Water Shortage
Physical Description: Book
Language: English
Publisher: St. Petersburg Times

Subjects
Spatial Coverage: North America -- United States of America -- Florida

Notes
Abstract: County Faces Yet Another Water Shortage, 1/13/1979
General Note: Box 10, Folder 19 (SF Water Wars - 1975-2000), Item 64
Funding: Digitized by the Legal Technology Institute in the Levin College of Law at the University of Florida.

Record Information
Bibliographic ID: WL00002488
Volume ID: VID00001
Source Institution: Levin College of Law, University of Florida
Holding Location: Levin College of Law, University of Florida
Rights Management: All rights reserved by the source institution and holding location.

Full Text

ST. PETERSBURG TIMES - SATURDAY, JANUARY 13, 1979

"We're not just talking about sprinkling bans. We're talking about the possibility of sewage backing up into our water system because of the differences in pressure." - Commissioner Jeanne Malchon

County faces yet another water shortage

By DAVID M. SNYDER

Residents could face restrictions on lawn sprinkling and other non-essential water use again this year, the county's water director warned Friday.

"Because demands for water have increased dramatically, there's a good chance we may have a shortage" during the normally dry months of April, May and early June, said Pinellas County Water System Director Terry Knepper.

A shortage is not a certainty, however, Knepper added. "If we have a wet spring, we won't have a problem."
The principal cause of a shortage would be the rapidly increasing water demands of western Pasco County residents, who get water pumped from underground in central Pasco into Pinellas, then back to Pasco through lines belonging to the privately owned, profit-making Pasco Water Authority Inc.

A daily average of more than 5-million gallons of water - about 2-million gallons a day more than anticipated - was sold to Pasco residents last year, said Knepper.

Pasco's peak demands, which could be as high as 10-million gallons a day during the dry season, and new water customers in Pinellas could leave the water system with a reserve of only 2-million gallons a day, he said.

However, voluntary restrictions on water sprinkling and other non-essential uses would be invoked long before reserves became that small, he said.

A shortage would affect about 400,000 Pinellas residents who get their water directly or indirectly (through city water systems) from the county.

Knepper said that the city of St. Petersburg, which supplies water to its residents and the cities of Gulfport, Oldsmar and South Pasadena, apparently has sufficient water supplies, as do Dunedin and Belleair, which have their own water systems.

He said, however, that a shortage will be unavoidable in 1980 because of the slow development of new water sources in Pasco County.

Knepper and John T. Allen, the county's special water attorney, told the County Commission Friday that contract negotiations with Pasco County over the use of water from the Cross Bar wellfield in north-central Pasco have reached an impasse.

Unless the impasse is broken in two or three months, the county and the cities served by its water system will eventually have to declare a building moratorium because of a lack of water, Allen told the commission.

Said Commissioner John Chesnut: "They (Pasco leaders) have got the same problems that we've got, so why are they being so damn obstinate?
They're getting themselves in the hole worse than us."

Though the eastern and central portions of Pasco County are water-rich, western Pasco, like all of Pinellas, is water-poor.

Water shortages are almost as common in west Pasco as in Pinellas, where voluntary and mandatory water-use restrictions have been imposed in four of the last five years.

Last year, Pinellas water-system customers were told to cut their water use between 4 and 7 p.m. to avoid loss of water pressure in certain neighborhoods.

Future shortages could be far worse.

"We're not just talking about sprinkling bans," said Commissioner Jeanne Malchon. "We're talking about the possibility of sewage backing up into our water system because of the differences in pressure."
https://ufdc.ufl.edu/WL00002488/00001
1. Programming for Artists. ART 315. Dr. J. R. Parker, Art/Digital Media Lab. Lec 02, Fall 2010.

2. Instructor (ME): Dr. Jim Parker. Specialty: digital media, video games, animation. Office: AB606. Lab: AB611.

3. Programming. [W. ???

4. Programming. Albert Einstein. Many programmers believe themselves to be artists, and believe that programming is an art. This seems a stretch.

5. Programming. "I would describe programming as a craft, which is a kind of art, but not a fine art. Craft means making useful objects with perhaps decorative touches. Fine art means making things purely for their beauty." - Richard Stallman (founder of the GNU Project and the Free Software Foundation)

6. Assignment: read "Computer Programming as an Art", Don Knuth, 1974 (link on web page). So: is computer programming an art?

7. Who Cares? A computer is a tool. We can make cool things with good tools, and a properly configured computer is a good tool. To make a computer do your bidding, you need to know how to program it.

8. Programmers Make Mediocre Tools. First, I am a programmer; I am willing to accept the criticism here. Tools for artists include paint, Photoshop, Illustrator and Maya. Are these good tools (or simply what there is)?

9. Mediocre Tools... Artists used to make their own tools, their own paints, mix their own colours... because the paints are our tools, and we know about them. Photoshop is built by programmers and uses their paradigms for manipulating images. They build OK tools for using computers, not for drawing or for composing music.

10. Tools. So: we (some of us) need to make better tools for the rest. Next: we can add or create functionality on our computer that is unique to our work, and that requires us to make the computer do new things.
11. Generative Art: art that has been generated, composed, or constructed in an algorithmic manner through the use of systems defined by computer software algorithms, or similar mathematical or mechanical or randomised autonomous processes. (Wikipedia)

12. Casey Reas: Process

13. Marius Watz: System_C

14. Lia: re-move.org

15. Ben Fry: Anemone

16. Golan Levin: AVES

17. Martin Wattenberg: Shape of Song

18. Methods. We'll look at more artists and images later. Meanwhile: how'd they make these? What is generative art, and how does it differ from what I/we do now?

19. Painting: can be rendering a real object.

20. Painting: can be creation of a more pure emotion.

21. Key Question. The important issue for this class is: how does one do it? For many artists, a lot of time is spent on doing, some on planning, not much on thinking how. In generative art, how is critical.

22. Algorithm. An interesting word, sometimes scary for those who know it (because of how you were taught). Named after al-Khwarizmi (the [man] of Khwarizm), a nickname of the 9th-century Persian astronomer and mathematician Abu Jafar Muhammad ibn Musa, who authored many texts on arithmetic and algebra. He worked in Baghdad, and his nickname alludes to his place of origin in present-day Uzbekistan and Turkmenistan.

23. Algorithm. Born 780, died about 850. The treatise Hisab al-jabr w'al-muqabala is the first book to be written on algebra (and gives us the word).

24. Algorithm. "A step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps." ... poor. "An 'algorithm' is an effective method for solving a problem expressed as a finite sequence of instructions.
" ... better. "An algorithm is a specific set of instructions for carrying out a procedure or solving a problem, usually with the requirement that the procedure terminate at some point." ... best so far.

25. Algorithm. So when I asked how, I was requesting an algorithm. Generative art, and much other computer-mediated artwork, requires a deal of prior thought on how the work is to be created. One wants an algorithm.

26. Algorithms. The connotation of algorithm is mathematical, but it need not be. It does require precision, mostly - at least in the statement of the algorithm. A set of instructions is to be followed, and so should be written so that they can be implemented.

27. Algorithm.

    function divide(x, y)
    Input: two n-bit integers x and y, where y >= 1
    Output: the quotient and remainder of x divided by y
    if x = 0: return (q, r) = (0, 0)
    (q, r) = divide(floor(x/2), y)
    q = 2*q; r = 2*r
    if x is odd: r = r + 1
    if r >= y: r = r - y; q = q + 1
    return (q, r)

This is not how we will do things.

28. Algorithm. Sadly, some formal way of specifying the algorithm is needed. The machine doing the work can't read your mind.

    window (0,100)-(0,100)
    for n = 1 to 4
        p(n) = rnd * 100
    next n
    line (p1,p2)-(p3,p4)

(1) Identify two random points on a 100-unit square plane. (2) Draw a line connecting these two points. More on this later.

29. Algorithm. The algorithm is a basic specification, in a human language. To make it do something, the algorithm must be expressed in such a way that a machine can actually execute/implement/carry out each step. The most common way to do this is as a special 'language', a collection of symbols, in which each symbol is unambiguous, and in which it is possible to express intentions clearly and unambiguously. English is bad for this.

30. Precision/Fidelity. In art, precision is not of great importance. High fidelity vs. low fidelity.

31. What is important is that the work that results conveys the artist's intention.
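The two-step recipe on slide 28 (pick two random points on a 100-unit plane, draw the line between them) can be expressed in an ordinary programming language. The following Python version is an illustrative sketch, not code from the course; it only computes the endpoints, leaving the actual drawing to whatever graphics system you use:

```python
import random

def random_line(size=100):
    """Pick two random points on a size x size plane; return the segment."""
    p1 = (random.uniform(0, size), random.uniform(0, size))
    p2 = (random.uniform(0, size), random.uniform(0, size))
    return p1, p2

random.seed(42)          # fixed seed so the result is reproducible
p1, p2 = random_line()
print("line from", p1, "to", p2)
```

Even this trivial example shows the precision an algorithm demands: the size of the plane, the number of points, and the source of randomness all have to be stated before a machine can carry the recipe out.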
In technical computer work, the result must accurately represent the algorithm, which in turn must represent a process to be studied or emulated.

32. The precision needed for an algorithm to be specified, and to be successfully translated into an implementable description, can interfere with the artist's intention. Does an artist ever want to draw a line between (10,10) and (90,90)? ... and what does this mean, anyhow?

33. History. Look at: ...ory-of-generative-and-computer-art. There is really no definitive history, and what I show here is the briefest of summaries.

34. History. The use of algorithms dates from prehistoric times. Study of the stone circles at Stonehenge (c BC) reveals an algorithmic arrangement based on phases of the moon and the annual movement of the sun.

35. History. Architectural plans, musical scores and dance notations bear one feature in common - they are all recipes for carrying out a task. From this perspective, a broad range of notational systems can be viewed and studied as algorithmic procedure.

36. George Brecht, 1961. GB published a set of 50 cards to be given to each participant. Each card held an instruction to be performed with a vehicle. Vehicles with drivers were instructed to assemble at sundown in a parking lot and randomly park their vehicles. Then each driver, with a shuffled deck of instructions, performed 50 events such as "turn on lights", "start engine", "stop engine", "open window". This work was performed at St Vincent College under the direction of Stephen Joy in 1963.

37. Ben Laposky. First (?) graphic images generated by an electronic machine.
Laposky (a mathematician and artist from Cherokee, Iowa) manipulated electronic beams across the face of an oscilloscope and then recorded the patterns using high-speed film, color filters and special camera lenses. He called his oscillographic artworks "oscillons" and "electronic abstractions".

38. Manfred Mohr. "The fundamental view that machines should not be considered as a challenge to humanity but, like McLuhan predicted, as an extension of ourselves is the basic philosophy when becoming involved with technology."

39. Frieder Nake. 'Hommage à Paul Klee 13/9/65 Nr.2', 1965.

40. Georg Nees. Georg Nees was arguably the first worldwide to show his digital art. Studied mathematics, physics and philosophy in Erlangen and Stuttgart (D). He has been producing computer graphics, sculptures and films since 1964.

41. A. Michael Noll. A. Michael Noll is one of the earliest pioneers to use a digital computer to create patterns and animations solely for their artistic and aesthetic value. His first computer art was created at Bell Labs in Murray Hill, New Jersey.

42. Manfred R. Schroeder. "My interest in computer graphics was awakened by the late Leon Harmon and Ken Knowlton. Our aim then (in the early 1960s) was to use computers for creating images that could not otherwise be drawn or painted. More specifically, we wanted to generate pictures that would be perceived as totally different depending on the viewing distance. Thus my prize-winning One Picture is Worth a Thousand Words would just look like printed letters and English text from nearby. But at intermediate viewing distances One Picture appeared to be a weaving pattern and finally, from afar, it would look like a human eye looking at you."
43. Harold Cohen. The Robot as an Artist. "Aaron began its existence some time in the mid-seventies, in my attempt to answer what seemed then, but turned out not to be, a simple question: what are the minimum conditions under which a set of marks functions as an image?"

44. David Em. JPL artist-in-residence, leading to the first ever artist's monograph published on digital art (The Art of David Em, published by Harry N. Abrams).

45. Larry Cuba is widely recognized as a pioneer in the use of computers in animation art, and was one of the "hybrid" artist/technologists. Producing his first computer animation in 1974, Cuba was at the forefront of the computer-animation artists.

46. The first algorithmic brush strokes executed with an oriental brush mounted on a pen plotter were achieved in 1987 with an HI DMP52 pen plotter. This is one of a series attempting to achieve spontaneity and expressive energy as found in Chinese shufa.

47. Charles Csuri. Lines in Space, 1966.

48. Yoichiro Kawaguchi was born on Tanegashima Island. He received his Master of Fine Arts from Tokyo University of Education. Currently he is Associate Professor of Computer Graphics Art at the Art & Science Lab, Department of Art, Nippon Electronics College, Tokyo.

49. Kenneth Knowlton. A pentomino consists of five squares joined along edges; there are 12. This picture contains 27 of each kind. Since Golomb introduced them in 1953, he has been "committed to their care and feeding." They have led to thousands of puzzles and problems.

50. A turning point. When computer power becomes easily available to all, including artists and musicians, then the convergence of informatics and the arts really begins. There is a democratization that occurs at about 1984.
http://slideplayer.com/slide/4318351/
Today, we're going to talk about Primes. If you play Horizon: Zero Dawn and think we're talking about the Alpha Prime, Elisabet Sobeck, we're not. Nor are we referring to any leader, past or present, of the Autobots. Instead, we're talking about prime numbers. More specifically, we're going to talk about the why behind an exercise that many of us did in CS 101 or in whiteboard interviews or, at the very least, have had to explain on occasion to an interviewer: prime number generators.

Now, a prime number generator is known mostly for its use in cryptography systems, mathematical applications, and even data science. But, for most of us, it's just an exercise that we do when covering the number-theory part of university CS degrees. So why did I decide to talk about them? Well, it's because, if you follow my content, you know that I like to dig into certain topics and to understand the why, not just the what. And, despite my best efforts, prime numbers have come up in my programming more often than I would like to admit.

Plus, as engineers, developers and coders, it is our job to understand as much as we can about the machines we work with every day, and our duty to try to understand, as best we can, what our colleagues do every day. I, personally, believe that even little tidbits like the topic of today's blog post will help us to be better developers, even if we never write them in production ourselves. I also believe that it will help us to empathize with team members, other teams, or even friends on teams at other companies, which will make the world of software engineering better for everyone!

So, getting into it, today we're just going to cover a small, yet meaningful, tidbit on how we write efficient prime generators. I'll be using Python, because I love it and it's my go-to language, but you can follow along in whatever language you like. As we know, a prime number is a number that is only divisible by itself and 1.
A function that generates all prime numbers between 1 and 1000 would, most often, look like this:

```python
def prime_gen():
    for num in range(1, 1000):
        if num > 1:
            for i in range(2, num):
                if (num % i) == 0:
                    break
            else:
                yield(num)
```

On my MacBook Pro, the run time to print all elements in this generator is 0.564s. But what if I told you we could decrease that bigtime with only a few keystrokes, and not change the structure of the generator at all? How? Well, you see this block of code here?

```python
for i in range(2, num):
    if (num % i) == 0:
        break
```

This block tests whether any of the numbers between 2 (we skip 1, since it divides everything and tells us nothing) and the number we are testing for prime-ness (num) are factors of num. If we find one, then num is not a prime number (because primes are only divisible by themselves and 1), and we can stop testing. But this block actually tests far more numbers than it needs to, slowing the process down. Why is that? To put it plainly, if you haven't found a factor of a number (aka, a number that divides evenly into it) by the time you hit that number's square root, you ain't gonna find one at all. This is because factors come in pairs that multiply together to give the original number, and in each pair one factor is at most the square root while the other is at least the square root. In turn, this means that if we're testing a number and haven't found any factors up to its square root, we won't find any above its square root either, and we can safely conclude that the number is prime!

Now, armed with that knowledge, we can explain with a straight face why we only test for a prime's factors up to the square root of the number in question.
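The pairing argument can be made concrete with a quick sketch (not from the original post) that lists the factor pairs of 36; note that in every pair the first member is at most 6, the square root of 36:

```python
n = 36

# Collect each divisor d up to sqrt(n) together with its partner n // d.
pairs = [(d, n // d) for d in range(1, n + 1) if n % d == 0 and d * d <= n]

print(pairs)
# [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6)]
```

Every divisor of 36 appears somewhere in these pairs, which is exactly why scanning only up to the square root is enough to rule out all factors.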
It also means that we can change the inner loop of our generator to the following:

```python
for i in range(2, int(sqrt(num)) + 1):
    if (num % i) == 0:
        break
else:
    yield(num)
```

As a side note, in Python the sqrt() function is part of the math module, so be sure to import it before trying to use it. Also be careful with the upper bound: range() excludes its stop value, so int(sqrt(num)) + 1 makes sure the square root itself gets tested, while rounding the bound up any further (say, with ceil(sqrt(num) + 1)) would cause 2 to be tested against itself and wrongly dropped from the output. Your completed generator, with the new code, should look like this:

```python
from math import sqrt

def prime_gen():
    for num in range(2, 1000):
        for i in range(2, int(sqrt(num)) + 1):
            if (num % i) == 0:
                break
        else:
            yield(num)
```

And, on the same equipment as our first test, the run time to print the elements of our revised generator is 0.023s. That is only 4% of the run time of our first generator!

So, I hope this has helped some of those out there get a better grip on a CS principle that most of us (myself included) had an "I kinda get it..." opinion of at first. As always, please keep coding! No matter what you do in this industry. No matter where you come from, what your social status is, what your parents did, what neighborhood/country/district you're from, or anything else that people try to convince you means you're not cut out for this gig, code on. The world needs those with every possible approach to any problem facing it. Which means it needs the diversity that you can provide. Otherwise, we'll always solve the same problems the same way and get the results we've always gotten!

Happy coding everybody!!!
https://dev.to/kaelscion/i-guess-i-kinda-get-it-the-prime-generator-story-2efk
FormHelper

What is a FormHelper and how to use it is thoroughly explained in a previous section, {% crispy %} tag with forms.

FormHelper with a form attached (Default layout)

Since version 1.2.0, FormHelper can optionally be passed an instance of a form. You would do it this way:

    from crispy_forms.helper import FormHelper

    class ExampleForm(forms.Form):
        def __init__(self, *args, **kwargs):
            super(ExampleForm, self).__init__(*args, **kwargs)
            self.helper = FormHelper(self)

When you do this, crispy-forms builds a default layout using form.fields for you, so you don't have to manually list them all if your form is huge. If you later need to manipulate some bits of a big layout, using dynamic layouts is highly recommended; check Updating layouts on the go. Also, the helper is now able to cross-match the layout with the form instance, and can search by widget type if you are using the dynamic API.

Helper attributes you can set

- template_pack - Allows you to set what template pack you want to use at FormHelper level. This is useful, for example, when a website needs to render differently styled forms for different use cases, like a desktop website vs. a smartphone website.
- template - When set, allows you to render a form/formset using a custom template. The default template is at {{ TEMPLATE_PACK }}/[whole_uni_form.html|whole_uni_formset.html].
- field_template - When set, allows you to render a form/formset using a custom field template. The default template is at {{ TEMPLATE_PACK }}/field.html. Beware that this is only effective when setting a FormHelper.layout.
- form_method = 'POST' - Specifies the form's method attribute. Defaults to 'POST'.
- form_action - Applied to the form's action attribute. You can point it to a URL such as '/whatever/blabla/'. Sometimes you may want to add arguments to the URL; for that you will have to do this in your view:

    from django.core.urlresolvers import reverse
    form.helper.form_action = reverse('url_name', args=[event.id])
    form.helper.form_action = reverse('url_name', kwargs={'book_id': book.id})

- attrs - Added in 1.2.0, a dictionary to set any kind of form attributes.
Underscores in keys are translated into hyphens. This is the recommended way when you need to set several form attributes, in order to keep your helper tidy:

    {'id': 'form-id', 'data_id': '/whatever'}

    <form id="form-id" data-id="/whatever" ...>

- form_id - Specifies the form's DOM id attribute. If no id is provided then no id attribute is created on the form.
- form_class - String containing separated CSS classes to be applied to the form's class attribute. The form will always have the 'uniForm' class by default.
- form_tag = True - Specifies whether <form></form> tags should be rendered when using a Layout. If set to False, it renders the form without the <form></form> tags. Defaults to True.
- disable_csrf = False - Disables the CSRF token; when set, crispy-forms won't use the {% csrf_token %} tag. This is useful when rendering several forms using the {% crispy %} tag with form_tag = False, where the csrf_token would otherwise get rendered several times.
- form_error_title - If you are rendering a form using the {% crispy %} tag and it has non_field_errors to display, they are rendered in a div. You can set the title of the div with this attribute. Example: "Form Errors".
- formset_error_title - If you are rendering a formset using the {% crispy %} tag and it has non_form_errors to display, they are rendered in a div. You can set the title of the div with this attribute. Example: "Formset Errors".
- form_style = 'default' - Helper attribute for the uni_form template pack. Uni-form has two different form styles built in. You can choose which one to use by setting this variable to 'default' or 'inline'.
- form_show_errors = True - Defaults to True. Decides whether or not to render form errors. If set to False, form.errors will not be visible even if they happen. You have to manually render them by customizing your template. This allows you to customize error output.
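The underscore-to-hyphen translation described for attrs can be illustrated with a short, self-contained sketch. The helper below is hypothetical (it is not part of django-crispy-forms); it only demonstrates the key-translation behavior the documentation describes:

```python
def attrs_to_html(attrs):
    """Render a dict of form attributes, turning underscores in the
    keys into hyphens in the HTML attribute names."""
    return " ".join(
        '{}="{}"'.format(key.replace("_", "-"), value)
        for key, value in attrs.items()
    )


print(attrs_to_html({"id": "form-id", "data_id": "/whatever"}))
# id="form-id" data-id="/whatever"
```

This translation exists because keys like data-id are not valid Python identifiers, so the helper dict uses underscores instead and converts them at render time.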
- render_unmentioned_fields = False - By default, django-crispy-forms renders the specified layout strictly, which means it only renders what the layout mentions, unless your form has Meta.fields and Meta.exclude defined, in which case it uses them. If you want to render unmentioned fields (all form fields), for example because you are worried about forgetting to mention them, set this property to True. It defaults to False.
- render_hidden_fields = False - By default, django-crispy-forms renders the specified layout strictly. Sometimes you might be interested in rendering all of the form's hidden fields, whether they are mentioned or not. It defaults to False.
- render_required_fields = False - By default, django-crispy-forms renders the specified layout strictly. Sometimes you might be interested in rendering all of the form's required fields, whether they are mentioned or not. It defaults to False.
- include_media = True - By default, django-crispy-forms renders all form media for you within the form. If you want to render form media yourself, manually, outside the form, set this to False. If you want to globally prevent rendering of form media, override the FormHelper class with this setting modified. It defaults to True.

Bootstrap Helper attributes

There are currently some helper attributes that only have functionality for a specific template pack. This doesn't necessarily mean that they won't be supported for other template packs in the future.

- help_text_inline = False - Sets whether help texts should be rendered inline or block. If set to True, help texts will be rendered with the help-inline class, otherwise with help-block. By default, help text messages are rendered in block mode.
- error_text_inline = True - Sets whether to render error messages inline or block. If set to True, errors will be rendered using the help-inline class, otherwise using help-block. By default, error messages are rendered in inline mode.
- html5_required = False - When set to True, all required field inputs will be rendered with the HTML5 required=required attribute.
- form_show_labels = True - Defaults to True. Decides whether or not to render the form fields' labels.

Bootstrap 3 Helper attributes

All of the previous bootstrap (version 2) attributes are also settable in bootstrap3 template pack FormHelpers. Listed here are the ones that are only available in the bootstrap3 template pack:

- label_class = '' - Defaults to ''. This class will be applied to every label; this is very useful for doing horizontal forms. Set it, for example, like this: label_class = 'col-lg-2'.
- field_class = '' - Defaults to ''. This class will be applied to every div controls wrapping a field. This is useful for doing horizontal forms. Set it, for example, like this: field_class = 'col-lg-8'.

Custom Helper attributes

Maybe you would like FormHelper to do some extra thing that is not currently supported, or maybe you have a very specific use case. The good part is that you can add extra attributes, and crispy-forms will automagically inject them into the template context. Let's see an example to make things clear. We want some forms to have uppercase labels; for that, we would like to set a helper attribute named labels_uppercase to True or False. So we go and set it in our helper:

    helper.labels_uppercase = True

What will happen is that crispy-forms will inject a Django template variable named {{ labels_uppercase }}, with its corresponding value, within its templates, including field.html, which is the template in charge of rendering a field when using crispy-forms. So we can go into that template and customize it. We will need to get familiar with it, but it's quite easy to follow; in the end, it's only a Django template.
When we find where labels get rendered - this chunk of code, to be more precise:

    {% if field.label and not field|is_checkbox and form_show_labels %}
        <label for="{{ field.id_for_label }}" class="control-label {% if field.field.required %}requiredField{% endif %}">
            {{ field.label|safe }}{% if field.field.required %}<span class="asteriskField">*</span>{% endif %}
        </label>
    {% endif %}

The line that we would change would end up like this:

    {% if not labels_uppercase %}{{ field.label|safe }}{% else %}{{ field.label|safe|upper }}{% endif %}{% if field.field.required %}<span class="asteriskField">*</span>{% endif %}

Now we only need to override the field template; for that, you may want to check the section Overriding layout objects templates.

Warning: Be careful - depending on what you aim to do, sometimes using dynamic layouts is a better option; check the section Updating layouts on the go.
http://django-crispy-forms.readthedocs.io/en/latest/form_helper.html
Video Presentation: Manipulating XML With jQuery

NOTE: Right now, the following examples only work in Mozilla-based browsers (meaning, not in IE). I am working on an update that is IE-compatible.

We all know that jQuery is the most awesome Javascript library around. Every day, we are using jQuery to create richer, more dynamic, more effective user experiences on the web and in our AIR applications. Hands down, it's the fastest, most effective way to create Javascript-based user interfaces. But jQuery is also quite good at reading and manipulating XML documents. Since XHTML is really just a subset of XML, it should be no surprise that just about everything we can do with XHTML documents, we can also do with XML documents by way of jQuery.

In the following mini video presentation, I demonstrate how to use jQuery to do all of the following:

- Create XML documents from scratch.
- Add new sub-trees to an XML document object model.
- Perform contextual searches on the XML DOM.
- Access, update, and create XML attributes.
- Trim sub-trees from an XML document object model.
- Even bind and trigger custom events on an XML tree.

In short, the video demonstrates that all of the actions you like to perform on XHTML can also be performed on XML. If you want to play around with this code yourself, here is the HTML page that I demonstrated in the video:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html>
<head>
	<title>Manipulating XML With jQuery</title>
	<script type="text/javascript" src="jquery-1.3.2.js"></script>
	<script type="text/javascript">

		// When the DOM is ready, run the scripts.
		$(function(){

			// Build the base data XML object. We are building the
			// XML collection in this case, but this could have
			// just as easily come from an AJAX call of type XML.
			var jData = $( "<data></data>" );

			// Output the info to the output.
			output( "Data Size: " + jData.size() );

			// ----------------------------------------------- //

			// Append datum nodes to the XML collection.
			jData.append(
				"<datum id='1'>\
					<name>Tricia</name>\
					<hair>Brunette</hair>\
				</datum>\n\
				<datum id='2'>\
					<name>Joanna</name>\
					<hair>Brunette</hair>\
				</datum>\n\
				<datum id='3'>\
					<name>Eva</name>\
					<hair>Blonde</hair>\
				</datum>"
			);

			// Query for all the datum nodes.
			var jDatum = jData.find( "datum" );

			// Output the size of the collection and the
			// HTML (XML) of the data node.
			output(
				("Datum Size: " + jDatum.size()),
				jData.html()
			);

			// ----------------------------------------------- //

			// Find Tricia's node. She is the datum record with
			// the id of 1.
			var jTricia = jDatum.filter( "[ id = 1 ]" );

			// Output Tricia's node.
			output(
				"Tricia's Node",
				("ID: " + jTricia.attr( "id" )),
				jTricia.html()
			);

			// ----------------------------------------------- //

			// Find the blonde girl in the node set.
			var jBlonde = jData.find(
				"datum:has(hair:contains('Blonde'))"
			);

			// Output the blonde node.
			output(
				"Blonde's Node",
				("ID: " + jBlonde.attr( "id" )),
				jBlonde.html()
			);

			// ----------------------------------------------- //

			// Blondes might have more fun, but not in my
			// programming world. Remove the blonde node from
			// its parent data set (the data node).
			jBlonde.remove();

			// Output the data node to see if blondie was
			// removed from the party.
			output(
				"Data (without Blondie):",
				jData.html()
			);

			// ----------------------------------------------- //

			// Let's rate the hottness of the remaining girls.
			// To start with, let's add a default attribute with
			// no value.
			jDatum.attr( "hotness", "unknown" );

			// Output the updated data set.
			output(
				"Data (with hotness attributes):",
				jData.html()
			);

			// ----------------------------------------------- //

			// Tricia is a total babe.
If she were a children's - // cereal, she'd be magically babelicious. As such, - // let's take her existing node reference and update - // the hotness attribute. - jTricia.attr( "hotness", 10 ); - // Output the updated data set. - output( - "Data (with Tricia update):", - jData.html() - ); - // ----------------------------------------------- // - // Even though we are dealing with an XML document - // that is not rendered on the page, we can still - // bind and trigger events on it. - jData.bind( - "hug", - function( objEvent ){ - // Get the target node for the event. - var jTarget = $( objEvent.target ); - // Check to see if the target node of the hug - // event was a datum node (we don't care if - // another node triggered this event). - if (jTarget.is( "datum" )){ - // Output that the given girl node was - // hugged (you sly dog you!). - output( - "Datum Node Hugged:", - jTarget.html() - ); - } - } - ); - // Trigger a hug event on Tricia. - jTricia.trigger( "hug" ); - // ----------------------------------------------- // - // We are done. Let's empty the data set. - jData.empty(); - // At this point, we have completely disconnected the - // Datum collection from the Data container. As such, - // we shouldn't be able to access the parent node of - // any of the datum nodes. To test this, try to grab - // Tricia's parent. - var jParent = jTricia.parent(); - // Output the parent collection size and the size of - // data child collection. - output( - ("Parent Size: " + jParent.size()), - ("Data Child Size: " + jData.children().size()) - ); - }); - // I output my arguments to the output, each on a new line - // followed by an extra line break. - function output(){ - var jOutput = $( "#output" ); - // Loop over arguments and output them. - $.each( - arguments, - function( index, value ){ - // Clean the value. - value = $.trim( value ); - // Remove tabs for demo. - value = value.replace( - new RegExp( "\\t+", "g" ), - " " - ); - // Remove all leading spaces. 
- value = value.replace( - new RegExp( "^\\s+", "gm" ), - "" - ); - // Append to existing content. - jOutput.val( jOutput.val() + value + "\n" ); - } - ); - // Output an additional line break. - jOutput.val( - jOutput.val() + - "\n-------------------------\n\n" - ); - } - </script> - </head> - <body> - <h1> - Manipulating XML With jQuery - </h1> - <form> - <textarea - id="output" - style="width: 100% ; height: 400px ; font-size: 16px ;" - ></textarea> - </form> - </body> - </html> If you still haven't looked into jQuery, do so immediately. If you'd like to get a great primer on the subject, take a look at my presentation: An Intensive Exploration of jQuery. It is one of the best Javascript libraries out there; and, as stated before, I'll never start another web development project without really embarassing, but this code does NOT work in Internet Explorer. I will debug this and post an update. Hi Ben, I would like to express my sincere thanks for this fantastic video. I must say again as I said on my previous email - I think that your videos are head and shoulders above the rest. Very clearly worded, and more specifically, extremely well exemplified. Also, thanks for passing on extensive knowledge and support. I have really enjoyed watching the video and am very grateful for the opportunity to ask further questions. Below I have some questions and also discuss the problems/bugs I encountered while trying the code: 1 - In IE6, the code does not run properly. I have debugged the code and this is the extract of code from the JQUERY library where it breaks: Does the JQUERY library offers cross browser support including IE6? 2 - I have my own little javascript library which I use to work with xml. I use XPATH quite extensively to find the nodes I need in the xml. I use XPATH expression like this: Is it possible to use XPATH with JQUERY? 3 - There is something that is still not clear to me. 
How do I move/copy a specific XML structure from one XML document into another XML document, in a specific location? Suppose I have this xml: XML1 Suppose I want to move/copy all the address elements from XML1, with all their attributes and child elements, to another xml - XML2 - under the addresses tag. I am not sure how to achieve this!!! XML2 This would be the required output: I would appreciate it if you could show us some code examples. Again, thanks for your help. Cheers, C

I am sorry, but I was not able to post the xml and code sample since the blog was blocking my message.

@Cleyton, Don't worry about it. My blog blocks things that it thinks might be spammers (based on content). You probably just had a link tag or something. I got your email. Thanks for pointing out this *glaring* IE compatibility issue. I apologize for not testing this. I just assumed...

Hi Ben. Excellent video. Thanks for sharing the knowledge.

Nice video, what programs are you using to produce these?

Hi Ben, I was wondering if the reason why the code does not work in IE6/7 is because in IE6/7 the way to create an xml document is different from Mozilla-based browsers. IE uses ActiveX: new ActiveXObject("MSXML2.DOMDocument") // or ("MSXML2.DOMDocument.4.0"), while FF uses parser.parseFromString(nd, "text/xml") or document.implementation.createDocument. Hope this will shed some light. One thing I don't understand about the JQuery library is how it treats an xml string you pass to it. Does it create an xml document? Also, shouldn't the library be cross-browser so that it would work in IE6 as well? Cheers, C
You said you were working on an update that would be IE-compatible. Could you please shed some light on the outcome of your tests? I have noticed that a lot of people are having the same problem, i.e. getting JQuery to manipulate xml in IE and also in FF 3.0. Based on my research I believe there is a bug in JQuery when it comes to working with XML. What are your thoughts on that? Cheers, C

@Cleyton, Sorry my man, I have been so busy with exploring the ColdFusion 9 alpha that I've hardly had any free time. I have a conference coming up next week, so I will most likely have more free time after that. Sorry for the suspense! Thanks

.html() is undefined. This is not a big problem for me, because I don't really need to get the .html(), or I can get it using a different approach since I'm developing only on Firefox, but it seems like really strange behaviour.

@Stefano, Very interesting. I wonder if it has to do with the fact that I am manually building the XML data string locally, rather than getting it from a remote URL? Very curious.

The html() property no longer works (in Safari or IE6). By using text() instead of html(), I can get the text, but not the XML elements themselves. Argh! Still looking, but so far I see no (browser-independent) way of serializing the xml object to its string representation... anyone else having any luck???

1.) There is no html() function for an xml node, because xml elements do not have an innerHTML property, which is what the html() function utilizes. In fact, by my understanding of JQ, using html() on a pure xml doc shouldn't work at all unless the xml node in question happens to have the XHTML or HTML namespace attached to it.
This means if you want to get the nodes as a serialized string, you need to create an extension or global utility function to do so, for example:

$.fn.xmlString = function(){
	var str = '';
	this.each(function(){
		if(window.ActiveXObject){
			str += this.xml;
		} else {
			str += (new XMLSerializer()).serializeToString(this);
		}
	});
	return str;
};

2.) I could be wrong about this because I just ran across the issue (in fact, I found this post in trying to find proof that my suspicion is correct), but you can't simply move a node from document A to document B because they don't have the same owner document - at least this is the way it is in PHP when you're using DOMDocument and/or SimpleXml. There are particular functions you use for this... I think this is probably an issue with differences between the underlying implementations of XML in the browsers - oddly enough, this implies Mozilla/Gecko is actually LESS strict than MS and Webkit. What really gets me is that I believe both Webkit and Gecko use nearly identical, if not the same, XML DOM implementations (as implied by the simple if/else statement in the serializer above). Hope that helps.

Can we convert an xml file into a csv file using this jQuery? If yes, can you please provide me the code.

@Pavan, Are you talking about simply creating a comma-delimited list? Or creating an actual physical file? Thank you

Here is a simple test I put together to get around the IE problem.

if ($.browser.msie) {
	var xDom = new ActiveXObject("Microsoft.XMLDOM");
	jData = xDom.loadXML("<data></data>");
} else {
	jData = $("<data></data>");
}
alert(jData.size());
I believe you need to parse the XML a bit differently (using a proprietary IE technology).

Hi, IE8 uses the standard DOM class that is used by other browsers like FF. In your code you could try to check if it is IE8 and then use the standard way to create an xml document. Please let us know if this works. Cheers, C

Binding events and manipulating shared subtrees with multi-part selectors is all well and good, but how do I get Tricia's phone number?

@Ben, Thanks for a great walkthrough of how to use jQuery combined with XML. Really makes my mind bubble with ideas.
http://www.bennadel.com/blog/1637-video-presentation-manipulating-xml-with-jquery.htm
Around the time of the Rubik’s Cube craze, there was another puzzle craze, this time for the 15 Puzzle. In some ways this was like a 2-dimensional version of the Rubik’s Cube. However, there was a twist – depending on the initial configuration, the 15 Puzzle can be unsolvable. Prizes were offered which could never honestly be won, and people racked their brains to find a solution where none existed. In this article I want to share a Python implementation of the 15 Puzzle. It is a solvable version – but I guess if you are feeling mean you could modify the code and share the game with someone you want to annoy!

Programming the 15 Puzzle with Python

We will use the Python Turtle Graphics Module for this program, along with a tiny bit of Tkinter, which is the GUI (graphical user interface) library which the Turtle Module is built on. You will need an actual installation of Python to make this work rather than a browser-based version. Here is the listing:

import turtle
import tkinter as tk
import random

NUM_ROWS = 4  # Max 4
NUM_COLS = 4  # Max 4
TILE_WIDTH = 90  # Actual image size
TILE_HEIGHT = 90  # Actual image size
FONT_SIZE = 24
FONT = ('Helvetica', FONT_SIZE, 'normal')
SCRAMBLE_DEPTH = 100

images = []
for i in range(NUM_ROWS * NUM_COLS - 1):
    file = f"number-images/{i+1}.gif"  # Use `.format()` instead if needed.
    images.append(file)
images.append("number-images/empty.gif")
images.append("number-images/scramble.gif")


def register_images():
    global screen
    for i in range(len(images)):
        screen.addshape(images[i])


def index_2d(my_list, v):
    """Returns the position of an element in a 2D list."""
    for i, x in enumerate(my_list):
        if v in x:
            return (i, x.index(v))


def swap_tile(tile):
    """Swaps the position of the clicked tile with the empty tile."""
    global screen
    current_i, current_j = index_2d(board, tile)
    empty_i, empty_j = find_empty_square_pos()
    empty_square = board[empty_i][empty_j]
    if is_adjacent([current_i, current_j], [empty_i, empty_j]):
        temp = board[empty_i][empty_j]
        board[empty_i][empty_j] = tile
        board[current_i][current_j] = temp
        draw_board()


def is_adjacent(el1, el2):
    """Determines whether two elements in a 2D array are adjacent."""
    if abs(el2[1] - el1[1]) == 1 and abs(el2[0] - el1[0]) == 0:
        return True
    if abs(el2[0] - el1[0]) == 1 and abs(el2[1] - el1[1]) == 0:
        return True
    return False


def find_empty_square_pos():
    """Returns the position of the empty square."""
    global board
    for row in board:
        for candidate in row:
            if candidate.shape() == "number-images/empty.gif":
                empty_square = candidate
    return index_2d(board, empty_square)


def scramble_board():
    """Scrambles the board in a way that leaves it solvable."""
    global board, screen
    for i in range(SCRAMBLE_DEPTH):
        for row in board:
            for candidate in row:
                if candidate.shape() == "number-images/empty.gif":
                    empty_square = candidate
        empty_i, empty_j = find_empty_square_pos()
        directions = ["up", "down", "left", "right"]
        if empty_i == 0:
            directions.remove("up")
        if empty_i >= NUM_ROWS - 1:
            directions.remove("down")
        if empty_j == 0:
            directions.remove("left")
        if empty_j >= NUM_COLS - 1:
            directions.remove("right")
        direction = random.choice(directions)
        if direction == "up":
            swap_tile(board[empty_i - 1][empty_j])
        if direction == "down":
            swap_tile(board[empty_i + 1][empty_j])
        if direction == "left":
            swap_tile(board[empty_i][empty_j - 1])
        if direction == "right":
            swap_tile(board[empty_i][empty_j + 1])


def draw_board():
    global screen, board
    # Disable animation
    screen.tracer(0)
    for i in range(NUM_ROWS):
        for j in range(NUM_COLS):
            tile = board[i][j]
            tile.showturtle()
            tile.goto(-138 + j * (TILE_WIDTH + 2), 138 - i * (TILE_HEIGHT + 2))
    # Restore animation
    screen.tracer(1)


def create_tiles():
    """
    Creates and returns a 2D list of tiles based on turtle objects
    in the winning configuration.
    """
    board = [["#" for _ in range(NUM_COLS)] for _ in range(NUM_ROWS)]
    for i in range(NUM_ROWS):
        for j in range(NUM_COLS):
            tile_num = NUM_COLS * i + j
            tile = turtle.Turtle(images[tile_num])
            tile.penup()
            board[i][j] = tile

            def click_callback(x, y, tile=tile):
                """Passes `tile` to `swap_tile()` function."""
                return swap_tile(tile)

            tile.onclick(click_callback)
    return board


def create_scramble_button():
    """Uses a turtle with an image as a button."""
    global screen
    print(images)
    button = turtle.Turtle(images[NUM_ROWS * NUM_COLS])
    button.penup()
    button.goto(0, -240)
    button.onclick(lambda x, y: scramble_board())


def create_scramble_button_tkinter():
    """An alternative approach to creating a button using Tkinter."""
    global screen
    canvas = screen.getcanvas()
    button = tk.Button(canvas.master, text="Scramble", background="cadetblue",
                       foreground="white", bd=0, command=scramble_board)
    canvas.create_window(0, -240, window=button)


def main():
    global screen, board
    # Screen setup
    screen = turtle.Screen()
    screen.setup(600, 600)
    screen.title("15 Puzzle")
    screen.bgcolor("aliceblue")
    screen.tracer(0)  # Disable animation
    register_images()
    # Initialise game and display
    board = create_tiles()
    create_scramble_button_tkinter()
    # create_scramble_button()
    scramble_board()
    draw_board()
    screen.tracer(1)  # Restore animation


main()
turtle.done()

You will need the images to make this work – you can download them (and the code) from here.
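The intro mentions that, depending on the initial configuration, the 15 Puzzle can be unsolvable; the `scramble_board()` function above sidesteps this by only making legal slides from the solved position. As a hedged sketch (assuming the standard inversion-parity rule for a width-4 board, and using plain integers 1–15 with 0 for the blank instead of the turtle tiles used in the game), you could test an arbitrary arrangement like this:

```python
# Solvability check for a 4x4 fifteen puzzle, using inversion parity.
# Tiles are the integers 1..15 in row-major order, with 0 for the blank.
# NOTE: this flat-list representation is an assumption for illustration;
# it is not the turtle-based board used in the game code above.

def is_solvable(tiles):
    """Return True if the 4x4 arrangement can be slid into 1..15 order."""
    flat = [t for t in tiles if t != 0]
    # Count inversions: pairs of tiles that appear out of numerical order.
    inversions = sum(
        1
        for i in range(len(flat))
        for j in range(i + 1, len(flat))
        if flat[i] > flat[j]
    )
    # Row of the blank counted from the bottom, 1-based (4 rows total).
    blank_row_from_bottom = 4 - tiles.index(0) // 4
    # For an even board width, the position is solvable exactly when
    # inversions + blank-row-from-bottom is odd.
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
print(is_solvable(solved))  # True

# Sam Loyd's famous prize puzzle: 14 and 15 swapped - unsolvable,
# which is why those prizes could never honestly be won.
loyd = list(range(1, 14)) + [15, 14, 0]
print(is_solvable(loyd))  # False
```

Because every legal slide preserves this parity, any board produced by repeated calls to something like `swap_tile()` from the solved state will pass the check.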
A couple of points about the code:

- The game logic and the display are tightly coupled in this implementation, which is not always a great idea – I was keen to get to playing the game so was hasty with the planning.
- I have provided two ways to implement the scramble button. The Tkinter version could provide a gentle introduction to Turtle's parent library.
- There is a technique for passing additional parameters to the click callback which I discuss in this article about the 21 Game.
- There is a small amount of repetition in the code which I may refactor at some point, but for now, as far as I can tell, you have a working version of the 15 Puzzle to play with.

Solving the 15 Puzzle

I don't want to deprive you of the fun of solving this puzzle for yourself. What I have done is made it possible to work with a smaller version in order to develop strategies and work out what is possible. To do this, just change the constants at the top of the code. One concept I find useful to think about when trying to be more effective than simply randomly moving squares is cycles – see if you can spot any cycles happening as you attempt to solve this puzzle.

So that was my implementation of the 15 Puzzle in Python. I hope you found it enjoyable. If you want a challenge, try writing the code for the game yourself. And let me know how you get on with either the strategy or the code in the comments below. Happy puzzling!

1 Comment on “The 15 Puzzle with Python Turtle Graphics”

Great code. Thanks.
https://compucademy.net/the-15-puzzle-with-python-turtle-graphics/
Features

PyScaffold comes with a lot of elaborated features and configuration defaults to make the most common tasks in developing, maintaining and distributing your own Python package as easy as possible.

Configuration, Packaging & Distribution

All configuration can be done in setup.cfg, like changing the description, url, classifiers, installation requirements and so on, as defined by setuptools. That means in most cases it is not necessary to tamper with setup.py. The syntax of setup.cfg is pretty much self-explanatory and well commented. In order to build a source, binary or wheel distribution, just run python setup.py sdist, python setup.py bdist or python setup.py bdist_wheel.

Uploading to PyPI

Of course uploading your package to the official Python package index PyPI for distribution also works out of the box. Just create a distribution as mentioned above and use twine to upload it to PyPI, e.g.:

pip install twine
twine upload dist/*

For this to work, you have to first register a PyPI account. If you just wanna test, please be kind and use TestPyPI before uploading to PyPI.

Warning: Be aware that the usage of python setup.py upload for PyPI uploads also works, but is nowadays strongly discouraged, and even some of the new PyPI features won't work correctly if you don't use twine.

Namespace Packages

Optionally, namespace packages can be used if you are planning to distribute a larger package as a collection of smaller ones. For example, use:

putup my_project --package my_package --namespace com.my_domain

to define my_package inside the namespace com.my_domain in java-style.

Package and Files Data

Additional data, e.g. images and text files, that reside within your package and are tracked by Git will automatically be included (include_package_data = True in setup.cfg). It is not necessary to have a MANIFEST.in file for this to work. Just make sure that all files are added to your repository.
To read this data in your code, use:

from pkgutil import get_data
data = get_data('my_package', 'path/to/my/data.txt')

Starting from Python 3.7, an even better approach is using importlib.resources:

from importlib.resources import read_text, read_binary
data = read_text('my_package', 'path/to/my/data.txt')

The library importlib_resources provides a backport of this feature.

After you have Sphinx installed, start editing the file docs/index.rst to extend the documentation. The documentation also works with Read the Docs. The Numpy and Google style docstrings are activated by default. Just make sure Sphinx 1.3 or above is installed.

Dependency Management in a Breeze

PyScaffold out of the box allows developers to express abstract dependencies and take advantage of pip to manage installation. It also can be used together with a virtual environment to avoid dependency hell during both development and production stages. In particular, PyPA's Pipenv can be integrated in any PyScaffold-generated project by following standard setuptools conventions. Keeping abstract requirements in setup.cfg and running pipenv install -e . is basically what you have to do (details in Dependency Management).

Warning: Experimental Feature - Pipenv support is experimental and might change in the future.

Use the flag --travis to generate templates of the Travis configuration files .travis.yml and tests/travis_install.sh, which even features the coverage and stats system Coveralls. In order to use the virtualenv management and test tool Tox, the flag --tox can be specified. If you are using GitLab, you can get a default .gitlab-ci.yml, also running pytest-cov, with the flag --gitlab. Rely on the tox documentation for detailed configuration options.

Management of Requirements & Licenses

Installation requirements of your project can be defined inside setup.cfg, e.g. install_requires = numpy; scipy.
To avoid package dependency problems, it is common not to pin installation requirements to any specific version, although minimum versions, e.g. sphinx>=1.3, or maximum versions, e.g. pandas<0.12, are used sometimes. More specific installation requirements should go into requirements.txt. This file can also be managed with the help of pip compile from pip-tools, which basically pins packages to the current version, e.g. numpy==1.13.1. The packages defined in requirements.txt can be easily installed with:

pip install -r requirements.txt

All licenses from choosealicense.com can be easily selected with the help of the --license flag.

Extensions

PyScaffold comes with several extensions:

- Create a Django project with the flag --django, which is equivalent to django-admin.py startproject my_project enhanced by PyScaffold's features.
- With the help of Cookiecutter it is possible to further customize your project setup with a template tailored for PyScaffold. Just use the flag --cookiecutter TEMPLATE to use a cookiecutter template which will be refined by PyScaffold afterwards.
- … and many more, like --gitlab to create the necessary files for GitLab.

There is also documentation about writing extensions.

Warning: Deprecation Notice - In the next major release, both the Cookiecutter and Django extensions will be extracted into independent packages. After PyScaffold v4.0, you will need to explicitly install pyscaffoldext-cookiecutter and pyscaffoldext-django in your system/virtualenv in order to be able to use them.

Updates from PyScaffold 2

Since the overall structure of a project set up with PyScaffold 2 differs quite a lot from a project generated with PyScaffold 3, it is not possible to just use the --update parameter. Still, with some manual effort, an update from a scaffold generated with PyScaffold 2 to PyScaffold 3's scaffold is quite easy.
Assume the name of our project is old_project, with a package called old_package and no namespaces. Then just:

- make sure your worktree is not dirty, i.e. commit all your changes,
- run putup old_project --force --no-skeleton -p old_package to generate the new structure in place, and cd into your project,
- move your old package over to the new src directory with git mv old_package/* src/old_package/ --force,
- check git status and add untracked files from the new structure,
- use git difftool to check all overwritten files, especially setup.cfg, and transfer custom configurations from the old structure to the new,
- check if python setup.py test sdist works, and commit your changes.

Adding features

With the help of an experimental updating functionality, it is also possible to add additional features to your existing project scaffold. If a scaffold lacking .travis.yml was created with putup my_project, it can later be added by issuing putup --update my_project --travis. For this to work, PyScaffold stores all options that were initially used to put up the scaffold under the [pyscaffold] section in setup.cfg. Be aware that right now PyScaffold provides no way to remove a feature which was once added.
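Because those stored options live in an ordinary INI-style section of setup.cfg, you can inspect them with the standard library's configparser. A minimal sketch — the setup.cfg content and the option names shown here are made up for illustration, not a documented PyScaffold contract:

```python
from configparser import ConfigParser

# A stand-in for a generated setup.cfg; the [pyscaffold] keys below
# are illustrative examples, not guaranteed to match real output.
SETUP_CFG = """
[metadata]
name = my_project

[pyscaffold]
version = 3.1
package = my_package
extensions =
    travis
    tox
"""

parser = ConfigParser()
parser.read_string(SETUP_CFG)

opts = {}
if parser.has_section("pyscaffold"):
    opts = dict(parser.items("pyscaffold"))
    print(opts["package"])             # my_package
    print(opts["extensions"].split())  # ['travis', 'tox']
```

This is roughly what `putup --update` has to do before it can replay your original choices.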
https://pyscaffold.org/en/latest/features.html
This blog post is a strange one in a lot of ways; it is more of me pointing out a set of recipes and linking through to a related article. So what are the recipes, and what will it show you, the reader, how to do? Well, I am going to start with a wee story first (don't worry, the point is on its way). A while back I got this email out of the blue from this guy in the states who was creating a MVVM framework for Windows 8, and I have kept in contact with this dude (and he really is a dude; if I can convince him to upload an image, you will see what I mean), and we got talking about his IOC container within his MVVM framework, which is like MEF for Windows 8. Looking through Ian's code, it was immediately obvious to me that Ian really (and I do mean really) knew how to use the Expression API within .NET. This namespace has always intrigued me, so I talked to Ian about the merits of writing a joint kind of article, where we would effectively come up with some scenarios to solve using the Expression API. Ian said he would be more than happy to write the code for any scenarios I could come up with, if I was happy to do the article writing side of things. This seemed a very fair way to do it, so we have done just that.

Now you may be asking what is so cool about the Expression API. Well, one thing I quite like is that you can literally write entire programs in it. Another thing that you see time and time again is creating compiled lambda expressions that have been built up on the fly, which compile down to a delegate, and so provide uber fast performance when compared to reflection. That is why the Expression API can be useful (at least we feel that way). That is essentially what the associated article is all about: we have a bunch of scenarios that I came up with (which I hope are good ones) which Ian has coded up.
The examples range from simple property get/set through to some rather complex examples where we show you how to do things like create If-Then-Else Expressions and compute a HashCode for an object based on its property values. Here is a list of the scenarios we will be covering. Want to know more? Please read the full article over here.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

I am lucky enough to have won a few awards for Zany Crazy code articles over the years.
http://www.codeproject.com/Articles/738838/Expression-API-Cookbook
Long and short filenames

Windows 9x and Windows XP-based OSes both allow for long filenames on a FAT16 partition. These filenames are limited to 255 characters, but some applications, like Windows 2000 Explorer (199 characters, including the period separator for the file extension), Windows XP Explorer (220 characters), and Windows 98 Explorer (235 characters), cannot display all of the characters of the filename. If you want to use names longer than Windows Explorer can display, you will have to use a different application.

On a FAT16 or FAT32 partition, you are still limited to the 8.3 naming convention. Windows 9x and Windows XP-based OSes get around this problem by cheating the file system. When you save a file, it is saved using one directory entry and a short 8.3-character filename. The short filename is created by using the first 6 characters of the filename followed by a tilde (~) and an incremental number. If you are using a Windows XP-based OS, then after creating four files with the same 6 starting characters, the formula for creating short names is changed. The first two characters are used, followed by a randomly generated 4-digit hexadecimal number, followed by a tilde (~) and the number 1. The table below lists the names of six files and their short filenames that were created in the same directory on a Windows XP system. To get a listing of the short filenames, you can use dir /x or open the Properties window for each file, by right-clicking on the file and choosing Properties.

Short Filenames in Windows XP

Long Filename Entry     Short Filename Entry
ShortFileTest1.txt      SHORTF~1.TXT
ShortFileTest2.txt      SHORTF~2.TXT
ShortFileTest3.txt      SHORTF~3.TXT
ShortFileTest4.txt      SHORTF~4.TXT
ShortFileTest5.txt      SH0AF2~1.TXT
ShortFileTest6.txt      SHD0C6~1.TXT

If you are using Windows 9x, then the filenames continue to increment until you run out of directory entries on a disk or you hit the limit of 65,536 entries in any given directory.
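The Windows XP naming scheme just described (the first six characters plus ~N, then, after four collisions, two characters plus four random hex digits plus ~1) can be sketched in a few lines. This is a simplified model for illustration only — real VFAT short-name generation has additional rules (stripping spaces and illegal characters, among others) that are not shown here:

```python
import random

def short_name(long_name, existing):
    """Simplified Windows XP-style 8.3 short-name generation.

    `existing` is the set of short names already used in the directory.
    Assumes a plain name.ext input; this sketch ignores the extra
    character-stripping rules real VFAT applies.
    """
    base, _, ext = long_name.upper().rpartition(".")
    ext = ext[:3]
    # First four names: first six characters plus ~1 .. ~4.
    for n in range(1, 5):
        candidate = f"{base[:6]}~{n}.{ext}"
        if candidate not in existing:
            return candidate
    # After that: two characters, four random hex digits, then ~1.
    while True:
        hex_part = f"{random.randrange(0x10000):04X}"
        candidate = f"{base[:2]}{hex_part}~1.{ext}"
        if candidate not in existing:
            return candidate

used = set()
for i in range(1, 7):
    name = short_name(f"ShortFileTest{i}.txt", used)
    used.add(name)
    print(name)
```

The first four names come out as SHORTF~1.TXT through SHORTF~4.TXT, matching the table above; the fifth and sixth follow the SHxxxx~1.TXT pattern (with random hex digits, so the exact values differ from the table's examples).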
As the file names increment and move to ~10 or ~100, the number of characters at the start of the name decreases to five and then four. Microsoft has stated in different documentation for Windows 9x that it will not allow more than 99 files to be created in a directory with the same initial characters for the short filename. So if there were several files that started with the words "My File for something.txt", then the last file that can be created in the directory has a short filename of myfil~99.txt. After performing tests with each version of Windows 9x, I can tell you that this stated information is wrong.

You have now seen how short filenames are generated, but the question about where the long filenames are stored still exists. The long filenames are stored in additional empty directory entries. The characters for the long filename are stored using 11 characters per additional directory entry. So a file with a name of My financial report for 2000.txt takes one directory entry for the short filename (possibly myfina~1.txt) and one additional entry for each 11 characters in the filename, or an additional three entries. That means that this one file would actually occupy four directory entries on your drive.

These long filename directory entries have a non-standard attribute combination of Read-only, Hidden, System, and Volume Label. Although many files on your disk may have a combination of Read-only, Hidden, and System, Volume Label is usually used alone and only on one directory entry that stores the Volume Label for the disk. By using all four of these attributes, they are a nonstandard combination, and if MS-DOS systems see these entries, they ignore them rather than generating an error.

One of the problems with long filenames occurs when the long filename entries disappear. This can happen if you use MS-DOS-based disk utilities on your disk.
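The arithmetic above (one short-name entry plus one extra entry per 11 characters of the long name) is easy to check for yourself. A small sketch using the 11-characters-per-entry figure quoted in the text:

```python
from math import ceil

def directory_entries_used(long_name, chars_per_entry=11):
    """Total directory entries a file consumes: 1 for the 8.3 short
    name, plus enough long-filename entries to hold the full name,
    per the figure quoted in the text (11 characters per entry)."""
    return 1 + ceil(len(long_name) / chars_per_entry)

name = "My financial report for 2000.txt"
print(len(name))                     # 32 characters
print(directory_entries_used(name))  # 4 entries in total
```

This reproduces the example in the text: a 32-character name needs three long-filename entries on top of its short-name entry, four directory entries in all.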
Some of these utilities will tell you that you have a problem with your directory entries and offer to fix them. Fixing unfortunately means deleting all of the "invalid" entries, which means you lose all of your long filenames. This is not a good thing. Microsoft makes conflicting claims about the compatibility of the MS-DOS 6.x versions of scandisk.exe and defrag.exe. Some of their documentation states that MS-DOS 6.x utilities will not harm the long filename entries, while other documentation states that you should not use any MS-DOS-based file utilities. Having used MS-DOS 6.x versions of defrag.exe on my disks, I have lost the long filename entries, which makes me think that these utilities are not compatible with the long filename entries.

For Windows 9x, Microsoft provides a utility called either lfnbk.exe or sulfnbk.exe, depending on your version. This program runs with one of two switches, either /b or /r. The first switch backs up all of your long filename entries into a file on the root of your drive (lfn.dat). It also strips the current names from your file system so that older utilities can be run. After using your utilities, you can then use lfnbk /r to restore your long filenames to their original state.

The brunt of many a virus hoax, sulfnbk.exe is a valid Windows application and not a virus, as has often been misreported on the Internet.

There may come a time when you attempt to copy files to a destination that does not support long filenames. This used to happen with NetWare 3.x servers that had not enabled long filename support (OS/2 namespace). If this happens, you see a Rename File dialog box for each file that has a long filename.
http://sourcedaddy.com/aplus/long-and-short-filenames.html
Hi :) Today the instructor was trying to teach us how the following code works. To be truthful I couldn't understand anything. The first thing he changed in today's program was that he declared the user-defined function within int main, though in the past he told us to declare it before entering int main. Please have a look at CODE 2, which will help you understand what I'm saying. I don't understand the reason for this.

1: I copied the following CODE 1 onto my flash drive. You can see that the declaration for the user-defined function has been made inside int main. What's the reason for this change?
2: What kind of data type is this "int&"?
3: Could you please tell me in simple words what passing arguments by reference means? What's the advantage of this practice?

Please keep your replies as simple as possible. Thank you very much.

CODE 1:

Code:
// orders two arguments passed by reference
#include <iostream>
using namespace std;

int main()
{
    void order(int&, int&);      // prototype
    int n1=99, n2=11;            // this pair not ordered
    int n3=22, n4=88;            // this pair ordered
    order(n1, n2);               // order each pair of numbers
    order(n3, n4);
    cout << "n1=" << n1 << endl; // print out all numbers
    cout << "n2=" << n2 << endl;
    cout << "n3=" << n3 << endl;
    cout << "n4=" << n4 << endl;
    return 0;
}
//--------------------------------------------------------------
void order(int& numb1, int& numb2)   // orders two numbers
{
    if(numb1 > numb2)                // if 1st larger than 2nd,
    {
        int temp = numb1;            // swap them
        numb1 = numb2;
        numb2 = temp;
    }
}

CODE 2:

Code:
// calculating area of a circle using a user-defined function
#include <iostream>
#include <cstdlib>
using namespace std;

float area(float dummy);

int main()
{
    float r;
    float a;
    cout << "enter radius: ";
    cin >> r;
    a = area(r);
    cout << a;
    system("pause");
    return 0;
}
//------------------------------------
// area(float), function definition
float area(float dummy)
{
    float Area;
    Area = 3.1416*(dummy*dummy);
    return Area;
}
http://cboard.cprogramming.com/cplusplus-programming/137959-int-passing-arguments-reference-etc-printable-thread.html
So I wanted to solve this problem: After a tennis tournament each player was asked how many matches he had. An athlete can't play more than one match with another athlete. As input the only thing you have is the number of athletes and the matches each athlete had. As output you will have 1 if the tournament was possible according to the athletes' answers, or 0 if not. For example:

Input: 4 3 3 3 3 Output: 1
Input: 6 2 4 5 5 2 1 Output: 0
Input: 2 1 1 Output: 1
Input: 1 0 Output: 0
Input: 3 1 1 1 Output: 0
Input: 3 2 2 0 Output: 0
Input: 3 4 3 2 Output: 0

The first number of the input is not part of the athletes' answers; it's the number of athletes that took part in the tournament. For example, in 6 2 4 5 5 2 1 we have 6 athletes that took part and their answers were 2 4 5 5 2 1. This is what I have written so far and it is not working completely:

Code:
import java.util.Scanner;
import java.util.Arrays;

public class Tennis {

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        String N;
        int count;
        int sum = 0;
        int max;
        int activeAthletes;
        int flag;

        System.out.printf("Give: ");
        N = input.nextLine();
        String[] arr = N.split(" ");
        int[] array = new int[arr.length];
        for (count = 0; count < arr.length; count++) {
            array[count] = Integer.parseInt(arr[count]);
            //System.out.print(arr[count] + " ");
        }
        for (count = 1; count < arr.length; count++) {
            sum += array[count];
        }
        //System.out.println("\n" + sum);
        activeAthletes = array[0];
        for (count = 1; count < array.length; count++) {
            if (array[count] == 0) {
                activeAthletes--;
            }
        }
        max = array[1];
        for (count = 2; count < array.length; count++) {
            if (array[count] > max) {
                max = array[count];
            }
        }
        // System.out.println(max);
        if ((sum % 2 == 0) && (max < activeAthletes)) {
            flag = 1;
        } else {
            flag = 0;
        }
        System.out.println(flag);
    }
}
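For reference, the standard feasibility test for a list of match counts like this is the Erdős–Gallai condition for graphical degree sequences. A sketch in Python (not a fix for the Java above, just the underlying check; note the problem's examples also reject a degenerate one-athlete tournament, which would need extra special-casing):

```python
def is_graphical(degrees):
    # Erdős–Gallai: a degree sequence is realizable by a simple graph
    # iff the total is even and, for every k, the sum of the k largest
    # degrees is <= k(k-1) + sum(min(d, k)) over the remaining degrees.
    d = sorted(degrees, reverse=True)
    if sum(d) % 2:
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphical([3, 3, 3, 3]))        # True  -> tournament output 1
print(is_graphical([2, 4, 5, 5, 2, 1]))  # False -> tournament output 0
```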
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/15344-very-hard-algorithm-implementation-printingthethread.html
2019-08-13 20:08:06 8 Comments

I am new to constraint programming and try to figure out how to do an "at least n" constraint. For example I have int variables x, y and z, all within a range of 0 to 5. Now I want all solutions in which at least 2 of the variables are between 2 and 3. So something like a "sum of given conditions >= 2". How would I do this in Python, and ideally with Google's OR-Tools? Thanks
@Laurent Perron 2019-08-14 00:28:55

@sal 2019-08-13 20:19:32

Assuming you have these conditions then you could build a list of values, and then apply lambda, map and sum to find things out.
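The counting idea @sal describes can be sketched like this in plain Python (an illustrative reconstruction, not the original answer's code; a CP solver such as OR-Tools would express the same thing with boolean indicator variables summed into a linear constraint):

```python
from itertools import product

def at_least_n(values, lo, hi, n):
    # Count how many of the values fall in [lo, hi], then compare
    # that count with n (each True counts as 1 when summed).
    return sum(map(lambda v: lo <= v <= hi, values)) >= n

# Enumerate every assignment of x, y, z in 0..5 and keep those
# where at least 2 of the variables lie between 2 and 3.
solutions = [vals for vals in product(range(6), repeat=3)
             if at_least_n(vals, 2, 3, 2)]
print(len(solutions))  # 56
```

Brute-force enumeration only works for tiny domains like this one, but it states the "sum of given conditions >= 2" idea directly.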
https://tutel.me/c/programming/questions/57484655/how+does+one+create+a+quotat+least+nquot+constraint
How do I change image size to fit on other screens and preserve aspect ratio

Hi, I am relatively new to Qt and have built my first user interface with Qt Quick QML code. All images are put in the right place with x and y coordinates, and the standard size of the UI is width: 1540 and height: 540. But I want it to fit on an iPad with the resolution 1024 x 768. How do I scale the whole UI to fit on other screens and preserve the aspect ratio, so that the UI looks the same on both screens? My code looks like this:

import QtQuick 2.0
import QtQuick.Window 2.0
import QtGraphicalEffects 1.0

Item{
    id: root
    focus: true
    width:1440
    height:540
    ...
    Image {
        id: layer_0
        source: "images/layer_0.png"
        x: 0
        y: 0
        opacity: 1
    }
    Image {
        id: layer_1
        source: "images/layer_1.png"
        x: 4
        y: 1
        opacity: 1
    }
    Image {
        id: emap
        source: "images/€map.png"
        opacity: 1
    }
    ...
}

Hi @antemort and Welcome. You should not hard-code the width and height; instead use the Screen QML item to get the actual resolution of the device it is running on. More here. Usage:

import QtQuick 2.4
import QtQuick.Window 2.2

Item {
    width: Screen.width
    height: Screen.height
}

But what if I want it to have, say, a 1540:540 aspect ratio and I want to preserve it when changing resolution? Correct me if I'm wrong (I probably am), but doesn't Screen.width make the images stretch to fit the screen? And if, for example, the screen size is like the iPad's 1024 x 768, it will not stay at the same aspect ratio as 1540:540.

@p3c0 I have tried with fillMode: Image.PreserveAspectFit on my images but they do not resize when I resize the window. Am I using it wrong? Now when I use Screen.width it only makes the white space bigger around the image. What am I doing wrong?

@antemort Maybe it is due to the fixed x and y properties that you have set for each Image. Instead try using anchors.

Now I have worked with layouts and resized all images to the same ratio, 1440 x 540, except 2 images. Those two images are speed-gauge pins and need to be positioned in the middle of the gauges.
What I want now is that those pins should change position and size when the window, which is set to full screen, is resized. So if the image window is resized then these two images should also resize and find their way back to the center of the gauge. Can you help me?

@antemort If I understood you correctly then you just need to use anchors.centerIn and anchor it to its parent so that it stays centered even if you resize the parent window. To resize it according to the parent you can bind it to the parent's dimensions. See the following example:

import QtQuick 2.4

Item {
    width: 200
    height: 200
    Image {
        anchors.centerIn: parent
        width: parent.width / 2.0 // can be anything as per your requirements
        height: parent.height / 2.0
        source: ""
    }
}

Try using your image as the source.

I'm sorry, I must have explained it badly. I have two gauges on different sides of a symmetry line, so they are not aligned in the center of the picture. The two pins are positioned with x and y values so that they sit in the center of the gauge circles. The pins are small images which have their origin at the right side of the image, facing the middle of the gauge circle. So when the screen gets bigger or smaller I want the width, height and the x and y values to change so that the pins follow the other images that have anchors.fill: parent.

@antemort Have you anchored the pin to the gauge circles or have you explicitly set x and y positions?

I have explicitly set x and y for the moment. I want to anchor, but I don't know how, since the pins are not positioned in the center of the image, only in the center of the circles.
Image {
    id: visare
    source: "images/visare_2_skugga.png"
    x: 113
    y: 271
    width: 210
    height: 13
    transformOrigin: Item.Right
    opacity: 1
    rotation: needleAngle
    smooth: true
    Behavior on rotation{
        SpringAnimation{ spring: 0.5; damping: 0.2; mass:2; }
    }
}
Image {
    id: visare_rpm
    source: "images/visare_2_skugga.png"
    x: 912
    y: 270
    width: 210
    height: 13
    transformOrigin: Item.Right
    opacity: 1
    rotation: needleRpm
    smooth: true
    Behavior on rotation{
        SpringAnimation{ spring: 0.5; damping: 0.2; mass:2; }
    }
}

@antemort So why don't you use anchors.centerIn: parent and align it to the center according to the gauge? Here parent being the gauge.

Because the gauge image has the measure 1440x540 like the rest of the images; there are two circles in one image. If I use anchors.centerIn it will end up in the center of the image, in between those circles.

- p3c0 Moderators

@antemort Wow, this is way too different from what I understood earlier. You have a large image that has some circles at certain positions, and you want the pins to be centered on those circles. Is that it? And for that you have hard-coded the pins' x and y positions by trial and error (i.e. manually). If that's the case then I think you should change the implementation. It would be too difficult to maintain that position when the image resizes. Can you describe/explain it pictographically?

Two circles in one image* Here it is: print screen from Qt Designer with the pins in the right position.

@antemort Hmm, this is what I guessed and was afraid of. I think it's a bad idea to go for this way of implementation. Instead you should divide it into separate images: one being the speedometer image, the other being the background. Then you can just use an Image element to load that speedometer image and you will thus be able to place it anywhere. Now the pins can be made children of the speedometer Image element and can be centered on the speedometer (parent) using anchors.centerIn. This way it will be independent of any resolution changes.
I had it before, but then I needed to position all the images, and then I couldn't anchor them. When I don't anchor them they don't scale or follow the window when it scales.

@antemort You can put the 2 gauge images beside each other inside a RowLayout and add it to the Image element which contains the background image. The advantage of layouts is that they automatically position the items inside them when resizing.
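For reference, the uniform "fit" scaling discussed in this thread (a 1440 x 540 design shown on a 1024 x 768 screen) boils down to taking the smaller of the two axis ratios. A quick sketch in Python (illustrative only):

```python
def fit_scale(src_w, src_h, dst_w, dst_h):
    # Uniform scale factor that makes the source fit inside the
    # destination without distorting its aspect ratio.
    return min(dst_w / src_w, dst_h / src_h)

scale = fit_scale(1440, 540, 1024, 768)
print(round(1440 * scale), round(540 * scale))  # 1024 384
```

Here the width ratio (1024/1440) is the limiting one, so the UI fills the iPad's width and letterboxes vertically.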
https://forum.qt.io/topic/55301/how-do-i-change-imagesize-to-fit-on-other-screens-and-preserve-aspect-ratio
Hello, like you can say 'public string' and there will be a box displayed in the script window in the inspector, is there a similar variable that lets you click and drag a script in there? Thanks!

Answer by Bunny83 · Nov 27, 2016 at 07:36 PM

This boils down to what you understand by "script". If you ask whether you can drag a script asset (the actual .cs or .js file) onto a variable, the answer is "no" for runtime scripts and "yes" for editor scripts. Scripts inside the Unity editor are represented with the editor type "MonoScript". However, that's an editor-only class that can't be used in a runtime script.

edit

I have quickly written a property drawer which allows you, by simply attaching an attribute to a string variable, to have an object field in the inspector where you can drag and drop MonoScript files. It will extract the name of the class inside the MonoScript and save that name in the string variable. Two files are required:

MonoScriptPropertyDrawer.cs - this is the actual PropertyDrawer. This file has to be placed in a folder named "editor" inside your assets folder.

MonoScriptAttribute.cs - this is just the custom attribute definition. This file must not be inside an editor folder as it is required by your runtime scripts.

I released those under the MIT license in case someone wants to change something. Oh, and it has a hidden "feature": you can click the label of the field in the inspector to toggle between the object field and the usual text field where you can manually edit the string. Note however that if you manually type in a string that doesn't represent the name of a MonoScript, the object field can't link properly back.
To use this property drawer you have to import this namespace in the script where you want to have a MonoScript field:

using B83.Unity.Attributes;

Then simply add the "MonoScript" attribute to your string variable like this:

[MonoScript] public string typeName;

That's all.

If, however, you ask whether you can drag and drop a script instance, the answer is yes. That's actually one of the main features of Unity. You just have to use the type of script you want to use as the variable type. Short example:

public class MyScript : MonoBehaviour
{
    public void Foo()
    {
        Debug.Log("Bar");
    }
}

In another script you can define a variable of type MyScript:

public MyScript script;

void Start()
{
    script.Foo();
}

When those two scripts are assigned to gameobjects in the scene, you can drag the gameobject with the MyScript attached onto the variable "script" in the inspector of the other.
https://answers.unity.com/questions/1277116/public-script-variable-in-inspector.html
! Step 4: Connections

Check over the entire board (again) for any bridges to ensure there are no short circuits. Now it is time to connect this up and test it. Connect the power (5v and ground). Connect the wires to the shift register; if you use the library as default you will connect Green to Arduino Pin 7, Blue to Arduino Pin 8, Yellow to Arduino Pin 9.

Step 5: Software

The method of using a shift register to drive these displays with only 3 pins seems to have been originally documented by Stephen Hobley. He did a great job of adjusting the built-in LiquidCrystal library so it works brilliantly with the 595 shift register. I have now updated this library to be compatible with Arduino 1.x and adjusted some of the shift register pin assignments to be easier to prototype with. You need to download the latest code. It is feature complete and should be a drop-in replacement for any project you already have. Here is the test Arduino sketch to show you how to use the new library, replacing the 6-pin LiquidCrystal hookup with a great 3-pin version.

--------------------COPY BELOW HERE--------------------

/*
 * 3-pin Arduino interface for HD44780 LCDs via 74HC595 Shift Register
 * by Rowan Simms code@rowansimms.com
 * License: Creative Commons - Attribution.
 * Full Documentation and Description:
 *
 * This sketch allows Arduinos to use a shift register to control an LCD, allowing
 * a reduction in pins it requires from 6 to 3 while still retaining full control
 * including backlight on/off.
 * This requires the use of the LiquidCrystal595 library
 * available at:
 */

#include <LiquidCrystal595.h>    // include the library

LiquidCrystal595 lcd(7,8,9);     // datapin, latchpin, clockpin

void setup() {
    lcd.begin(16,2);             // 16 characters, 2 rows
    lcd.clear();
    lcd.setCursor(0,0);
    lcd.print("Wow. 3 pins!");
    lcd.setCursor(0,1);
    lcd.print("Fabulous");
}

void loop() {
    // not used.
}

--------------------COPY ABOVE HERE--------------------

Copy this into a new sketch after installing the library and upload it to your Arduino. You should now be basking in the glorious glow of your LCD.

Step 6: Conclusion

This shield really does allow you to use just 3 pins of your Arduino to drive an LCD display - and it takes less than 6 seconds to connect it up. Don't want to commit to a shield just yet? Wish to do this with only 3 components and a breadboard? I understand that you may not wish to make a shield before trying this method out - that is completely understandable. For you, I have this documented for breadboards too. Sure, you will have to deal with more hookup wire, but it gives you a great way of at least trying this 3-pin method without any soldering. That layout, more code and wiring explanations are available from That's it. Enjoy your sub-6-second hookups!

149 Discussions

6 years ago on Introduction
I have this running great from an ATtiny85. It also has a TMP36 temp sensor to display the current temperature. LCD uses pins 0, 1, 2 and TMP36 uses pin 3.

Reply 6 years ago on Introduction
Here is my version. Thanks, it was a fun project.

Reply 4 years ago on Introduction
Hi, please, where did you buy this stripboard? Was it online? I just found another version with independent holes, different from yours... Sorry for my English, thanks!

Reply 2 years ago
Hi. You can get them from an electronics store. They sell copper boards with independent holes and copper stripboard.

Reply 6 years ago on Introduction
That is fantastic Matt. Really nice work. Being in London, I feel those cold mornings too. Thank-you for sharing.
Reply 6 years ago on Introduction
Hey thanks bitterOz. I gave your instructables page props in the description of the above video and in the prototype video. Thanks again, it was a fun project. Matt

Reply 5 years ago on Introduction
I'm wanting to build your project Matt. Any chance you could post the modified code for the attiny85?

Reply 5 years ago on Introduction
Hi. Here's a link I made for you. It should contain the code. The temp sensor I use is a TMP36 from Adafruit.com. I hope this helps. Matt

Reply 5 years ago on Introduction
Thanks! You should really do an instructable on this, or at the least, draw up a schematic for us noobs and post it in the description of your video.

9 days ago
Just me or do the cuts in section 2 not line up with the cuts shown in section 3?

5 months ago
It worked once and then started showing garbage values and never worked again. It shows a row of boxes whenever I power it up again, nothing else. Please help!

Reply 5 months ago
Connecting OE to ground worked

1 year ago
Made it on an Arduino. Thx for saving me 3 (really needed) pins. As heman said, needed lcd.setLED2Pin(HIGH); to turn the backlight on. If I would build it a second time I would change the layout a bit so you can change the poti without dismounting the LCD.

1 year ago
I've rechecked the circuit and code thrice, no hit for me... Tried flipping the e and c of the transistor too, including that extra bit of code...

Reply 1 year ago
Not even the backlight is working, and yes I have the correct transistor and have already switched it two times

2 years ago
@bitteroz Had to swap the emitter and collector of the BC547 NPN transistor to make the backlight work. Mistake in the circuit drawings? Or is my LCD different? I have a green one.

Reply 2 years ago
Also had to use the following to turn on the backlight: lcd.setLED2Pin(HIGH); lcd.shift595();

Reply 1 year ago
Would you send me the exact code you used?
1 year ago
Would it be possible to connect multiple screens with multiple shift registers to the same 3 pins, the same way you can daisy chain the shift registers for other purposes (like 7 segment LEDs or input buttons)?
https://www.instructables.com/id/Hookup-a-16-pin-HD44780-LCD-to-an-Arduino-in-6-sec/
> From: mongo57a at comcast.net [mailto:mongo57a at comcast.net] > > I see that there is a "global" command - and I assume its use would be > global var_name. This I have coded (works) but I am unable to > use it "global > name var_name is not defined". Not precisely. The 'global' statement tells a function to use a name in its module namespace, rather than its local namespace. There is one completely global namespace in python - if a name is not found in the module namespace, then the namespace of the __builtin__ module is searched. It is considered *very* bad form to stick things in the __builtin__ module. It is generally considered bad form to use the 'global' statement - it usually means that your design needs more work. There are of course exceptions to the above two statements, but you should really understand python namespaces completely first. FWIW, I have *never* used the 'global' statement in python, and I've only played with __builtin__ when doing some pretty funky stuff ;) Tim Delaney
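A minimal sketch of the lookup and rebinding behaviour Tim describes (illustrative only, not from the original thread):

```python
counter = 0  # a module-level name

def bump():
    # 'global' makes assignment rebind the module-level name
    # instead of creating a new local one.
    global counter
    counter += 1

def shadow():
    counter = 99  # plain assignment: a brand-new local name
    return counter

bump()
print(counter)   # 1
print(shadow())  # 99
print(counter)   # still 1 - shadow() never touched the module name
```

As the post says, reaching for `global` is usually a design smell; returning values or holding state on an object is almost always cleaner.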
https://mail.python.org/pipermail/python-list/2002-October/160916.html
On 10/08/2017 at 08:41, xxxxxxxx wrote:

I import a mesh of triangles (from an OBJ file). The overall object is a deformed plane. When I throw it under a subdivision surface, it suddenly gets holes. Obviously, not all triangles in the mesh are actually welded together. What I now need to do is to "weld" all points that are at identical locations into ONE point. I found a way to do that by executing an "Optimize Mesh", either through the menu or CallCommand; however, this will completely break if the user has changed the settings of "Optimize Mesh". Is there a programmatic way of doing that? It might be doable through c4d.utils.SendModelingCommand (but that documentation is kind of minimal) or by iterating through all points and doing some magic (but that raises the question of how to efficiently find all "identical" points and how to merge them without breaking any polys and/or UV setups). Any inputs welcome

On 10/08/2017 at 10:31, xxxxxxxx wrote:

Through SendModelingCommand you can fill a BaseContainer with options, so I guess it will do everything you want.
import c4d

def optimize(obj, tolerance) :
    doc = c4d.documents.GetActiveDocument()
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, obj)
    settings = c4d.BaseContainer()
    settings[c4d.MDATA_OPTIMIZE_TOLERANCE] = tolerance
    settings[c4d.MDATA_OPTIMIZE_POINTS] = True
    settings[c4d.MDATA_OPTIMIZE_POLYGONS] = True
    settings[c4d.MDATA_OPTIMIZE_UNUSEDPOINTS] = True
    c4d.utils.SendModelingCommand(command=c4d.MCOMMAND_OPTIMIZE,
                                  list=[obj],
                                  mode=c4d.MODELINGCOMMANDMODE_ALL,
                                  bc=settings,
                                  doc=doc)

def main() :
    doc.StartUndo()
    optimize(op, 0.001)
    doc.EndUndo()
    c4d.EventAdd()

if __name__=='__main__':
    main()

It looks like you already read the documentation, but in case you didn't:

On 10/08/2017 at 13:14, xxxxxxxx wrote:

Thx! Works like a charm

PS: I removed the duplicate line
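For reference, the manual approach the question considers (merging points that coincide within a tolerance, without going through the c4d API) can be sketched in plain Python by quantizing each point onto a grid. This is illustrative only: a production version would also remap polygon and UV indices, and handle near-duplicates that straddle grid-cell boundaries.

```python
def weld_points(points, tolerance=1e-3):
    """Merge points closer together than roughly `tolerance`.

    Returns (welded_points, remap) where remap[i] is the index of the
    welded point that original point i collapsed into.
    """
    welded, remap, seen = [], [], {}
    for p in points:
        # Quantize to a grid so coincident (within tolerance) points
        # land on the same dictionary key.
        key = tuple(round(c / tolerance) for c in p)
        if key not in seen:
            seen[key] = len(welded)
            welded.append(p)
        remap.append(seen[key])
    return welded, remap

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
welded, remap = weld_points(pts)
print(len(welded), remap)  # 2 [0, 1, 0]
```

The remap list is what you would use to rewrite each polygon's point indices so the geometry stays intact after welding.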
https://plugincafe.maxon.net/topic/10238/13727_welding-points/1
NAME
mouse_init, mouse_init_return_fd - specifically initialize a mouse

SYNOPSIS
#include <vgamouse.h>

int mouse_init(char *dev, int type, int samplerate);
int mouse_init_return_fd(char *dev, int type, int samplerate);

DESCRIPTION
These routines can be used to open the mouse manually, ignoring the mouse types or devices specified in the config file. dev is the name of the mouse device (defaults to /dev/mouse). samplerate may be MOUSE_DEFAULTSAMPLERATE (150) or any other value; it is probably in Hz. type is one of the types listed in vga_getmousetype(3). If these routines are used it is not necessary to call vga_setmousesupport(3), but it's probably better not to use these and to use vga_setmousesupport(3) instead. The return_fd version returns the file descriptor of the mouse device to allow you to do further tricks with the mouse (but the file handle may change during a VC switch). The other version just returns 0 if successful. Both return -1 on error.

SEE ALSO
svgalib(7), vgagl(7), libvga.config(5), eventtest(6), mouse_close(3), mouse_getposition_6d(3), mouse_get.
http://manpages.ubuntu.com/manpages/hardy/man3/mouse_init_return_fd.3.html
Composite controls are a bit more advanced than the controls we have seen thus far, but they are still fairly simple to build. For one thing, you still do not need to worry about handling PostBacks, because you can rely on all the features of the included controls to work just as if they were placed on the page themselves. Composite controls, because they typically encapsulate several pieces of UI code, are very close in function to user controls, so you will want to consider the differences between user controls and custom controls before you decide which method to use to create the functionality you need. For this example, we will simply give the user the option of making the YesNoDropDownList required. To do this, we will actually go ahead and use our existing YesNoDropDownList control and include it along with a RequiredFieldValidator in a new control called ReqYesNoDropDownList. The complete code for this new control is in Listing 12.5.

using ASPNETByExample;
using System.Web.UI.WebControls;

namespace ASPNETByExample
{
    public class ReqYesNoDropDownList : System.Web.UI.Control, System.Web.UI.INamingContainer
    {
        // Declare child controls
        protected YesNoDropDownList ynList;
        protected RequiredFieldValidator ynRequired;

        public ReqYesNoDropDownList()
        {
            this.ynList = new YesNoDropDownList();
            this.ynRequired = new RequiredFieldValidator();
            ynList.ID = "YesNo" + this.UniqueID;
            ynRequired.ControlToValidate = ynList.ID;
            ynRequired.Display = ValidatorDisplay.Dynamic;
            ynRequired.Enabled = true;
            ynRequired.EnableViewState = false;
            ynRequired.Text = "*";
            ynRequired.ErrorMessage = "You must select either Yes or No.";
        }

        public string Value
        {
            get { return this.ynList.Value; }
            set { this.ynList.Value = value; }
        }

        protected override void CreateChildControls()
        {
            this.Controls.Clear();
            this.Controls.Add(ynList);
            this.Controls.Add(ynRequired);
        }
    }
}

If you want to add Trace statements to your custom controls so that you can debug their execution using ASP.NET's built-in tracing support, you need to reference the HttpContext.Current instance. For example, the following line of code would output a line to the trace results of the ASP.NET page that the control was listed on:

System.Web.HttpContext.Current.Trace.Write("Render","Rendering ");
You only need to worry about this if you are inheriting directly from Control; all the existing Web controls already implement this interface (so, for example, our YesNoDropDownList doesn't need to specify this interface, since DropDownList already implements it). After declaring the controls that we will include in our composite control, we initialize their values in the ReqYesNoDropDownList() constructor. There's a little bit of a trick to note here. Since validator controls require a ControlToValidate property to be set, you need to know the name of the YesNoDropDownList. However, unless we set its ID ourselves, we have no way of knowing what unique ID the ASP.NET processor will give to this control. Worse, if we just hard-coded the ID field, we would never be able to use more than one of our controls on a page, because the names would conflict. The solution to this problem is to create an ID dynamically using the UniqueID of our composite control as part of the name of the child control. In this way, we can know the name of the control for the validator to reference, and we can still have as many of these composite controls on a page as we want. After dealing with the issue of IDs, we set some default properties for the RequiredFieldValidator, ynRequired. Because these child controls are declared as protected, the user cannot access any of these properties directly, so the only way they can be set is within the composite control. If you want your control's child controls to be available to users directly, you can either declare them as public (which is not usually recommended), or you can set up properties that map to the child control's properties. This is what we have done for the YesNoDropDownList Value property. Our ReqYesNoDropDownList exposes a public property of Value, which simply delegates its sets and gets to the ynList child control. Finally, we come to the most important method of a composite control, the CreateChildControls() method.
This method is similar to Render() in that it is built into the Control object and must be overridden in our composite control in order for us to use it. In this case, we are programmatically adding our child controls to this control's Controls collection, using the collection's Add() method. The controls are automatically rendered by the Control class in the order in which we have added them.

NOTE It is quite possible to build custom controls that inherit from other custom controls, ad infinitum. Some very powerful suites of controls can be built in this fashion. If you have several layers of controls that need to render user interface logic, either with the Render() method or the CreateChildControls() method, you can make sure that your superclass performs its rendering by using the base keyword in C# (which is analogous to the MyBase keyword in VB.NET, or the super keyword in Java). So, within the Render() method, you would call base.Render(), and within the CreateChildControls() method, you would call base.CreateChildControls().

To test out our new control, let's take a look at ReqYesNo.aspx, which is listed in its entirety in Listing 12.6.

<%@ Register
<html>
<body>
<form id="YesNo" method="post" runat="server">
You must answer yes or no:
<YN:ReqYesNoDropDownList
<asp:Button
<hr />
<asp:ValidationSummary
<h1> <%=fun.Value%> </h1>
</form>
</body>
</html>

For this test, we've modified the YesNo.aspx file so that it uses the ReqYesNoDropDownList (which was compiled into the same assembly as before), and removed the default value for the list so that the user will have to choose either Yes or No. We've also added a ValidationSummary control to display any error messages that are generated by our validation controls. Attempting to submit the form without choosing a value results in the error message being displayed, as in Figure 12.1.
Now that we have seen how to build custom controls from scratch or by inheriting from existing controls, let's dig a little bit deeper and look at some of the more advanced options available. Note that control-building techniques could easily fill a book themselves, so we will only be able to provide cursory coverage of some of these topics. When developing your own controls, there are several more advanced pieces of functionality that you may want to add. These include:

- Handling control events
- Handling PostBacks
- Using templates
- Raising events
- Databinding

These techniques are covered in the rest of the chapter.
Image processing

Dear all, I have a question about manipulating images in Sage. Here is an example:

import pylab
img = pylab.imread(DATA + 'lena.png')

img is now an array with float elements. But I want to draw some shapes on this image, like a circle, rectangle, polygon, etc. However, c = Circle, b = Rectangle, and a = Polygon are objects of certain classes, so something like show(img + c) or show(img + b) is not allowed. Another question: how can I put a circle at a chosen position (for example, a corner of the image)? Since img is an array, I don't know where each new row starts. Is there some simple way to do this? Thank you in advance
A Brief History of Helm

Helm was originally an open source project of Deis. Modeled after Homebrew, the focus of Helm 1 was to make it easy for users to rapidly install their first workloads on Kubernetes. We announced Helm officially at the inaugural KubeCon San Francisco in 2015. A few months later, we joined forces with Google's Kubernetes Deployment Manager team and began iterating on Helm 2. Our goal was to maintain the ease of use of Helm while adding an in-cluster component; that piece was called Tiller, and it handled installing and managing Helm charts.

This brings us to Helm 3. In what follows, I'll preview some of the new things on the roadmap.

Saying Hello to Lua

In Helm 2, we introduced templates. Early in the Helm 2 dev cycle, we supported Go templates, Jinja, raw Python code, and even a prototype of ksonnet support. But having multiple template engines simply caused more problems than it solved. So we decided to pick one. Go templates had four advantages:

- The library was built into Go
- Templates were executed in a tightly sandboxed environment
- We could inject custom functions and objects into the engine
- They worked well with YAML

While we retained an interface in Helm for supporting other template engines, Go templates became our default. With a few years of experience, we have seen engineers from many industries build thousands of charts using Go templates. And we've learned their frustrations:

- The syntax is hard to read and poorly documented.
- Language issues, such as immutable variables, confusing data types, and restrictive scoping rules, make simple things hard.
- The inability to define functions inside of templates makes it even harder to build reusable libraries.

Most importantly, by using a template language, we were essentially reducing Kubernetes objects to their string representation. (In other words, template developers must manage Kubernetes resources as YAML text documents.)
Work on Objects, not YAML Chunks

We repeatedly hear our users asking for the ability to inspect and modify Kubernetes resources as objects, not as strings. But they are equally adamant that however we would choose to provide this, it must be easy to learn and well supported in the ecosystem.

After months of investigating, we decided to provide an embedded scripting language that could be sandboxed and customized. In the top 20 languages, there is only one candidate that fits that bill: Lua. In 1993, a team of Brazilian computer scientists created a lightweight scripting language designed to be embedded in other tools. Lua has a simple syntax, is broadly supported, and has hovered in the top 20 languages for a long time. IDEs and text editors support it, and there is a wealth of tutorials and books teaching it. This is the type of existing ecosystem that we want to build upon.

Our work on Helm Lua is still very much in its proof-of-concept stage, but we're looking at a syntax that would be at once familiar and flexible. A comparison of old and new approaches highlights where we are thinking of going. Here's what the example alpine Pod template looks like for Helm 2:

apiVersion: v1
kind: Pod
metadata:
  name: {{ template "alpine.fullname" . }}
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "alpine.name" . }}
spec:
  restartPolicy: {{ .Values.restartPolicy }}
  containers:
  - name: waiter
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    command: ["/bin/sleep", "9000"]

This is an unsophisticated template, but it's possible to see at a glance all of the template directives (like {{ .Chart.Name }}) that are embedded.
Here's what that same Pod definition looks like in our draft Lua code:

function create_alpine_pod(_)
  local pod = {
    apiVersion = "v1",
    kind = "Pod",
    metadata = {
      name = alpine_fullname(_),
      labels = {
        heritage = _.Release.Service or "helm",
        release = _.Release.Name,
        chart = _.Chart.Name .. "-" .. _.Chart.Version,
        app = alpine_name(_)
      }
    },
    spec = {
      restartPolicy = _.Values.restartPolicy,
      containers = {
        {
          name = waiter,
          image = _.Values.image.repository .. ":" .. _.Values.image.tag,
          imagePullPolicy = _.Values.image.pullPolicy,
          command = { "/bin/sleep", "9000" }
        }
      }
    }
  }
  _.resources.add(pod)
end

We don't need to go through the example line-by-line to understand what is happening. We can see at a glance that the code defines a Pod. But instead of using a YAML string with embedded template directives, we're defining the Pod as a Lua object.

Let's Make That Code Shorter

Because we're working directly with objects (instead of manipulating a big glob of text), we can take full advantage of scripting facilities. This is most attractive in the way that it opens the possibility of building shareable libraries. We hope that by introducing (or enabling the community to produce) utility libraries, we could even reduce the above to something like this:

local pods = require("mylib.pods");

function create_alpine_pod(_)
  myPod = pods.new("alpine:3.7", _)
  myPod.spec.restartPolicy = "Always"
  -- set any other properties
  _.Manifests.add(myPod)
end

In this example, we are taking advantage of the fact that we can treat the resource definition as an object, setting properties with ease while keeping the code succinct and readable.

Templates… Lua… Why Not Both?

While templates are not great for all things, they do have advantages. Go templates represent a stable technology with an established user base and plenty of existing charts. Many chart developers report that they like writing templates. So we are not planning on removing template support.
Instead, we are going to allow templates and Lua to work together. Lua scripts will have access to the Helm templates both before and after they are rendered, giving advanced chart developers the opportunity to perform sophisticated transformations on existing charts, while still making it easy to build Helm charts with templates.

While we're excited about introducing Lua scripting, we're also removing a big piece of the Helm architecture.

Saying Goodbye to Tiller

During the Helm 2 development cycle, we introduced Tiller as part of our integration with Deployment Manager. Tiller played an important role for teams working on the same cluster: it made it possible for multiple different operators to interact with the same set of releases. But Tiller acted like a giant sudo server, granting a broad range of permissions to anyone with access to Tiller. And our default installation pattern was permissive in its configuration. DevOps and SREs therefore had to learn additional steps when installing Tiller into a multi-tenant cluster.

Furthermore, it turned out that with the advent of CRDs, we no longer needed to rely upon Tiller to maintain state or act as a central hub for Helm release information. We could simply store this information as separate records in Kubernetes. Tiller's primary goal could be accomplished without Tiller. So one of the first decisions we made regarding Helm 3 was to completely remove Tiller.

A Security Improvement

With Tiller gone, the security model for Helm is radically reduced. User authentication is delegated to Kubernetes. Authorization is, too. Helm permissions are evaluated as Kubernetes permissions (using the RBAC system), and cluster administrators can restrict Helm permissions at whatever granularity they see fit.

Releases, ReleaseVersions, and State Storage

Without Tiller to track the state of various releases in-cluster, we need a way for all Helm clients to cooperate on managing releases.
To do this, we are introducing two new records:

- Release: This will track a particular installation of a particular chart. So if we do a helm install my-wordpress stable/wordpress, this will create a release named my-wordpress and track the lifetime of this WordPress installation.
- ReleaseVersion: Each time we upgrade a chart, Helm will need to track what changed, and whether the change was successful. A ReleaseVersion is tied to a release, but records just the details of the upgrade, rollback, or deletion. So when we do helm upgrade my-wordpress stable/wordpress, the original Release object will stay the same, but a new child object, ReleaseVersion, will be created, packaging the details of this upgrade operation.

Releases and ReleaseVersions will be stored in the same namespace as the chart's created objects. With these features, teams of Helm users will still be able to track the records of Helm installs inside of the cluster, but without needing Tiller.

But Wait, There's More!

In this article, I have tried to highlight some of the big changes coming to Helm 3. But this list is by no means exhaustive. The full plan for Helm 3 includes features like improvements to the chart format, performance-oriented changes for chart repositories, and a new event system that chart developers can tie into. We're also taking a moment to perform what Eric Raymond calls code archeology, cleaning up the codebase and updating parts that have languished over the last three years.

With Helm's entrance into CNCF, we are excited not just about Helm 3, but also about Chart Museum, the brilliant Chart Testing tool, the official chart repo, and other projects under the Helm CNCF umbrella. We feel that good package management for Kubernetes is just as essential to the cloud-native ecosystem as good package managers are for Linux.
PTR records for RFC 1918 addresses in private zones

To perform reverse lookups with custom PTR records for RFC 1918 addresses, you must create a Cloud DNS private zone that is at least as specific as one of the following example zones. This is a consequence of the longest suffix matching described in Name resolution order. Example zones include the following:

- 10.in-addr.arpa.
- 168.192.in-addr.arpa.
- 16.172.in-addr.arpa., 17.172.in-addr.arpa., ... through 31.172.in-addr.arpa.

If you create a Cloud DNS private zone for RFC 1918 addresses, custom PTR records that you create for VMs in that zone are overridden by the PTR records that internal DNS creates automatically. This is because internal DNS PTR records are created in the previous list of more specific zones. For example, suppose you create a managed private zone for in-addr.arpa. with the following PTR record for a VM whose IP address is 10.1.1.1:

1.1.1.10.in-addr.arpa. -> test.example.domain

PTR queries for 1.1.1.10.in-addr.arpa. are answered by the more specific 10.in-addr.arpa. internal DNS zone. The PTR record in your Cloud DNS private zone for test.example.domain is ignored.

To override the automatically created internal DNS PTR records for VMs, make sure that you create your custom PTR records in a private zone that is at least as specific as one of the zones presented in this section. For example, if you create the following PTR record in a private zone for 10.in-addr.arpa., your record overrides the automatically generated one:

1.1.1.10.in-addr.arpa. -> test.example.domain.

If you only need to override a portion of a network block, you can create more specific reverse Cloud DNS private zones. For example, a private zone for 5.10.in-addr.arpa. can be used to hold PTR records that override any internal DNS PTR records that are automatically created for VMs with IP addresses in the 10.5.0.0/16 address range.

Supported DNS record types

Cloud DNS supports the following types of records.
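The reverse-lookup names in the PTR examples above follow a mechanical pattern: the IPv4 octets are reversed and suffixed with in-addr.arpa. The following Python sketch is illustrative only (it is not part of Cloud DNS) and shows both the name construction and why, for a 10.x.x.x address, a zone rooted at 10.in-addr.arpa. wins over one rooted at in-addr.arpa. under longest-suffix matching:

```python
def reverse_ptr_name(ip):
    """Build the PTR record name for an IPv4 address: reverse the
    octets and append the in-addr.arpa. suffix."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

name = reverse_ptr_name("10.1.1.1")
print(name)  # 1.1.1.10.in-addr.arpa.

# Longest-suffix matching: the zone whose origin matches the most of the
# query name answers, which is why the internal DNS 10.in-addr.arpa. zone
# overrides a custom zone rooted at in-addr.arpa.
zones = ["in-addr.arpa.", "10.in-addr.arpa."]
winner = max((z for z in zones if name.endswith(z)), key=len)
print(winner)  # 10.in-addr.arpa.
```

This is the same rule that makes a custom zone "at least as specific as" 10.in-addr.arpa. necessary before your own PTR records can take effect.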
To work with DNS records, see Managing records.

Wildcard DNS records

Wildcard records are supported for all record types, except for NS records.

Forwarding zones

Cloud DNS forwarding zones let you configure target name servers for specific private zones. Using a forwarding zone is one way to implement outbound DNS forwarding from your VPC network. A Cloud DNS forwarding zone is a special type of Cloud DNS private zone. Instead of creating records within the zone, you specify a set of forwarding targets. Each forwarding target is an IP address of a DNS server, located in your VPC network, or in an on-premises network connected to your VPC network by Cloud VPN or Cloud Interconnect.

Forwarding targets and routing methods

Cloud DNS supports three types of targets and offers standard or private methods for routing traffic to them. You can choose one of the two following routing methods when you add the forwarding target to the forwarding zone:

- Standard routing: Routes traffic through an authorized VPC network or over the internet, based on whether the forwarding target is an RFC 1918 IP address. If the forwarding target is an RFC 1918 IP address, Cloud DNS classifies the target as either a Type 1 or Type 2 target, and routes requests through an authorized VPC network. If the target is not an RFC 1918 IP address, Cloud DNS classifies the target as Type 3, and expects the target to be internet accessible.
- Private routing: Always routes traffic through an authorized VPC network, regardless of the target's IP address (RFC 1918 or not). Consequently, only Type 1 and Type 2 targets are supported.

To access a Type 1 or a Type 2 forwarding target, Cloud DNS uses routes in the authorized VPC network where the DNS client is located. These routes define a secure path to the forwarding target:

- To send traffic to Type 1 targets, Cloud DNS uses automatically created subnet routes. To reply, Type 1 targets use a special return route for Cloud DNS responses.
- To send traffic to Type 2 targets, Cloud DNS can use either custom dynamic routes or custom static routes, except for custom static routes with network tags. To reply, Type 2 forwarding targets use routes in your on-premises network.

For additional guidance about network requirements for Type 1 and Type 2 targets, see forwarding target network requirements.

Forwarding target selection order

Cloud DNS lets you configure a list of alternative name servers for an outbound server policy and a list of forwarding targets for a forwarding zone. In the case of multiple forwarding targets, Cloud DNS uses an internal algorithm to select a forwarding target. This algorithm ranks each forwarding target. To process a request, Cloud DNS first tries a DNS query by contacting the forwarding target with the highest ranking. If that server does not respond, Cloud DNS repeats the request to the next highest ranked forwarding target. If no forwarding targets reply, Cloud DNS synthesizes a SERVFAIL response.

The ranking algorithm is automatic, and the following factors increase the ranking of a forwarding target:

- The higher the number of successful DNS responses processed by the forwarding target. Successful DNS responses include NXDOMAIN responses.
- The lower the latency (round-trip time) for communicating with the forwarding target.

Use forwarding zones

VMs in a VPC network can use a Cloud DNS forwarding zone in the following cases:

- The VPC network has been authorized to use the Cloud DNS forwarding zone. You can authorize multiple VPC networks in the same project to use the forwarding zone.
- The guest operating system of each VM in the VPC network uses the VM's metadata server (169.254.169.254) as its name server.

Overlapping forwarding zones

Because Cloud DNS forwarding zones are a type of Cloud DNS managed private zone, you can create multiple zones that overlap.
VMs configured as described earlier resolve a record according to the Name resolution order, using the zone with the longest suffix. For more information, see Overlapping zones.

Caching and forwarding zones

Cloud DNS caches responses for queries sent to Cloud DNS forwarding zones. Cloud DNS maintains a cache of responses from reachable forwarding targets for the shorter of the following time spans:

- 60 seconds
- The duration of the record's time-to-live (TTL)

When all of the forwarding targets for a forwarding zone become unreachable, Cloud DNS maintains its cache of the previously requested records in that zone for the duration of each record's TTL.

When to use peering instead

Cloud DNS cannot use transitive routing to connect to a forwarding target. To illustrate an invalid configuration, consider the following scenario:

- You've used Cloud VPN or Cloud Interconnect to connect an on-premises network to a VPC network named vpc-net-a.
- You've used VPC Network Peering to connect VPC network vpc-net-a to vpc-net-b.
- You've configured vpc-net-a to export custom routes, and vpc-net-b to import them.
- You've created a forwarding zone whose forwarding targets are located in the on-premises network to which vpc-net-a is connected.
- You've authorized vpc-net-b to use that forwarding zone.

Resolving a record in a zone served by the forwarding targets fails in this scenario, even though there is connectivity from vpc-net-b to your on-premises network. To demonstrate this failure, perform the following tests from a VM in vpc-net-b:

- Query the VM's metadata server (169.254.169.254) for a record defined in the forwarding zone. This query fails (expectedly) because Cloud DNS does not support transitive routing to forwarding targets. To use a forwarding zone, a VM must be configured to use its metadata server.
- Query the forwarding target directly for that same record. Although Cloud DNS does not use this path, this query demonstrates that transitive connectivity succeeds.
You can use a Cloud DNS peering zone to fix this invalid scenario:

- Create a Cloud DNS peering zone authorized for vpc-net-b that targets vpc-net-a.
- Create a forwarding zone authorized for vpc-net-a whose forwarding targets are on-premises name servers.

You can perform these steps in any order. After completing these steps, Compute Engine instances in both vpc-net-a and vpc-net-b can query the on-premises forwarding targets.

DNS peering

DNS peering lets you send requests for records that come from one zone's namespace to another VPC network. For example, a SaaS provider can give a SaaS customer access to DNS records it manages.

To provide DNS peering, you must create a Cloud DNS peering zone and configure it to perform DNS lookups in a VPC network where the records for that zone's namespace are available. The VPC network where the DNS peering zone performs lookups is called the DNS producer network. To use DNS peering, you must authorize a network to use a peering zone. The VPC network authorized to use the peering zone is called the DNS consumer network. After Google Cloud resources in the DNS consumer network are authorized, they can perform lookups for records in the peering zone's namespace as if they were in the DNS producer network.

Lookups for records in the peering zone's namespace follow the DNS producer network's name resolution order. Therefore, Google Cloud resources in the DNS consumer network can look up records in the zone's namespace from the following sources available in the DNS producer network:

- Cloud DNS managed private zones authorized for use by the DNS producer network.
- Cloud DNS managed forwarding zones authorized for use by the DNS producer network.
- Compute Engine internal DNS names in the DNS producer network.
- An alternative name server, if an outbound server policy has been configured in the DNS producer network.
DNS peering limitations and key points

Keep the following in mind when configuring DNS peering:

- DNS peering is a one-way relationship. It allows Google Cloud resources in the DNS consumer network to look up records in the peering zone's namespace as if the Google Cloud resources were in the DNS producer network.
- The DNS producer and consumer networks must be VPC networks.
- While DNS producer and consumer networks are typically part of the same organization, cross-organizational DNS peering is also supported.
- DNS peering and VPC Network Peering are different services. DNS peering can be used with VPC Network Peering, but VPC Network Peering is not required for DNS peering.
- Transitive DNS peering is supported, but only through a single transitive hop. In other words, no more than three VPC networks (with the network in the middle being the transitive hop) can be involved. For example, you can create a peering zone in vpc-net-a that targets vpc-net-b, and then create a peering zone in vpc-net-b that targets vpc-net-c.
- If you are using DNS peering to target a forwarding zone, the target VPC network with the forwarding zone must contain a VM, a VLAN attachment, or a Cloud VPN tunnel located in the same region as the source VM that uses the DNS peering zone. For details about this limitation, see Forwarding queries from VMs in a consumer VPC network to a producer VPC network not working.

To create a peering zone, you must have the DNS Peer IAM role for the project that contains the DNS producer network.

Overlapping zones

Two zones overlap with each other when the origin domain name of one zone is either identical to or is a subdomain of the origin of the other zone. For example:

- A zone for gcp.example.com and another zone for gcp.example.com overlap because the domain names are identical.
- A zone for dev.gcp.example.com and a zone for gcp.example.com overlap because dev.gcp.example.com is a subdomain of gcp.example.com.
Rules for overlapping zones

Cloud DNS enforces the following rules for overlapping zones:

- Overlapping public zones are not allowed on the same Cloud DNS name servers. When you create overlapping zones, Cloud DNS attempts to put them on different name servers. If that is not possible, Cloud DNS fails to create the overlapping zone.
- A private zone can overlap with any public zone.
- Private zones scoped for different VPC networks can overlap with each other. For example, two VPC networks can each have a database VM named database.gcp.example.com in a zone gcp.example.com. Queries for database.gcp.example.com receive different answers according to the zone records defined in the zone authorized for each VPC network.
- Two private zones that have been authorized to be accessible from the same VPC network cannot have identical origins unless one zone is a subdomain of the other. The metadata server uses longest suffix matching to determine which origin to query for records in a given zone.

Query resolution examples

Google Cloud resolves Cloud DNS zones as described in Name resolution order. When determining the zone to query for a given record, Cloud DNS tries to find a zone that matches as much of the requested record as possible (longest suffix matching). Unless you have specified an alternative name server in an outbound server policy, Google Cloud first attempts to find a record in a private zone (or forwarding zone or peering zone) authorized for your VPC network before it looks for the record in a public zone.

The following examples illustrate the order that the metadata server uses when querying DNS records. For each of these examples, suppose that you have created two private zones, gcp.example.com and dev.gcp.example.com, and authorized access to them from the same VPC network.
Google Cloud handles the DNS queries from VMs in a VPC network in the following way:

- The metadata server uses public name servers to resolve a record for myapp.example.com. (note the trailing dot) because there is no private zone for example.com that has been authorized for the VPC network. Use dig from a Compute Engine VM to query the VM's default name server: dig myapp.example.com. For details, see Query for the DNS name using the metadata server.
- The metadata server resolves the record myapp.gcp.example.com using the authorized private zone gcp.example.com because gcp.example.com is the longest common suffix between the requested record name and the available authorized private zones. NXDOMAIN is returned if there's no record for myapp.gcp.example.com defined in the gcp.example.com private zone, even if there is a record for myapp.gcp.example.com defined in a public zone.
- Similarly, queries for myapp.dev.gcp.example.com are resolved according to records in the authorized private zone dev.gcp.example.com. NXDOMAIN is returned if there is no record for myapp.dev.gcp.example.com in the dev.gcp.example.com zone, even if there is a record for myapp.dev.gcp.example.com in another private or public zone.
- Queries for myapp.prod.gcp.example.com are resolved according to records in the private zone gcp.example.com because gcp.example.com is the longest common suffix between the requested DNS record and the available private zones.

Split horizon DNS example

You can use a combination of public and private zones in a split horizon DNS configuration. Private zones enable you to define different responses to a query for the same record when the query originates from a VM within an authorized VPC network. Split horizon DNS is useful whenever you need to provide different records for the same DNS queries depending on the originating VPC network.
Consider the following split horizon example:

- You've created a public zone, gcp.example.com, and you've configured its registrar to use Cloud DNS name servers.
- You've created a private zone, gcp.example.com, and you've authorized your VPC network to access this zone.

In the private zone, you've created a single record:

- foo.gcp.example.com: 10.128.1.35

In the public zone, you've created two records:

- foo.gcp.example.com: 104.198.6.142
- bar.gcp.example.com: 104.198.7.145

The following queries are resolved as described:

- A query for foo.gcp.example.com from a VM in your VPC network returns 10.128.1.35.
- A query for foo.gcp.example.com from the internet returns 104.198.6.142.
- A query for bar.gcp.example.com from a VM in your VPC network returns an NXDOMAIN error because there's no record for bar.gcp.example.com in the private zone gcp.example.com.
- A query for bar.gcp.example.com from the internet returns 104.198.7.145.
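The two rules at work in this example — longest-suffix zone selection, and private zones answering authoritatively for VPC clients — can be simulated with a few lines of Python. This is an illustrative sketch of the documented behavior, not Cloud DNS code; the record data mirrors the example above:

```python
# Example data from the split-horizon scenario above.
private_zones = {
    "gcp.example.com": {"foo.gcp.example.com": "10.128.1.35"},
}
public_records = {
    "foo.gcp.example.com": "104.198.6.142",
    "bar.gcp.example.com": "104.198.7.145",
}

def pick_zone(name, zones):
    """Longest-suffix match: the authorized zone matching the most of
    the query name wins; None means no private zone applies."""
    matches = [z for z in zones if name == z or name.endswith("." + z)]
    return max(matches, key=len) if matches else None

def resolve(name, from_vpc):
    if from_vpc:
        zone = pick_zone(name, private_zones)
        if zone is not None:
            # A matching private zone answers authoritatively: a missing
            # record yields NXDOMAIN rather than falling back to public DNS.
            return private_zones[zone].get(name, "NXDOMAIN")
    return public_records.get(name, "NXDOMAIN")

print(resolve("foo.gcp.example.com", from_vpc=True))   # 10.128.1.35
print(resolve("foo.gcp.example.com", from_vpc=False))  # 104.198.6.142
print(resolve("bar.gcp.example.com", from_vpc=True))   # NXDOMAIN
print(resolve("bar.gcp.example.com", from_vpc=False))  # 104.198.7.145
```

Note how the bar.gcp.example.com query from the VPC returns NXDOMAIN even though a public record exists, matching the third bullet in the example.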
Hi people. I'm using winxp with VC++ 6.0 and working on an MFC modeless dialog. I'm trying to load the music from the resource with FSOUND_Sample_Load. I searched the forum and found the solution that brett posted in another thread previously. But I still can't seem to get it right. Here's the code that I'm having trouble with. 😕

IDR_SOUND1 // a midi file
IDR_SOUND2 // an mp3 file

FSOUND_SAMPLE *play1;
FSOUND_SAMPLE *play2;

int CBgnMusic::OnBgnMusicInit()
{
    if (FSOUND_Init(44100, 32, 0) == FALSE)
        return (0);
    else
        return (1);
}

void CBgnMusic::LoadToMem()
{
    HRSRC rec;
    HGLOBAL handle;
    void *data;
    DWORD length;

    rec = FindResource(GetModuleHandle(NULL), MAKEINTRESOURCE(IDR_SOUND1), _T("Sound"));
    handle = LoadResource(NULL, rec);
    data = LockResource(handle);
    length = SizeofResource(NULL, rec);
}

void CBgnMusic::OnBgnMusic1()
{
    play1 = FSOUND_Sample_Load(FSOUND_FREE, (const char *)data, FSOUND_LOADMEMORY, length);
    m_debugBgnMusic.output << FSOUND_GetError() << endl;
    FSOUND_PlaySound(FSOUND_FREE, play1);
}

I have checked that FSOUND_Init succeeds, and rec, handle, data, and length all have valid values, like:

rec    0x004072F8
handle 0x0040DF50
data   0x0040DF50
length 22950

and these change if I switch to IDR_SOUND2. The problem is that play1 returns 0x00000000, and I get error 11 (FMOD_ERR_FILE_FORMAT) when I use FSOUND_GetError() to check immediately after the play1 line. I used FSOUND_STREAM to play the midi file before I converted to FSOUND_SAMPLE, and it worked OK. Does anyone know what I did wrong? Thanks.

edited: I placed fmod.dll into the release folder and the root folder, and error 11 changed to error 10 (FMOD_ERR_FILE_NOTFOUND). 😕 When I was using FSOUND_STREAM before, fmod.dll had always been outside of the release folder. But anyway, it still returns an error.
I'm displaying AdMob banner ads in my game and I noticed that the AdView somehow messes up the input from the gamepad. For example, buttons like X, A, L1, and R1 stop working: they don't trigger onKeyDown/onKeyUp, although keys such as KEYCODE_BUTTON_Y and KEYCODE_BUTTON_MENU still come through. Calling setFocusable(false) on the AdView didn't help.

Ok, seems like I found the solution. If I override dispatchKeyEvent and cancel dispatching onKey events to the AdView, then the gamepad works fine. Not sure if it's OK to do this, but it works.

public class MyAdView extends AdView {
    public MyAdView(Activity activity, AdSize adSize, String adUnitId) {
        super(activity, adSize, adUnitId);
    }

    @Override
    public boolean dispatchKeyEvent(KeyEvent event) {
        return false;
    }
}